Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A
1995-06-01
PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Rixen, Daniel
1996-01-01
We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.
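For orientation, both families named in the abstract operate on the interface Schur complement of the subdomain-partitioned system. A minimal statement of that standard reduction (background notation only, not the paper's preconditioner), with I and Γ indexing interior and interface unknowns:

```latex
\begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
\begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix}
=
\begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix},
\qquad
S\,u_\Gamma = f_\Gamma - A_{\Gamma I}A_{II}^{-1}f_I,
\qquad
S = A_{\Gamma\Gamma} - A_{\Gamma I}A_{II}^{-1}A_{I\Gamma}.
```

The Balancing method preconditions S directly (primal), while FETI iterates on the dual problem posed in terms of interface Lagrange multipliers; the preconditioner proposed in the paper applies to both.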
"Fast" Is Not "Real-Time": Designing Effective Real-Time AI Systems
NASA Astrophysics Data System (ADS)
O'Reilly, Cindy A.; Cromarty, Andrew S.
1985-04-01
Realistic practical problem domains (such as robotics, process control, and certain kinds of signal processing) stand to benefit greatly from the application of artificial intelligence techniques. These problem domains are of special interest because they are typified by complex dynamic environments in which the ability to select and initiate a proper response to environmental events in real time is a strict prerequisite to effective environmental interaction. Artificial intelligence systems developed to date have been sheltered from this real-time requirement, however, largely by virtue of their use of simplified problem domains or problem representations. The plethora of colloquial and (in general) mutually inconsistent interpretations of the term "real-time" employed by workers in each of these domains further exacerbates the difficulties in effectively applying state-of-the-art problem-solving techniques to time-critical problems. Indeed, the intellectual waters are by now sufficiently muddied that the pursuit of a rigorous treatment of intelligent real-time performance mandates the redevelopment of proper problem perspective on what "real-time" means, starting from first principles. We present a simple but nonetheless formal definition of real-time performance. We then undertake an analysis of both conventional techniques and AI technology with respect to their ability to meet substantive real-time performance criteria. This analysis provides a basis for specification of problem-independent design requirements for systems that would claim real-time performance. Finally, we discuss the application of these design principles to a pragmatic problem in real-time signal understanding.
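The abstract does not reproduce the paper's definition; one common formalization in the same spirit (an illustration, not the authors' exact statement) requires a bounded worst-case response latency rather than mere speed:

```latex
\forall e \in E:\quad t_{\mathrm{resp}}(e) - t_{\mathrm{occur}}(e) \;\le\; D(e),
```

that is, a system is real-time with respect to an event set E only if every event e is handled within its deadline D(e); a merely "fast" system bounds the average case, not the worst case.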
Domain decomposition algorithms and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chan, Tony F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, motivated partly by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated. Some examples are two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface system, are described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
The application of the Routh approximation method to turbofan engine models
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1977-01-01
The Routh approximation technique is applied in the frequency domain to a 16th-order state variable turbofan engine model. The results obtained motivate the extension of the frequency domain formulation of the Routh method to the time domain to handle the state variable formulation directly. The time domain formulation is derived, and a characterization, which specifies all possible Routh similarity transformations, is given. The characterization is computed by the solution of two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
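As a rough illustration of the quasi-Newton ingredient only (not the authors' solver, which couples it with a balancing domain decomposition preconditioner), a bare BFGS loop updates an inverse-Hessian estimate from gradient differences alone, so no inner linearized solve is needed per nonlinear step; the line search is omitted for brevity:

```python
import numpy as np

def quasi_newton(grad, x, H0, tol=1e-8, max_iter=50):
    """Minimal BFGS iteration: the inverse-Hessian estimate H is built from
    gradient differences only, avoiding the inner linear solve of each
    Newton-Raphson step (the 'double loop' the abstract refers to)."""
    H, g = H0.copy(), grad(x)
    for _ in range(max_iter):
        p = -H @ g                     # quasi-Newton search direction
        x_new = x + p                  # (a line search would go here)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        rho = 1.0 / (y @ s)
        I = np.eye(len(x))
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)     # BFGS inverse-Hessian update
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# toy usage: minimize 0.5*x'Ax - b'x, i.e. solve Ax = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(quasi_newton(lambda x: A @ x - b, np.zeros(2), np.eye(2)))
```

In the paper's setting, the role of H0 would be played by the preconditioned operator; the identity is used here purely for illustration.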
Generalized vector calculus on convex domain
NASA Astrophysics Data System (ADS)
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation
NASA Astrophysics Data System (ADS)
Ventura, Jacopo; Romano, Marcello; Walter, Ulrich
2015-05-01
This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.
Inferring Domain-Domain Interactions from Protein-Protein Interactions with Formal Concept Analysis
Khor, Susan
2014-01-01
Identifying reliable domain-domain interactions will increase our ability to predict novel protein-protein interactions, to unravel interactions in protein complexes, and thus gain more information about the function and behavior of genes. One of the challenges of identifying reliable domain-domain interactions is domain promiscuity. Promiscuous domains are domains that can occur in many domain architectures and are therefore found in many proteins. This becomes a problem for a method where the score of a domain-pair is the ratio between observed and expected frequencies because the protein-protein interaction network is sparse. As such, many protein-pairs will be non-interacting and domain-pairs with promiscuous domains will be penalized. This domain promiscuity challenge to the problem of inferring reliable domain-domain interactions from protein-protein interactions has been recognized, and a number of work-arounds have been proposed. This paper reports on an application of Formal Concept Analysis to this problem. It is found that the relationship between formal concepts provides a natural way for rare domains to elevate the rank of promiscuous domain-pairs and enrich highly ranked domain-pairs with reliable domain-domain interactions. This piggybacking of promiscuous domain-pairs onto less promiscuous domain-pairs is possible only with concept lattices whose attribute-labels are not reduced and is enhanced by the presence of proteins that comprise both promiscuous and rare domains. PMID:24586450
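The observed-over-expected score whose bias the paper addresses can be sketched as follows; the toy data and the independence model for the expected count are assumptions for illustration:

```python
from collections import Counter
from itertools import product

def dd_scores(ppis, protein_domains):
    """Score each domain pair by observed/expected co-occurrence across
    interacting protein pairs -- the baseline that penalizes promiscuous
    domains, which the paper mitigates with Formal Concept Analysis."""
    observed, domain_freq = Counter(), Counter()
    for p1, p2 in ppis:
        for d1, d2 in product(protein_domains[p1], protein_domains[p2]):
            observed[frozenset((d1, d2))] += 1
    for doms in protein_domains.values():
        domain_freq.update(doms)
    n = sum(domain_freq.values())
    scores = {}
    for pair, obs in observed.items():
        d1, d2 = tuple(pair) if len(pair) == 2 else (next(iter(pair)),) * 2
        expected = domain_freq[d1] * domain_freq[d2] / n  # crude independence model
        scores[pair] = obs / expected
    return scores

# hypothetical toy data
ppis = [("P1", "P2"), ("P1", "P3")]
protein_domains = {"P1": {"kinase"}, "P2": {"SH2"}, "P3": {"SH2", "PDZ"}}
print(dd_scores(ppis, protein_domains))
```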
NASA Technical Reports Server (NTRS)
Mittra, R.; Ko, W. L.; Rahmat-Samii, Y.
1979-01-01
This paper presents a brief review of some recent developments in the use of the spectral-domain approach for deriving high-frequency solutions to electromagnetic scattering and radiation problems. The spectral approach is not only useful for interpreting the well-known Keller formulas based on the geometrical theory of diffraction (GTD); it can also be employed for verifying the accuracy of GTD and other asymptotic solutions and for systematically improving the results when such improvements are needed. The problem of plane-wave diffraction by a finite screen or a strip is presented as an example of the application of the spectral-domain approach.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
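A minimal sketch of the forward block Gauss-Seidel sweep on such a block tridiagonal system (the stationary iteration only; its use as a Krylov preconditioner and the DTOC block structure are not reproduced here):

```python
import numpy as np

def block_gauss_seidel(A_diag, A_low, A_up, b, sweeps=10):
    """Forward block Gauss-Seidel for a block tridiagonal system: A_diag[i]
    couples unknowns within one time-subinterval, A_low/A_up couple
    neighbouring subintervals (hypothetical argument layout)."""
    N = len(A_diag)
    x = [np.zeros_like(bi) for bi in b]
    for _ in range(sweeps):
        for i in range(N):
            r = b[i].copy()
            if i > 0:
                r -= A_low[i - 1] @ x[i - 1]      # updated left neighbour
            if i < N - 1:
                r -= A_up[i] @ x[i + 1]           # old right neighbour
            x[i] = np.linalg.solve(A_diag[i], r)  # local subinterval solve
    return x
```

One such sweep per Krylov iteration is exactly the preconditioning use the abstract describes; a single forward sweep also corresponds to the instantaneous-control interpretation noted at the end.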
Domain Anomaly Detection in Machine Perception: A System Architecture and Taxonomy.
Kittler, Josef; Christmas, William; de Campos, Teófilo; Windridge, David; Yan, Fei; Illingworth, John; Osman, Magda
2014-05-01
We address the problem of anomaly detection in machine perception. The concept of domain anomaly is introduced as distinct from the conventional notion of anomaly used in the literature. We propose a unified framework for anomaly detection which exposes the multifaceted nature of anomalies and suggest effective mechanisms for identifying and distinguishing each facet as instruments for domain anomaly detection. The framework draws on the Bayesian probabilistic reasoning apparatus which clearly defines concepts such as outlier, noise, distribution drift, novelty detection (object, object primitive), rare events, and unexpected events. Based on these concepts we provide a taxonomy of domain anomaly events. One of the mechanisms helping to pinpoint the nature of anomaly is based on detecting incongruence between contextual and noncontextual sensor(y) data interpretation. The proposed methodology has wide applicability. It underpins in a unified way the anomaly detection applications found in the literature. To illustrate some of its distinguishing features, here the domain anomaly detection methodology is applied to the problem of anomaly detection for a video annotation system.
A systematic process for persuasive mobile healthcare applications
NASA Astrophysics Data System (ADS)
Qasim, Mustafa Moosa; Ahmad, Mazida; Omar, Mazni; Zulkifli, Abdul Nasir; Bakar, Juliana Aida Abu
2017-10-01
In recent years there has been an increased focus on the persuasive design of mobile applications in the healthcare domain. However, most studies have not followed systematic processes when analyzing and designing persuasive technology applications, and they have also failed to provide some of the information needed to design such applications. Adding to this is a need for more guidance on how persuasive guidelines can be implemented, which in turn calls for a way to transform persuasive components into software requirements and functionalities. Therefore, this paper proposes a general systematic process, usable independently of the problem domain, for analyzing customers' significant requirements. One such domain is the obesity problem among Malaysian children, for which the most significant treatment is parental involvement. To this end, this paper applies the systematic process to help parents monitor their children's obesity status.
A Comparison of Ffowcs Williams-Hawkings Solvers for Airframe Noise Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.
2002-01-01
This paper presents a comparison between two implementations of the Ffowcs Williams and Hawkings equation for airframe noise applications. Airframe systems are generally moving at constant speed and not rotating, so these conditions are used in the current investigation. Efficient and easily implemented forms of the equations applicable to subsonic, rectilinear motion of all acoustic sources are used. The assumptions allow the derivation of a simple form of the equations in the frequency-domain, and the time-domain method uses the restrictions on the motion to reduce the work required to find the emission time. The comparison between the frequency domain method and the retarded time formulation reveals some of the advantages of the different approaches. Both methods are still capable of predicting the far-field noise from nonlinear near-field flow quantities. Because of the large input data sets and potentially large numbers of observer positions of interest in three-dimensional problems, both codes utilize the message passing interface to divide the problem among different processors. Example problems are used to demonstrate the usefulness and efficiency of the two schemes.
EUROPA2: Plan Database Services for Planning and Scheduling Applications
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Frank, Jeremy; Jonsson, Ari; McGann, Conor
2004-01-01
NASA missions require solving a wide variety of planning and scheduling problems with temporal constraints; simple resources such as robotic arms, communications antennae and cameras; complex replenishable resources such as memory, power and fuel; and complex constraints on geometry, heat and lighting angles. Planners and schedulers that solve these problems are used in ground tools as well as onboard systems. The diversity of planning problems and applications of planners and schedulers precludes a one-size-fits-all solution. However, many of the underlying technologies are common across planning domains and applications. We describe CAPR, a formalism for planning that is general enough to cover a wide variety of planning and scheduling domains of interest to NASA. We then describe EUROPA2, a software framework implementing CAPR. EUROPA2 provides efficient, customizable Plan Database Services that enable the integration of CAPR into a wide variety of applications. We describe the design of EUROPA2 from the perspectives of modeling, customization, and application integration for different classes of NASA missions.
Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft
NASA Astrophysics Data System (ADS)
Tsuchiya, Takeshi
This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
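The first subproblem is solved with a shooting method; the sketch below shows the idea on a deliberately simple boundary-value problem (the toy ODE and boundary data are assumptions, not the gliding-distance problem): integrate an initial-value problem and root-find on the unknown initial condition until the terminal condition is met.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP: y'' = -y, y(0) = 0, y(pi/2) = 1; unknown initial slope s = y'(0).
def terminal_miss(s):
    sol = solve_ivp(lambda t, z: [z[1], -z[0]], (0.0, np.pi / 2),
                    [0.0, s], rtol=1e-9)
    return sol.y[0, -1] - 1.0          # residual at the far boundary

s_star = brentq(terminal_miss, 0.0, 2.0)  # root of the miss distance
print(s_star)                             # ~1.0, since y = sin(t)
```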
A spring system method for a mesh generation problem
NASA Astrophysics Data System (ADS)
Romanov, A.
2018-04-01
A new direct method for 2D mesh generation on a simply-connected domain using a spring system is presented. The method can be used together with other methods to modify a mesh for growing solid problems. Advantages and disadvantages of the method are shown. Different types of boundary conditions are explored. The results of modelling for different target domains are given. Some applications to composite materials are studied.
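In the spirit of the abstract (whose method details are not given there), a minimal spring-system relaxation moves each interior node under the forces of its incident edges, treated as zero-rest-length unit springs, while boundary nodes stay pinned:

```python
import numpy as np

def spring_relax(nodes, edges, fixed, iters=200, step=0.2):
    """Relax interior mesh nodes as if edges were unit-stiffness springs of
    zero rest length; 'fixed' lists the pinned boundary node indices.
    A generic sketch, not the paper's specific formulation."""
    pos = nodes.copy()
    for _ in range(iters):
        force = np.zeros_like(pos)
        for i, j in edges:
            d = pos[j] - pos[i]        # spring force along the edge
            force[i] += d
            force[j] -= d
        force[fixed] = 0.0             # boundary nodes do not move
        pos += step * force
    return pos
```

The equilibrium of such a system coincides with Laplacian smoothing, which is why spring analogies are a common device for mesh generation and mesh modification.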
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-05-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
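A sketch of the fixed-point linearization the abstract describes, in the permeability form (notation assumed, not quoted from the paper):

```latex
B \;=\; \mu_{\mathrm{FP}}\,H \;+\; R(H), \qquad \mu_{\mathrm{FP}} = \mu_{\mathrm{FP}}(\mathbf{x})\ \text{time-independent},
```

where the nonlinear remainder R(H), evaluated from the previous iterate, is moved to the right-hand side. Within each fixed-point step the field equations are linear with time-independent coefficients, so the continuous or discrete harmonics decouple exactly as stated.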
Transfer learning for visual categorization: a survey.
Shao, Ling; Zhu, Fan; Li, Xuelong
2015-05-01
Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human-labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring it for use in target tasks. In recent years, with transfer learning applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drift in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.
The C^r dependence problem of eigenvalues of the Laplace operator on domains in the plane
NASA Astrophysics Data System (ADS)
Haddad, Julian; Montenegro, Marcos
2018-03-01
The C^r dependence problem of multiple Dirichlet eigenvalues on domains is discussed for elliptic operators by regarding C^{r+1}-smooth one-parameter families of C^1 perturbations of domains in R^n. As applications of our main theorem (Theorem 1), we provide a fairly complete description for all eigenvalues of the Laplace operator on disks and squares in R^2 and also for its second eigenvalue on balls in R^n for any n ≥ 3. The central tool used in our proof is a degenerate implicit function theorem on Banach spaces (Theorem 2) of independent interest.
Stabilization and control of distributed systems with time-dependent spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1990-01-01
This paper considers the problem of the stabilization and control of distributed systems with time-dependent spatial domains. The evolution of the spatial domains with time is described by a finite-dimensional system of ordinary differential equations, while the distributed systems are described by first-order or second-order linear evolution equations defined on appropriate Hilbert spaces. First, results pertaining to the existence and uniqueness of solutions of the system equations are presented. Then, various optimal control and stabilization problems are considered. The paper concludes with some examples which illustrate the application of the main results.
Physics-Aware Informative Coverage Planning for Autonomous Vehicles
2014-06-01
environment and find the optimal path connecting fixed nodes, which is equivalent to solving the Traveling Salesman Problem (TSP). While TSP is an NP...intended for application to USV harbor patrolling, it is applicable to many different domains. The problem of traveling over an area and gathering...environment. There are many applications that need persistent monitoring of a given area, requiring repeated travel over the area to
UXDs-Driven Transferring Method from TRIZ Solution to Domain Solution
NASA Astrophysics Data System (ADS)
Ma, Lihui; Cao, Guozhong; Chang, Yunxia; Wei, Zihui; Ma, Kai
The translation process from TRIZ solutions to domain solutions is an analogy-based process. TRIZ solutions, such as the 40 inventive principles and related cases, are medium-solutions for domain problems. Unexpected discoveries (UXDs) are the key factors that trigger designers to generate new ideas for domain solutions. The algorithm of UXD resolving based on Means-Ends Analysis (MEA) is studied, and a UXDs-driven transferring method from TRIZ solution to domain solution is formed. A case study shows the application of the process.
Using hybrid expert system approaches for engineering applications
NASA Technical Reports Server (NTRS)
Allen, R. H.; Boarnet, M. G.; Culbert, C. J.; Savely, R. T.
1987-01-01
In this paper, the use of hybrid expert system shells and hybrid (i.e., algorithmic and heuristic) approaches for solving engineering problems is reported. Aspects of various engineering problem domains are reviewed for a number of examples with specific applications made to recently developed prototype expert systems. Based on this prototyping experience, critical evaluations of and comparisons between commercially available tools, and some research tools, in the United States and Australia, and their underlying problem-solving paradigms are made. Characteristics of the implementation tool and the engineering domain are compared and practical software engineering issues are discussed with respect to hybrid tools and approaches. Finally, guidelines are offered with the hope that expert system development will be less time consuming, more effective, and more cost-effective than it has been in the past.
The Process of Solving Complex Problems
ERIC Educational Resources Information Center
Fischer, Andreas; Greiff, Samuel; Funke, Joachim
2012-01-01
This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application…
Shin, Yoonseok
2015-01-01
Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision-making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.
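A minimal sketch of such a boosting-regression-tree estimator using scikit-learn's GradientBoostingRegressor; the synthetic data below merely stands in for the 234 project records, whose attributes are not given in the abstract:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: X holds early-stage attributes (e.g. gross
# floor area, storeys), y the final construction cost.
rng = np.random.default_rng(0)
X = rng.uniform(size=(234, 5))
y = X @ np.array([3.0, 1.0, 0.5, 2.0, 0.2]) + 0.1 * rng.normal(size=234)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(X_tr, y_tr)
print(mean_absolute_percentage_error(y_te, brt.predict(X_te)))
print(brt.feature_importances_)   # the information behind an importance plot
```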
Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.
Musen, M. A.
1998-01-01
When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community. PMID:9929181
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order, state variable model of the F100 engine and to a 43rd-order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
NASA Technical Reports Server (NTRS)
Smith, Ralph C.
1994-01-01
A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural-acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two-dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural-acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.
Problem-Based Learning in Formal and Informal Learning Environments
ERIC Educational Resources Information Center
Shimic, Goran; Jevremovic, Aleksandar
2012-01-01
Problem-based learning (PBL) is a student-centered instructional strategy in which students solve problems and reflect on their experiences. Different domains need different approaches in the design of PBL systems. Therefore, we present one case study in this article: A Java Programming PBL. The application is developed as an additional module for…
Adaptation of interoperability standards for cross domain usage
NASA Astrophysics Data System (ADS)
Essendorfer, B.; Kerth, Christian; Zaschke, Christian
2017-05-01
As globalization affects most aspects of modern life, challenges of quick and flexible data sharing apply to many different domains. To protect a nation's security, for example, one has to look well beyond borders and understand economic, ecological, cultural, and historical influences. Most of the time, information is produced and stored digitally, and one of the biggest challenges is to extract relevant, readable information applicable to a specific problem out of a large data stock at the right time. These challenges of enabling data sharing across national, organizational, and system borders are known to other domains (e.g., ecology or medicine) as well. Solutions such as specific standards have been worked out for specific problems. The question is: what can the different domains learn from each other, and do we have solutions for when we need to interlink the information produced in these domains? A known problem is making civil security data available to the military domain and vice versa in collaborative operations. But what happens if an environmental crisis leads to the need to quickly cooperate with civil or military security in order to save lives? How can we achieve interoperability in such complex scenarios? The paper introduces an approach to adapting standards from one domain to another and outlines problems that have to be overcome and limitations that may apply.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
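A generic sketch of the alternating scheme described above: solve for ranking scores under a weighted combination of graph Laplacians, then update the graph weights from the per-graph smoothness of the current scores. The entropic weight update is an assumption for illustration, not necessarily MultiG-Rank's exact rule:

```python
import numpy as np

def multi_graph_rank(Ls, y, mu=1.0, lam=1.0, iters=10):
    """Alternate between ranking scores f (for fixed graph weights) and
    convex weights w over the candidate graph Laplacians Ls (for fixed f)."""
    n, m = Ls[0].shape[0], len(Ls)
    w = np.full(m, 1.0 / m)
    f = y.astype(float).copy()
    for _ in range(iters):
        L = sum(wi * Li for wi, Li in zip(w, Ls))
        f = np.linalg.solve(L + mu * np.eye(n), mu * y)  # ranking-score step
        smooth = np.array([f @ Li @ f for Li in Ls])     # per-graph roughness
        w = np.exp(-smooth / lam)
        w /= w.sum()                                     # stay on the simplex
    return f, w
```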
Sheikhtaheri, Abbas; Sadoughi, Farahnaz; Hashemi Dehaghi, Zahra
2014-09-01
The complexity of clinical decisions justifies the use of information systems such as artificial intelligence (e.g., expert systems and neural networks) to achieve better decisions; however, the application of these systems in the medical domain faces some challenges. We aimed to review the applications of these systems in the medical domain and to discuss such challenges. Following a brief introduction of expert systems and neural networks with a few examples, the challenges of these systems in the medical domain are discussed. We found that the applications of expert systems and artificial neural networks have increased in the medical domain. These systems have shown many advantages, such as utilization of experts' knowledge, acquisition of rare knowledge, more time for assessment of the decision, more consistent decisions, and a shorter decision-making process. In spite of all these advantages, there are challenges ahead of developing and using such systems, including maintenance, the required experts, entering patients' data into the system, problems of knowledge acquisition, problems in modeling medical knowledge, evaluation and validation of system performance, wrong recommendations and responsibility, the limited domains of such systems, and the necessity of integrating such systems into routine workflows. We concluded that expert systems and neural networks can be successfully used in medicine; however, there are many concerns and questions to be answered through future studies and discussions.
Big Data: An Opportunity for Collaboration with Computer Scientists on Data-Driven Science
NASA Astrophysics Data System (ADS)
Baru, C.
2014-12-01
Big data technologies are evolving rapidly, driven by the need to manage ever-increasing amounts of historical data; process relentless streams of human- and machine-generated data; and integrate data of heterogeneous structure from extremely heterogeneous sources of information. Big data is inherently an application-driven problem. Developing the right technologies requires an understanding of the application domain. An intriguing aspect of this phenomenon, though, is that the availability of the data itself enables new applications not previously conceived of! In this talk, we will discuss how the big data phenomenon creates an imperative for collaboration among domain scientists (in this case, geoscientists) and computer scientists. Domain scientists provide the application requirements as well as insights about the data involved, while computer scientists help assess whether problems can be solved with currently available technologies or require adaptation of existing technologies and/or development of new technologies. The synergy can create vibrant collaborations, potentially leading to new science insights as well as development of new data technologies and systems. The area of interface between geosciences and computer science, also referred to as geoinformatics, is, we believe, a fertile area for interdisciplinary research.
NASA Technical Reports Server (NTRS)
Abolhassani, J. S.; Tiwari, S. N.
1983-01-01
The feasibility of the method of lines for solutions of physical problems requiring nonuniform grid distributions is investigated. To attain this, it is also necessary to investigate the stiffness characteristics of the pertinent equations. For specific applications, the governing equations considered are those for viscous, incompressible, two-dimensional and axisymmetric flows. These equations are transformed from the physical domain having a variable mesh to a computational domain with a uniform mesh. The two governing partial differential equations are the vorticity and stream function equations. The method of lines is used to solve the vorticity equation, and the successive over-relaxation technique is used to solve the stream function equation. The method is applied to three laminar flow problems: the flow in ducts, curved-wall diffusers, and a driven cavity. Results obtained for different flow conditions are in good agreement with available analytical and numerical solutions. The viability and validity of the method of lines are demonstrated by its application to the Navier-Stokes equations in the physical domain having a variable mesh.
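A minimal method-of-lines sketch on a toy 1-D heat equation (an illustration only; the paper treats the 2-D vorticity/stream-function system): discretize space, then hand the resulting stiff ODE system to an implicit integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

# u_t = u_xx on (0,1) with u(0)=u(1)=0, discretized by central differences.
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

def rhs(t, u):
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2  # second-derivative stencil
    return du                                          # boundary values stay zero

u0 = np.sin(np.pi * x)
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")     # implicit, for stiffness
print(sol.y[:, -1].max())  # ~ exp(-pi**2 * 0.1) ≈ 0.373
```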
SLEEC: Semantics-Rich Libraries for Effective Exascale Computation. Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milind, Kulkarni
SLEEC (Semantics-rich Libraries for Effective Exascale Computation) was a project funded by the Department of Energy X-Stack Program, award number DE-SC0008629. The initial project period was September 2012–August 2015. The project was renewed for an additional year, expiring August 2016. Finally, the project received a no-cost extension, leading to a final expiry date of August 2017. Modern applications, especially those intended to run at exascale, are not written from scratch. Instead, they are built by stitching together various carefully-written, hand-tuned libraries. Correctly composing these libraries is difficult, but traditional compilers are unable to effectively analyze and transform across abstraction layers. Domain-specific compilers integrate semantic knowledge into compilers, allowing them to transform applications that use particular domain-specific languages or domain libraries. But they do not help when new domains are developed, or when applications span multiple domains. SLEEC aims to fix these problems. To do so, we are building generic compiler and runtime infrastructures that are semantics-aware but not domain-specific. By performing optimizations related to the semantics of a domain library, the same infrastructure can be made generic and apply across multiple domains.
Kaltenbacher, Barbara; Kaltenbacher, Manfred; Sim, Imbo
2013-01-01
We consider the second order wave equation in an unbounded domain and propose an advanced perfectly matched layer (PML) technique for its efficient and reliable simulation. In doing so, we concentrate on the time domain case and use the finite-element (FE) method for the space discretization. Our un-split-PML formulation requires four auxiliary variables within the PML region in three space dimensions. For a reduced version (rPML), we present a long time stability proof based on an energy analysis. The numerical case studies and an application example demonstrate the good performance and long time stability of our formulation for treating open domain problems. PMID:23888085
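For context, the PML idea is most compactly stated in the frequency domain as a complex coordinate stretch (a generic sketch, not the authors' specific unsplit formulation):

```latex
\partial_x \;\longrightarrow\; \frac{1}{1 + \sigma(x)/(\mathrm{i}\omega)}\,\partial_x, \qquad \sigma \ge 0 \ \text{inside the layer}, \quad \sigma = 0 \ \text{elsewhere},
```

so outgoing waves decay exponentially inside the layer without reflection at its interface; transforming such frequency-dependent coefficients back to the time domain is what introduces auxiliary variables like the four mentioned in the abstract.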
Beyond rules: The next generation of expert systems
NASA Technical Reports Server (NTRS)
Ferguson, Jay C.; Wagner, Robert E.
1987-01-01
The PARAGON Representation, Management, and Manipulation system is introduced. The concepts of knowledge representation, knowledge management, and knowledge manipulation are combined in a comprehensive system for solving real-world problems requiring high levels of expertise in a real-time environment. In most applications, the complexity of the problem and the representation used to describe the domain knowledge tend to obscure the information from which solutions are derived. This inhibits domain-knowledge acquisition and verification/validation, places severe constraints on the ability to extend and maintain a knowledge base, and makes generic problem-solving strategies difficult to develop. A unique hybrid system was developed to overcome these traditional limitations.
Planning with Continuous Resources in Stochastic Domains
NASA Technical Reports Server (NTRS)
Mausam; Benazera, Emmanuel; Brafman, Ronen; Hansen, Eric
2005-01-01
We consider the problem of optimal planning in stochastic domains with metric resource constraints. Our goal is to generate a policy whose expected sum of rewards is maximized for a given initial state. We consider a general formulation motivated by our application domain--planetary exploration--in which the choice of an action at each step may depend on the current resource levels. We adapt the forward search algorithm AO* to handle our continuous state space efficiently.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
Framework for Development of Object-Oriented Software
NASA Technical Reports Server (NTRS)
Perez-Poveda, Gus; Ciavarella, Tony; Nieten, Dan
2004-01-01
The Real-Time Control (RTC) Application Framework is a high-level software framework written in C++ that supports the rapid design and implementation of object-oriented application programs. This framework provides built-in functionality that solves common software development problems within distributed client-server, multi-threaded, and embedded programming environments. When using the RTC Framework to develop software for a specific domain, designers and implementers can focus entirely on the details of the domain-specific software rather than on creating custom solutions, utilities, and frameworks for the complexities of the programming environment. The RTC Framework was originally developed as part of a Space Shuttle Launch Processing System (LPS) replacement project called Checkout and Launch Control System (CLCS). As a result of the framework's development, CLCS software development time was reduced by 66 percent. The framework is generic enough for developing applications outside of the launch-processing system domain. Other applicable high-level domains include command and control systems and simulation/training systems.
General Tricomi-Rassias problem and oblique derivative problem for generalized Chaplygin equations
NASA Astrophysics Data System (ADS)
Wen, Guochun; Chen, Dechang; Cheng, Xiuzhen
2007-09-01
Many authors have discussed the Tricomi problem for some second order equations of mixed type, which has important applications in gas dynamics. In particular, Bers proposed the Tricomi problem for Chaplygin equations in multiply connected domains [L. Bers, Mathematical Aspects of Subsonic and Transonic Gas Dynamics, Wiley, New York, 1958]. And Rassias proposed the exterior Tricomi problem for mixed equations in a doubly connected domain and proved the uniqueness of solutions for the problem [J.M. Rassias, Lecture Notes on Mixed Type Partial Differential Equations, World Scientific, Singapore, 1990]. In the present paper, we discuss the general Tricomi-Rassias problem for generalized Chaplygin equations. This is one general oblique derivative problem that includes the exterior Tricomi problem as a special case. We first give the representation of solutions of the general Tricomi-Rassias problem, and then prove the uniqueness and existence of solutions for the problem by a new method. In this paper, we shall also discuss another general oblique derivative problem for generalized Chaplygin equations.
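For reference, the equations in question take the standard forms (the Tricomi equation is the special case K(y) = y):

```latex
K(y)\,u_{xx} + u_{yy} = 0, \qquad y\,K(y) > 0 \ \text{for } y \neq 0,
```

which is elliptic where y > 0, hyperbolic where y < 0, and degenerates on the sonic line y = 0; mixed-type boundary-value problems such as the Tricomi and Tricomi-Rassias problems prescribe data on only part of the boundary of the mixed domain.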
AMPHION: Specification-based programming for scientific subroutine libraries
NASA Technical Reports Server (NTRS)
Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Waldinger, Richard; Stickel, Mark
1994-01-01
AMPHION is a knowledge-based software engineering (KBSE) system that guides a user in developing a diagram representing a formal problem specification. It then automatically implements a solution to this specification as a program consisting of calls to subroutines from a library. The diagram provides an intuitive, domain-oriented notation for creating a specification that also facilitates reuse and modification. AMPHION's architecture is domain-independent. AMPHION is specialized to an application domain by developing a declarative domain theory. Creating a domain theory is an iterative process that currently requires the joint expertise of domain experts and experts in automated formal methods for software development.
Incompressibility without tears - How to avoid restrictions of mixed formulation
NASA Technical Reports Server (NTRS)
Zienkiewicz, O. C.; Wu, J.
1991-01-01
Several time-stepping schemes for incompressibility problems are presented which can be solved directly for steady state or iteratively through the time domain. The difficulty of mixed interpolation is avoided by using these schemes. The schemes are applicable to problems of fluid and solid mechanics.
Model-Driven Approach for Body Area Network Application Development.
Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata
2016-05-12
This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems which are posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered which is discretized using tetrahedra. The discrete domain is decomposed into subdomains and the original problem is reformulated as a set of subproblems that communicate through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem using this methodology can, in principle, fully exploit the computing equipment and also provide results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
ERIC Educational Resources Information Center
Goldberg, Benjamin; Amburn, Charles; Ragusa, Charlie; Chen, Dar-Wei
2018-01-01
The U.S. Army is interested in extending the application of intelligent tutoring systems (ITS) beyond cognitive problem spaces and into psychomotor skill domains. In this paper, we present a methodology and validation procedure for creating expert model representations in the domain of rifle marksmanship. GIFT (Generalized Intelligent Framework…
A middle man approach to knowledge acquisition in expert systems
NASA Technical Reports Server (NTRS)
Jordan, Janice A.; Lin, Min-Jin; Mayer, Richard J.; Sterle, Mark E.
1990-01-01
The Weed Control Advisor (WCA) is a robust expert system that has been successfully implemented on an IBM AT class microcomputer in CLIPS. The goal of the WCA was to demonstrate the feasibility of providing an economical, efficient, user-friendly system through which Texas rice producers could obtain expert-level knowledge regarding herbicide application for weed control. During the development phase of the WCA, an improved knowledge acquisition method which we call the Middle Man Approach (MMA) was applied to facilitate the communication process between the domain experts and the knowledge engineer. The MMA served to circumvent the problems associated with the more traditional forms of knowledge acquisition by placing the Middle Man, a semi-expert in the problem domain with some computer expertise, at the site of system development. The Middle Man was able to contribute to system development in two major ways. First, the Middle Man had experience working in rice production and could assume many of the responsibilities normally performed by the domain experts, such as explaining the background of the problem domain and determining the important relations. Second, the Middle Man was familiar with computers and worked closely with the system developers to update the rules after the domain experts reviewed the prototype, contribute to the help menus and explanation portions of the expert system, conduct the testing required to ensure that the expert system gives the expected results, answer questions in a timely way, help the knowledge engineer structure the domain knowledge into a usable form, and provide insight into the end user's profile, which helped in the development of the simple, user-friendly interface. In the end, not only were both the time expended and the costs greatly reduced by using the MMA, but the quality of the system was also improved. This paper will introduce the WCA system and then discuss traditional knowledge acquisition along with some of the problems often associated with it, the MMA methodology, and its application to the WCA development.
A prototype case-based reasoning human assistant for space crew assessment and mission management
NASA Technical Reports Server (NTRS)
Owen, Robert B.; Holland, Albert W.; Wood, Joanna
1993-01-01
We present a prototype human assistant system for space crew assessment and mission management. Our system is based on case episodes from American and Russian space missions and analog environments such as polar stations and undersea habitats. The general domain of small groups in isolated and confined environments represents a near-ideal application area for case-based reasoning (CBR) - there are few reliable rules to follow, and most domain knowledge is in the form of cases. We define the problem domain and outline a unique knowledge representation system driven by conflict and communication triggers. The prototype system is able to represent, index, and retrieve case studies of human performance. We index by social, behavioral, and environmental factors. We present the problem domain, our current implementation, our research approach for an operational system, and prototype performance and results.
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.
For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...
Scalable software architectures for decision support.
Musen, M A
1999-12-01
Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
Pervasive Sensing: Addressing the Heterogeneity Problem
NASA Astrophysics Data System (ADS)
O'Grady, Michael J.; Murdoch, Olga; Kroon, Barnard; Lillis, David; Carr, Dominic; Collier, Rem W.; O'Hare, Gregory M. P.
2013-06-01
Pervasive sensing is characterized by heterogeneity across a number of dimensions. This raises significant problems for those designing, implementing and deploying sensor networks, irrespective of application domain. Such problems include, for example, issues of data provenance and integrity, security, and privacy, amongst others. Thus engineering a network that is fit for purpose represents a significant challenge. In this paper, the issue of heterogeneity is explored from the perspective of those who seek to harness a pervasive sensing element in their applications. An initial solution is proposed based on a middleware construct.
A Runtime Performance Predictor for Selecting Tabu Tenures
NASA Technical Reports Server (NTRS)
Allen, John A.; Minton, Steven N.
1997-01-01
One of the drawbacks of parameter-based systems, such as tabu search, is the difficulty of finding the correct parameter for a particular problem. Often, rule-of-thumb advice is given which may have little or no applicability to the domain or problem instance at hand. This paper describes the application of a general technique, Runtime Performance Predictors (RPPs), which can be used to determine, in an efficient manner, the correct tabu tenure for a particular problem instance. The details of the approach and a demonstration using a variant of GSAT are presented.
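Since the abstract turns on what a tabu tenure is, a minimal generic sketch may help: the tenure is the number of iterations a recently used move stays forbidden. The Python toy below illustrates only that role; it is not the paper's RPP method, and the objective function and all names are our own invention.

```python
# Toy tabu-search sketch, only to illustrate the tenure parameter;
# this is NOT the paper's RPP method, and the objective is arbitrary.
import random

def tabu_search(n_bits, objective, tenure, max_iters=500):
    """Minimize objective over bit vectors; `tenure` = number of
    iterations a flipped bit stays tabu."""
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], objective(x)
    tabu_until = [0] * n_bits              # iteration until which bit i is tabu
    for it in range(max_iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            cost = objective(y)
            # aspiration criterion: allow a tabu move if it beats the best
            if tabu_until[i] <= it or cost < best_cost:
                candidates.append((cost, i))
        if not candidates:
            continue
        cost, i = min(candidates)
        x[i] ^= 1
        tabu_until[i] = it + tenure        # the parameter being tuned
        if cost < best_cost:
            best, best_cost = x[:], cost
    return best, best_cost

print(tabu_search(20, lambda v: abs(sum(v) - 7), tenure=5)[1])
```

Too small a tenure lets the search cycle; too large a tenure over-constrains it, which is why instance-specific selection of the tenure matters.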
The Lp Robin problem for Laplace equations in Lipschitz and (semi-)convex domains
NASA Astrophysics Data System (ADS)
Yang, Sibei; Yang, Dachun; Yuan, Wen
2018-01-01
Let n ≥ 3 and Ω be a bounded Lipschitz domain in ℝⁿ. Assume that p ∈ (2, ∞) and the function b ∈ L∞(∂Ω) is non-negative, where ∂Ω denotes the boundary of Ω. Denote by ν the outward unit normal to ∂Ω. In this article, the authors give two necessary and sufficient conditions for the unique solvability of the Robin problem for the Laplace equation Δu = 0 in Ω with boundary data ∂u/∂ν + bu = f ∈ Lp(∂Ω), respectively in terms of a weak reverse Hölder inequality with exponent p, or of the unique solvability of the Robin problem with boundary data in some weighted L2(∂Ω) space. As applications, the authors obtain the unique solvability of the Robin problem for the Laplace equation in a bounded (semi-)convex domain Ω with boundary data in (weighted) Lp(∂Ω) for any given p ∈ (1, ∞).
Jaarsveld, Saskia; Lachmann, Thomas
2017-01-01
This paper discusses the importance of three features of psychometric tests for cognition research: construct definition, problem space, and knowledge domain. Definition of constructs, e.g., intelligence or creativity, forms the theoretical basis for test construction. Problem space, being well or ill-defined, is determined by the cognitive abilities considered to belong to the constructs, e.g., convergent thinking to intelligence, divergent thinking to creativity. Knowledge domain and the possibilities it offers cognition are reflected in test results. We argue that (a) comparing results of tests with different problem spaces is more informative when cognition operates in both tests on an identical knowledge domain, and (b) intertwining of abilities related to both constructs can only be expected in tests developed to instigate such a process. Test features should guarantee that abilities can contribute to self-generated and goal-directed processes bringing forth solutions that are both new and applicable. We propose and discuss a test example that was developed to address these issues. PMID:28220098
A Domain Description Language for Data Processing
NASA Technical Reports Server (NTRS)
Golden, Keith
2003-01-01
We discuss an application of planning to data processing, a planning problem which poses unique challenges for domain description languages. We discuss these challenges and why the current PDDL standard does not meet them. We discuss DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative, object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which we are using to provide automation for an ecological forecasting application. We compare DPADL to PDDL and discuss changes that could be made to PDDL to make it more suitable for representing planning domains that involve data processing actions.
Three case studies of the GasNet model in discrete domains.
Santos, C L; de Oliveira, P P; Husbands, P; Souza, C R
2001-06-01
A new neural network model - the GasNet - has been recently reported in the literature which, in addition to the traditional electrical, point-to-point communication between units, also uses communication through a diffusible chemical modulator. Here we assess the applicability of this model in three different scenarios: the XOR problem, a food-gathering task for a simulated robot, and a docking task for a virtual spaceship. All of them represent discrete domains, in contrast with the domain in which the GasNet was originally introduced, which had an essentially continuous nature. These scenarios are well-known benchmark problems from the literature and, since they exhibit varying degrees of complexity, they impose distinct performance demands on the GasNet. The experiments were primarily intended to better understand the model, by extending the original problem domain where the GasNet was introduced. The results reported point at some difficulties with the current GasNet model.
Articulation Management for Intelligent Integration of Information
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; Clancy, Daniel (Technical Monitor)
2001-01-01
When combining data from distinct sources, there is a need to share meta-data and other knowledge about the various source domains. Due to semantic inconsistencies and heterogeneity of representations, problems arise in combining multiple domains when the domains are merged. Knowledge that is irrelevant to the task of interoperation will be included, making the result unnecessarily complex. This heterogeneity problem can be eliminated by mediating the conflicts and managing the intersections of the domains. For interoperation and intelligent access to heterogeneous information, the focus is on the intersection of the knowledge, since the intersection will define the required articulation rules. An algebra over domains has been proposed that uses articulation rules to support disciplined manipulation of domain knowledge resources. The objective of a domain algebra is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The algebra formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper presents a domain algebra and demonstrates the use of articulation rules to link declarative interfaces for Internet and enterprise applications. In particular, it discusses the articulation implementation as part of a production system capable of operating over the domain described by the IDL (interface description language) of objects registered in multiple CORBA servers.
Capture and playback synchronization in video conferencing
NASA Astrophysics Data System (ADS)
Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song
1995-03-01
Packet-switching-based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in a packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding with the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved in more advanced network architectures, as ATM has promised. This paper presents some solutions that can be useful at the end-station terminals in the massively deployed packet-switching networks of today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream that can be decoded independently. It is also responsible for maintaining lip synchronization and for concealing the effects of clock mismatch, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
NASA Astrophysics Data System (ADS)
Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou
2017-01-01
In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle for the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex frequency shifted perfectly matched layer (PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can result in spurious reflections and large errors in the FWD solutions, especially for low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained by our new FD-FWD approach and with the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in FWD for the TEM diffusion problems works well even in complex high-contrast conductivity models.
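For readers unfamiliar with the correspondence principle invoked here, the LaTeX fragment below sketches one common form of the wave-diffusion mapping. The notation is ours (an e^{-iωt} Fourier convention and an arbitrary scaling frequency ω₀); the paper's exact convention may differ.

```latex
% Sketch of the wave--diffusion correspondence behind FWD methods
% (one common convention; \omega_0 is an arbitrary scaling frequency):
\nabla^2 E = \mu_0 \sigma \, \partial_t E
\quad\longleftrightarrow\quad
\nabla^2 E' = \frac{1}{v^2} \, \partial_{t'}^2 E' ,
\qquad
v^2 = \frac{2\omega_0}{\mu_0 \sigma} ,
\qquad
i\omega = \frac{\omega'^2}{2\omega_0} .
```

The practical payoff is that the fictitious wave equation can be time-stepped explicitly with CFL-limited steps, after which results are mapped back to the diffusive domain.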
Overdetermined elliptic problems in topological disks
NASA Astrophysics Data System (ADS)
Mira, Pablo
2018-06-01
We introduce a method, based on the Poincaré-Hopf index theorem, to classify solutions to overdetermined problems for fully nonlinear elliptic equations in domains diffeomorphic to a closed disk. Applications to some well-known nonlinear elliptic PDEs are provided. Our result can be seen as the analogue of Hopf's uniqueness theorem for constant mean curvature spheres, but for the general analytic context of overdetermined elliptic problems.
Application of Modern Control Design Methodologies to a Multi-Segmented Deformable Mirror System
1991-05-23
state matrices, and the state equations are ẋ = Ax + Bu (2.3), y = Cx + Du (2.4). The only dynamics modeled are associated with the six segment phasing... relationship between the L2 and H2 spaces, the vector H2 norm can be found from the application of Parseval's Theorem to Equation 3.1, yielding... of this minimization problem can be found using Riccati equations [1]. With a slight abuse of notation, time domain functions and frequency domain
Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm
NASA Technical Reports Server (NTRS)
LeTallec, Patrick; Tidriri, Moulay D.
1996-01-01
In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.
Application of Component Scoring to a Complicated Cognitive Domain.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Yamamoto, Kentaro
This study used the Montague-Riley Test to introduce a new scoring procedure that revealed errors in cognitive processes occurring at subcomponents of an electricity problem. The test, consisting of four parts with 36 open-ended problems each, was administered to 250 high school students. A computer program, ELTEST, was written applying a…
A UML-based meta-framework for system design in public health informatics.
Orlova, Anna O; Lehmann, Harold
2002-01-01
The National Agenda for Public Health Informatics calls for standards in data and knowledge representation within public health, which requires a multi-level framework that links all aspects of public health. The literature of public health informatics and public health informatics applications was reviewed. A UML-based systems analysis was performed. Face validity of the results was evaluated by analyzing the public health domain of lead poisoning. The core class of the UML-based system of public health is the Public Health Domain, which is associated with multiple Problems, for which Actors provide Perspectives. Actors take Actions that define, generate, utilize and/or evaluate Data Sources. The life cycle of the domain is a sequence of activities attributed to its problems that spirals through multiple iterations and realizations within a domain. The proposed Public Health Informatics Meta-Framework broadens efforts in applying informatics principles to the field of public health.
ben-Avraham, D; Fokas, A S
2001-07-01
A new transform method for solving boundary value problems for linear and integrable nonlinear partial differential equations recently introduced in the literature is used here to obtain the solution of the modified Helmholtz equation q_xx(x,y) + q_yy(x,y) − 4β²q(x,y) = 0 in the triangular domain 0 ≤ x ≤ L − y ≤ L, with mixed boundary conditions. This solution is applied to the problem of diffusion-limited coalescence, A + A ⇌ A, in the segment (−L/2, L/2), with traps at the edges.
Challenging Aerospace Problems for Intelligent Systems
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje; Kanashige, John; Satyadas, A.; Clancy, Daniel (Technical Monitor)
2002-01-01
In this paper we highlight four problem domains that are well suited and challenging for intelligent system technologies. The problems are defined and an outline of a probable approach is presented. No attempt is made to define the problems as test cases. In other words, no data or set of equations that a user can code and get results are provided. The main idea behind this paper is to motivate intelligent system researchers to examine problems that will elevate intelligent system technologies and applications to a higher level.
[Application of CWT to extract characteristic monitoring parameters during spine surgery].
Chen, Penghui; Wu, Baoming; Hu, Yong
2005-10-01
It is necessary to monitor intraoperative spinal function in order to prevent spinal neurological deficits during spine surgery. This study aims to extract characteristic electrophysiological monitoring parameters during the surgical treatment of scoliosis. It also addresses the problem that monitoring parameters in the time domain are highly variable and sensitive to noise. By using the continuous wavelet transform to analyze the intraoperative cortical somatosensory evoked potential (CSEP), three new characteristic monitoring parameters in the time-frequency domain (TFD) are extracted. The results indicate that the variability of the CSEP characteristic parameters in the TFD is lower than that of the parameters in the time domain. The TFD characteristic monitoring parameters are therefore more stable and reliable than the latency and amplitude parameters in the time domain. The application of TFD monitoring parameters during spine surgery may help avoid spinal injury effectively.
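To make TFD feature extraction concrete, here is a minimal Python sketch using the PyWavelets continuous wavelet transform. The synthetic waveform, Morlet basis, scale range, and peak-energy feature are illustrative assumptions, not the parameters used in the study.

```python
# Minimal CWT feature-extraction sketch with PyWavelets (illustrative
# assumptions throughout; not the study's signals or parameters).
import numpy as np
import pywt

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1.0 / fs)
# toy CSEP-like waveform: a damped oscillation around 40 ms
sep = np.exp(-((t - 0.04) / 0.01) ** 2) * np.sin(2 * np.pi * 80 * t)

scales = np.arange(1, 64)
coef, freqs = pywt.cwt(sep, scales, 'morl', sampling_period=1.0 / fs)

power = np.abs(coef) ** 2                     # time-frequency energy map
i, j = np.unravel_index(np.argmax(power), power.shape)
print(f"peak energy at {freqs[i]:.1f} Hz, {t[j] * 1e3:.1f} ms")  # a TFD feature
```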
An Evaluation of Database Solutions to Spatial Object Association
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, V S; Kurc, T; Saltz, J
2008-06-24
Object association is a common problem encountered in many applications. Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two datasets based on their positions in a common spatial coordinate system--one of the datasets may correspond to a catalog of objects observed over time in a multi-dimensional domain; the other dataset may consist of objects observed in a snapshot of the domain at a time point. The use of database management systems to solve the object association problem provides portability across different platforms and also greater flexibility. Increasing dataset sizes in today's applications, however, have made object association a data/compute-intensive problem that requires targeted optimizations for efficient execution. In this work, we investigate how database-based crossmatch algorithms can be deployed on different database system architectures and evaluate the deployments to understand the impact of architectural choices on crossmatch performance and associated trade-offs. We investigate the execution of two crossmatch algorithms on (1) a parallel database system with active disk style processing capabilities, (2) a high-throughput network database (MySQL Cluster), and (3) shared-nothing databases with replication. We have conducted our study in the context of a large-scale astronomy application with real use-case scenarios.
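As a rough illustration of what a crossmatch computes, the following Python toy matches objects across two catalogs within a positional tolerance using a grid (spatial-hash) index. A production system would instead push this join into the DBMS with zone- or HTM-style indexing; all names and the tolerance here are illustrative.

```python
# Toy spatial crossmatch: for each object in catalog A, find objects in
# catalog B within `tol` of its position (illustrative sketch only).
from collections import defaultdict
from math import floor, hypot

def crossmatch(cat_a, cat_b, tol=1e-3):
    """cat_a, cat_b: lists of (id, x, y) in a common coordinate system."""
    cell = tol                                    # grid cell size = tolerance
    grid = defaultdict(list)                      # spatial hash of catalog B
    for oid, x, y in cat_b:
        grid[(floor(x / cell), floor(y / cell))].append((oid, x, y))
    matches = []
    for oid_a, x, y in cat_a:
        cx, cy = floor(x / cell), floor(y / cell)
        for dx in (-1, 0, 1):                     # search the 3x3 neighborhood
            for dy in (-1, 0, 1):
                for oid_b, xb, yb in grid[(cx + dx, cy + dy)]:
                    if hypot(x - xb, y - yb) <= tol:
                        matches.append((oid_a, oid_b))
    return matches

print(crossmatch([(1, 0.5, 0.5)], [(7, 0.5004, 0.5003), (8, 0.9, 0.9)]))
```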
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
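A generic form of the generalized Robin (mixed) interface coupling for two temperature fields T₁ and T₂ is sketched below. This is our paraphrase of the standard optimized-Schwarz-style condition, with weights α₁, α₂ standing in for the coefficients the CHAMP analysis optimizes; it is not the paper's exact notation.

```latex
% Generic mixed (Robin) coupling at the material interface with normal n
% (a paraphrase of the standard form; weights \alpha_1, \alpha_2 are the
% quantities chosen via the local stability analysis):
k_1 \frac{\partial T_1}{\partial n} + \alpha_1 T_1
  = k_2 \frac{\partial T_2}{\partial n} + \alpha_1 T_2 ,
\qquad
k_2 \frac{\partial T_2}{\partial n} + \alpha_2 T_2
  = k_1 \frac{\partial T_1}{\partial n} + \alpha_2 T_1 .
```

Setting the weights to extreme values recovers the classical Dirichlet-Neumann exchange, which is why the optimized weights can only improve on it.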
Numerical time-domain electromagnetics based on finite-difference and convolution
NASA Astrophysics Data System (ADS)
Lin, Yuanqu
Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy, and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high-order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results illustrate the accuracy and stability of the proposed techniques.
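For orientation, the standard first- and second-order backward-difference maps from the Laplace variable s to the Z domain, the kind of mapping the thesis describes, are sketched below. These are generic textbook forms; the thesis's exact discretization may differ.

```latex
% Backward-difference Laplace-to-Z maps with time step \Delta t
% (generic forms assumed here):
s \;\approx\; \frac{1 - z^{-1}}{\Delta t}
  \quad\text{(first order)},
\qquad
s \;\approx\; \frac{3 - 4 z^{-1} + z^{-2}}{2\,\Delta t}
  \quad\text{(second order)}.
```

Because both maps send the left half of the s-plane into the unit disk, the resulting time-marching scheme inherits the unconditional stability noted in the abstract.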
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
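As background, output-error estimation chooses the parameter vector θ to minimize a weighted mismatch between measured and model outputs. A generic maximum-likelihood form of the cost, in our notation rather than necessarily the report's, is:

```latex
% z_i: measured outputs, \hat{y}_i(\theta): model outputs at sample i,
% R: residual covariance, N: number of samples (notation assumed here).
J(\theta) = \frac{1}{2} \sum_{i=1}^{N}
  \left[ z_i - \hat{y}_i(\theta) \right]^{\mathsf{T}} R^{-1}
  \left[ z_i - \hat{y}_i(\theta) \right]
  + \frac{N}{2} \ln \det R .
```

The real-time variants in the report amount to re-minimizing a cost of this kind over a sliding window of recent data.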
Patrick, Christopher J; Venables, Noah C; Yancey, James R; Hicks, Brian M; Nelson, Lindsay D; Kramer, Mark D
2013-08-01
A crucial challenge in efforts to link psychological disorders to neural systems, with the aim of developing biologically informed conceptions of such disorders, is the problem of method variance (Campbell & Fiske, 1959). Since even measures of the same construct in differing domains correlate only moderately, it is unsurprising that large sample studies of diagnostic biomarkers yield only modest associations. To address this challenge, a construct-network approach is proposed in which psychometric operationalizations of key neurobehavioral constructs serve as anchors for identifying neural indicators of psychopathology-relevant dispositions, and as vehicles for bridging between domains of clinical problems and neurophysiology. An empirical illustration is provided for the construct of inhibition-disinhibition, which is of central relevance to problems entailing deficient impulse control. Findings demonstrate that: (1) a well-designed psychometric index of trait disinhibition effectively predicts externalizing problems of multiple types, (2) this psychometric measure of disinhibition shows reliable brain response correlates, and (3) psychometric and brain-response indicators can be combined to form a joint psychoneurometric factor that predicts effectively across clinical and physiological domains. As a methodology for bridging between clinical problems and neural systems, the construct-network approach provides a concrete means by which existing conceptions of psychological disorders can accommodate and be reshaped by neurobiological insights. PsycINFO Database Record (c) 2013 APA, all rights reserved.
A contact algorithm for shell problems via Delaunay-based meshing of the contact domain
NASA Astrophysics Data System (ADS)
Kamran, K.; Rossi, R.; Oñate, E.
2013-07-01
The simulation of contact within shells, with all of its different facets, still represents an open challenge in computational mechanics. Despite the effort spent on the development of techniques for the simulation of general contact problems, an all-seasons algorithm applicable to complex shell contact problems is yet to be developed. This work focuses on the solution of contact between thin shells using a technique derived from the particle finite element method together with a rotation-free shell triangle. The key concept is to define a discretization of the contact domain (CD) by constructing a finite element mesh of four-noded tetrahedra that describes the potential contact volume. The problem is completed by using an assumed-strain approach to define an elastic contact strain over the CD.
Image feature extraction in encrypted domain with privacy-preserving SIFT.
Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang
2012-11-01
Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant, and is capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through the security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext only attack and known plaintext attack. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications.
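As a concrete illustration of the additive homomorphism that such schemes rely on, here is a minimal Python sketch using the third-party `phe` (python-paillier) package. This is a generic Paillier demo, not the paper's PPSIFT pipeline, and the values are arbitrary.

```python
# Minimal demo of Paillier's additive homomorphism (generic sketch,
# not the paper's PPSIFT protocol). Requires the `phe` package.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

a, b = 17, 25                      # e.g. two pixel/feature values
ca = public_key.encrypt(a)
cb = public_key.encrypt(b)

# The server can add ciphertexts and scale them by plaintext constants
# without ever seeing a or b -- the core enabler for encrypted-domain
# linear filtering such as SIFT's difference-of-Gaussians step.
c_sum = ca + cb                    # encrypts a + b
c_lin = 3 * ca - cb                # encrypts 3a - b

print(private_key.decrypt(c_sum))  # 42
print(private_key.decrypt(c_lin))  # 26
```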
The Buffer Diagnostic Prototype: A fault isolation application using CLIPS
NASA Technical Reports Server (NTRS)
Porter, Ken
1994-01-01
This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, but lack of any automatic fault-isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple-strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experiential knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
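To make the minimax-algebra notion of rank concrete, the sketch below builds a rank-1 template in the max-plus setting, where matrix "multiplication" replaces sum-of-products with max-of-sums. The example matrix is our own, not one from the paper.

```python
# Toy rank-1 template decomposition in max-plus (minimax) algebra:
# T = s (x) t with T[i, j] = s[i] + t[j], so morphological dilation by T
# equals dilation by the row template t followed by the column template s.
import numpy as np

def maxplus_mul(A, B):
    """Max-plus matrix product: (A (x) B)[i, j] = max_k (A[i, k] + B[k, j])."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

s = np.array([[0.0], [2.0], [5.0]])   # 3x1 column template
t = np.array([[1.0, 3.0]])            # 1x2 row template
T = maxplus_mul(s, t)                 # rank-1 "outer product": T[i,j] = s[i] + t[j]
print(T)                              # [[1. 3.] [3. 5.] [6. 8.]]
```

Separability is what makes such decompositions useful in practice: applying s and t in sequence needs far fewer max/add operations per pixel than applying the full template T.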
NASA Astrophysics Data System (ADS)
Hu, Y.; Ji, Y.; Egbert, G. D.
2015-12-01
The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. The resistivity of the air layers is kept as low as possible, as a compromise between efficiency (longer fictitious time step) and accuracy. We have generally found a host/air resistivity contrast of 10⁻³ is sufficient. (3) A modified Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain in the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Application of the SNoW machine learning paradigm to a set of transportation imaging problems
NASA Astrophysics Data System (ADS)
Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir
2012-01-01
Machine learning methods have been successfully applied to image object classification problems where there is clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well-defined class boundaries and, due to high traffic volumes in most applications, massive roadway data are available. Though these classes tend to be well defined, the particular image noise and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications. Incorrect assignment of fines or tolls due to imaging mistakes is not acceptable in most applications. For the front-seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multiple-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.
2007-03-01
[Abbreviation-list fragment: AIS Artificial Immune System; ANN Artificial Neural Networks; API Application Programming Interface; BFS Breadth-First Search; BIS Biological...] ...the problem domain is too large for only one algorithm's application. It ranges from network-based sniffer systems, responsible for enterprise-wide coverage, ...options to network administrators in choosing detectors to employ in future intrusion detection (ID) applications. ...Our hypothesis validity is based on a set
ERIC Educational Resources Information Center
National Special Media Institutes.
The five papers which comprise this volume share a common interest in the relationship of the problems of instructional technology to the insights of the behavioral sciences. The first chapter is concerned with the applications of present knowledge and empirical methodology to the solution of particular behavioral problems, an activity that…
GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain
NASA Astrophysics Data System (ADS)
Huang, Lan; Du, Youfu; Chen, Gongyang
2015-03-01
Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although generic segmenters can be applied to process geoscience documents, they lack domain-specific knowledge and consequently their segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
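The two-step idea can be illustrated with a toy stand-in: segment with a general lexicon first, then add domain terms. The greedy maximal matching below is only a Python sketch of that idea; the actual GeoSegmenter uses a CRF model, and the mini-lexicons here are invented.

```python
# Toy sketch of two-step domain-specific segmentation: a generic pass,
# then a pass that also knows geoscience terms. Greedy longest-match is
# an illustrative stand-in for the baseline segmenter and CRF relabeler.
GENERIC = {"我们", "研究", "了"}          # toy general lexicon
GEO = {"花岗岩", "沉积岩"}                # toy geoscience terms

def longest_match(text, lexicon):
    """Greedy forward maximal matching; unknown characters fall back
    to single-character words."""
    words, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):            # longest candidate first
            if text[i:j] in lexicon or j == i + 1:   # single-char fallback
                words.append(text[i:j])
                i = j
                break
    return words

text = "我们研究了花岗岩"
print(longest_match(text, GENERIC))          # step 1: 花岗岩 shatters into chars
print(longest_match(text, GENERIC | GEO))    # step 2: domain term recovered
```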
Domain Adaptation for Pedestrian Detection Based on Prediction Consistency
Huan-ling, Tang; Zhi-yong, An
2014-01-01
Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, which is derived from the prediction consistency with the source classifications, to selectively choose the samples in the source domains showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850
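One simple way to turn prediction confidence into a sample weight is via normalized entropy; the Python sketch below illustrates that general idea only. The paper's actual transferability definition, which also involves consistency with the source labels, may differ.

```python
# Hedged sketch of entropy-based sample weighting for domain adaptation:
# confident (low-entropy) predictions get weights near 1, ambiguous ones
# near 0. Illustrative only; not the paper's exact transferability measure.
import numpy as np

def transferability(probs):
    """probs: (n, k) predicted class probabilities for source samples.
    Returns weights in [0, 1]: 1 - normalized prediction entropy."""
    eps = 1e-12
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    return 1.0 - ent / np.log(probs.shape[1])     # divide by max entropy

probs = np.array([[0.95, 0.05],     # confident sample -> higher weight
                  [0.55, 0.45]])    # ambiguous sample -> near-zero weight
print(transferability(probs))
```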
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Due to their coupling properties, magneto-electro-elastic materials have a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly-singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, CSP (constraint satisfaction problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs which are composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, which has been developed on the basis of the dynamic backtracking algorithm. This method allows previous solutions and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
Wavelet-domain de-noising technique for THz pulsed spectroscopy
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.
2014-09-01
De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in the TPS system data prevents correct reconstruction of the sample's spectral dielectric properties and study of the sample's internal structure. There are certain regions in the TPS signal's Fourier spectrum where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising might potentially expand the range of the spectrometer's spectral sensitivity and reduce the waveform registration time, which is an essential problem for biomedical applications of TPS. In this work, we show how recent progress in wavelet-domain signal processing can be used for de-noising TPS waveforms. We demonstrate the ability to perform effective de-noising of TPS data using the Fast Wavelet Transform (FWT) algorithm. The results of selecting the optimal wavelet basis and the wavelet-domain thresholding technique are reported. The developed technique is applied to the reconstruction of the spectral characteristics of in vivo healthy and diseased skin samples at THz frequencies.
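For a concrete picture of FWT-based de-noising, here is a minimal Python sketch using PyWavelets. The 'db4' basis, decomposition level, and universal threshold are common generic choices, not necessarily those selected in the paper, and the waveform is synthetic.

```python
# Minimal wavelet-thresholding de-noising sketch with PyWavelets
# (generic basis/threshold choices; toy waveform, not TPS data).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(-5, 5, 1024)
clean = np.exp(-t**2) * np.cos(12 * t)        # toy THz-pulse-like waveform
noisy = clean + 0.05 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)            # fast wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))           # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')

# residual noise drops noticeably after soft thresholding
print(np.std(noisy - clean), np.std(denoised[:clean.size] - clean))
```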
Salmon, P; Williamson, A; Lenné, M; Mitsopoulos-Rubens, E; Rudin-Brown, C M
2010-08-01
Safety-compromising accidents occur regularly in the led outdoor activity domain. Formal accident analysis is an accepted means of understanding such events and improving safety. Despite this, there remains no universally accepted framework for collecting and analysing accident data in the led outdoor activity domain. This article presents an application of Rasmussen's risk management framework to the analysis of the Lyme Bay sea canoeing incident. This involved the development of an Accimap, the outputs of which were used to evaluate seven predictions made by the framework. The Accimap output was also compared to an analysis using an existing model from the led outdoor activity domain. In conclusion, the Accimap output was found to be more comprehensive and supported all seven of the risk management framework's predictions, suggesting that it shows promise as a theoretically underpinned approach for analysing, and learning from, accidents in the led outdoor activity domain. STATEMENT OF RELEVANCE: Accidents represent a significant problem within the led outdoor activity domain. This article presents an evaluation of a risk management framework that can be used to understand such accidents and to inform the development of accident countermeasures and mitigation strategies for the led outdoor activity domain.
Sim, Jaehyun; Sim, Jun; Park, Eunsung; Lee, Julian
2015-06-01
Many proteins undergo large-scale motions where relatively rigid domains move against each other. The identification of rigid domains, as well as of the hinge residues important for their relative movements, is important for various applications including flexible docking simulations. In this work, we develop a method for protein rigid-domain identification based on an exhaustive enumeration of maximal rigid domains, the rigid domains not fully contained within other domains. The computation is performed by mapping the problem to that of finding maximal cliques in a graph. A minimal set of rigid domains is then selected, covering most of the protein with minimal overlap. In contrast to the results of existing methods that partition a protein into non-overlapping domains using approximate algorithms, the rigid domains obtained from exact enumeration naturally contain overlapping regions, which correspond to the hinges of the inter-domain bending motion. The performance of the algorithm is demonstrated on several proteins. © 2015 Wiley Periodicals, Inc.
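The clique mapping can be sketched with off-the-shelf graph tools (a hedged illustration: the toy graph below stands in for the paper's construction, in which vertices and edges would come from rigidity constraints between residues):

import networkx as nx

G = nx.Graph()   # assumed toy 'rigidity' graph over residues 1..5
G.add_edges_from([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)])

maximal = list(nx.find_cliques(G))     # exhaustive maximal-clique enumeration
maximal.sort(key=len, reverse=True)

covered, selected = set(), []          # greedy cover with minimal overlap
for clique in maximal:
    if len(set(clique) - covered) > 1: # clique adds enough new residues
        selected.append(clique)
        covered |= set(clique)
print(selected)   # residues shared by two selected cliques are hinge candidates

Here the two selected cliques share residue 3; such overlap regions are what the abstract identifies with hinges of the inter-domain motion.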
Computational Electronics and Electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeFord, J.F.
The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.
Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment
Prevost, Luanna B.; Lemons, Paula P.
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021
Morgan, R M
2017-11-01
This paper builds on the FoRTE conceptual model presented in part I to address the forms of knowledge that are integral to the four components of the model. Articulating the different forms of knowledge within effective forensic reconstructions is valuable. It enables a nuanced approach to the development and use of evidence bases to underpin decision-making at every stage of a forensic reconstruction by enabling transparency in the reporting of inferences. It also enables appropriate methods to be developed to ensure quality and validity. It is recognised that the domains of practice, research, and policy/law intersect to form the nexus where forensic science is situated. Each domain has a distinctive infrastructure that influences the production and application of different forms of knowledge in forensic science. The channels that can enable the interaction between these domains, enhance the impact of research in theory and practice, increase access to research findings, and support quality are presented. The particular strengths within the different domains to deliver problem solving forensic reconstructions are thereby identified and articulated. It is argued that a conceptual understanding of forensic reconstruction that draws on the full range of both explicit and tacit forms of knowledge, and incorporates the strengths of the different domains pertinent to forensic science, offers a pathway to harness the full value of trace evidence for context sensitive, problem-solving forensic applications. Copyright © 2017 The Author. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Slabon, Wayne A.; Richards, Randy L.; Dennen, Vanessa P.
2014-01-01
In this paper, we introduce restorying, a pedagogical approach based on social constructivism that employs successive iterations of rewriting and discussing personal, student-generated, domain-relevant stories to promote conceptual application, critical thinking, and ill-structured problem solving skills. Using a naturalistic, qualitative case…
Learning Qualitative Differential Equation models: a survey of algorithms and applications.
Pang, Wei; Coghill, George M
2010-03-01
Over the last two decades, qualitative reasoning (QR) has become an important domain in Artificial Intelligence. QDE (Qualitative Differential Equation) model learning (QML), as a branch of QR, has also received an increasing amount of attention; many systems have been proposed to solve various significant problems in this field. QML has been applied to a wide range of fields, including physics, biology and medical science. In this paper, we first identify the scope of this review by distinguishing QML from other model learning systems, and then review all the noteworthy QML systems within this scope. The applications of QML in several application domains are also introduced briefly. Finally, the future directions of QML are explored from different perspectives.
Learning Qualitative Differential Equation models: a survey of algorithms and applications
PANG, WEI; COGHILL, GEORGE M.
2013-01-01
Over the last two decades, qualitative reasoning (QR) has become an important domain in Artificial Intelligence. QDE (Qualitative Differential Equation) model learning (QML), as a branch of QR, has also received an increasing amount of attention; many systems have been proposed to solve various significant problems in this field. QML has been applied to a wide range of fields, including physics, biology and medical science. In this paper, we first identify the scope of this review by distinguishing QML from other model learning systems, and then review all the noteworthy QML systems within this scope. The applications of QML in several application domains are also introduced briefly. Finally, the future directions of QML are explored from different perspectives. PMID:23704803
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
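A one-dimensional sketch of the coupling idea (hedged: this is a generic explicit-Robin partitioned scheme with an arbitrary weight p, not the CHAMP discretization or its optimized parameters):

import numpy as np

N, dx, dt = 50, 1.0 / 50, 1e-3
k1, k2 = 1.0, 5.0            # different conductivities (unit heat capacities)
p = 10.0                     # Robin weight; illustrative, not the optimized value
x1 = np.linspace(0.0, 1.0, N + 1)
u1 = np.sin(np.pi * x1)      # arbitrary initial data on domain 1, x in [0, 1]
u2 = np.zeros(N + 1)         # domain 2 occupies [1, 2], starts cold

def backward_euler_matrix(k, robin_side):
    # Implicit interior rows; one boundary row overwritten with the Robin BC.
    A = np.eye(N + 1)
    r = dt * k / dx**2
    for i in range(1, N):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    if robin_side == 'right':    # domain 1: Dirichlet at x=0, Robin at x=1
        A[N, :] = 0.0; A[N, N - 1] = -k / dx; A[N, N] = k / dx + p
    else:                        # domain 2: Robin at x=1, Dirichlet at x=2
        A[0, :] = 0.0; A[0, 0] = k / dx + p; A[0, 1] = -k / dx
    return A

A1 = backward_euler_matrix(k1, 'right')
A2 = backward_euler_matrix(k2, 'left')

for step in range(200):
    # Explicit coupling: Robin data taken from the neighbor's previous state.
    g1 = k2 * (u2[1] - u2[0]) / dx + p * u2[0]        #  k2*u2_x + p*u2
    g2 = -k1 * (u1[N] - u1[N - 1]) / dx + p * u1[N]   # -k1*u1_x + p*u1
    b1 = u1.copy(); b1[0] = 0.0; b1[N] = g1
    b2 = u2.copy(); b2[0] = g2; b2[N] = 0.0
    u1, u2 = np.linalg.solve(A1, b1), np.linalg.solve(A2, b2)

print(u1[N], u2[0])   # interface temperatures should nearly agree

Adding the two Robin relations recovers flux continuity and subtracting them recovers temperature continuity, which is why the pair closes the coupled problem; whether the lagged (explicit) exchange is stable without sub-iterations depends on p, the material contrast, and dt, which is precisely what the paper's local stability analysis optimizes.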
A stable and accurate partitioned algorithm for conjugate heat transfer
Meng, F.; Banks, J. W.; Henshaw, W. D.; ...
2017-04-25
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
High performance techniques for space mission scheduling
NASA Technical Reports Server (NTRS)
Smith, Stephen F.
1994-01-01
In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.
A Unified Framework for Creating Domain Dependent Polarity Lexicons from User Generated Reviews
Asghar, Muhammad Zubair; Khan, Aurangzeb; Ahmad, Shakeel; Khan, Imran Ali; Kundi, Fazal Masud
2015-01-01
The explosive growth of Web-based user-generated reviews has resulted in the emergence of Opinion Mining (OM) applications for analyzing users' opinions toward products, services, and policies. Polarity lexicons often play a pivotal role in OM, indicating the positivity or negativity of a term along with a numeric score. However, the commonly available domain-independent lexicons are not an optimal choice for all of the domains within OM applications, because the polarity of a term changes from one domain to another and such lexicons do not contain the correct polarity of a term for every domain. In this work, we focus on the problem of adapting a domain-dependent polarity lexicon from a set of labeled user reviews and a domain-independent lexicon, and propose a unified learning framework based on information-theoretic concepts that assigns terms correct polarity (+ive, -ive) scores. Benchmarking on three datasets (car, hotel, and drug reviews) shows that our approach improves the performance of polarity classification by achieving higher accuracy. Moreover, using the derived domain-dependent lexicon changed the polarity of terms, and the experimental results show that our approach is more effective than the baseline methods. PMID:26466101
A review of hybrid implicit explicit finite difference time domain method
NASA Astrophysics Data System (ADS)
Chen, Juan
2018-06-01
The finite-difference time-domain (FDTD) method has been extensively used to simulate a wide variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain, so the FDTD method is inefficient for electromagnetic problems with very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit-explicit difference in the direction with fine structures to remove the constraint of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method and is extremely useful for problems with fine structures in one direction. In this paper, the basic formulations, time stability condition, and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary, and periodic boundary, are described, and then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.
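For concreteness, the 3-D Yee-FDTD CFL bound the abstract refers to, and the way a single fine direction dominates it (a schematic sketch; the exact HIE-FDTD stability bound should be taken from the paper):

import numpy as np

c = 299792458.0                   # speed of light in vacuum, m/s
dx, dy, dz = 1e-3, 1e-3, 1e-6     # one very fine direction, e.g. a thin film
dt_fdtd = 1.0 / (c * np.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))  # dominated by dz
dt_hie = 1.0 / (c * np.sqrt(1/dx**2 + 1/dy**2))  # fine direction handled implicitly
print(f"FDTD dt <= {dt_fdtd:.3e} s, HIE-FDTD dt <= {dt_hie:.3e} s (schematic)")

With dz three orders of magnitude smaller than dx and dy, the explicit time step shrinks by roughly the same factor; removing the fine direction from the bound is exactly the inefficiency the hybrid implicit treatment addresses.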
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to mitigate the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
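A compact sketch of this inversion loop, with SciPy's SQP implementation (SLSQP) standing in for the authors' solver and a linear surrogate replacing the TD-RTE forward model; the GGMRF prior is replaced by a simple quadratic penalty, so all names and values here are illustrative:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_params, n_meas = 10, 40
J = rng.standard_normal((n_meas, n_params))       # surrogate forward operator
mu_true = np.abs(rng.standard_normal(n_params))   # 'optical parameters'
d_obs = J @ mu_true + 0.01 * rng.standard_normal(n_meas)
lam = 1e-2                                        # regularization weight

def objective(mu):
    r = J @ mu - d_obs
    return 0.5 * r @ r + 0.5 * lam * mu @ mu

def gradient(mu):        # in the real problem this comes from an adjoint solve
    return J.T @ (J @ mu - d_obs) + lam * mu

res = minimize(objective, x0=np.ones(n_params), jac=gradient,
               method='SLSQP', bounds=[(0.0, None)] * n_params)
print(res.x)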
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective schemes for interprocessor communication. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
A framework for discrete stochastic simulation on 3D moving boundary domains
Drawert, Brian; Hellander, Stefan; Trogdon, Michael; ...
2016-11-14
We have developed a method for modeling spatial stochastic biochemical reactions in complex, three-dimensional, and time-dependent domains using the reaction-diffusion master equation formalism. In particular, we look to address the fully coupled problems that arise in systems biology where the shape and mechanical properties of a cell are determined by the state of the biochemistry and vice versa. To validate our method and characterize the error involved, we compare our results for a carefully constructed test problem to those of a microscale implementation. Finally, we demonstrate the effectiveness of our method by simulating a model of polarization and shmoo formation during the mating of yeast. The method is generally applicable to problems in systems biology where biochemistry and mechanics are coupled, and spatial stochastic effects are critical.
Jiang, Feng; Han, Ji-zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. PMID:29623088
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
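The final regression step is classical locally weighted linear regression; a minimal one-dimensional sketch (data and bandwidth tau are illustrative):

import numpy as np

def lwlr_predict(x_query, X, y, tau=0.5):
    # Weights fall off with distance to the query point; no global parameters.
    w = np.exp(-(X - x_query) ** 2 / (2 * tau ** 2))
    Xb = np.column_stack([np.ones_like(X), X])          # intercept + feature
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return theta[0] + theta[1] * x_query

X = np.linspace(0, 6, 60)
y = np.sin(X) + 0.1 * np.random.default_rng(2).standard_normal(60)
print(lwlr_predict(3.0, X, y))   # close to sin(3.0)

Because the fit is recomputed locally for every query, LWLR imposes no fixed parametric form, which is the property the abstract invokes against underfitting and overfitting.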
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.
NASA Astrophysics Data System (ADS)
Chen, Zibin; Hong, Liang; Wang, Feifei; An, Xianghai; Wang, Xiaolin; Ringer, Simon; Chen, Long-Qing; Luo, Haosu; Liao, Xiaozhou
2017-12-01
Ferroelectric materials have been extensively explored for applications in high-density nonvolatile memory devices because of their ferroelectric-ferroelastic domain-switching behavior under electric loading or mechanical stress. However, the existence of ferroelectric and ferroelastic backswitching can cause significant data loss, which affects the reliability of data storage. Here, we apply in situ transmission electron microscopy and phase-field modeling to explore the unique ferroelastic domain-switching kinetics, and its origin, in relaxor-based Pb(Mg1/3Nb2/3)O3-33%PbTiO3 single-crystal pillars under electrical and mechanical stimulation. Results showed that the electric-mechanical hysteresis loop shifted for relaxor-based single-crystal pillars because of the low energy levels of domains in the material and the constraint on the pillars, resulting in various mechanically reversible and irreversible domain-switching states. This phenomenon can potentially be used for advanced bit writing and reading in nonvolatile memories, which effectively overcomes the backswitching problem and broadens the types of ferroelectric materials suitable for nonvolatile memory applications.
Daxini, S D; Prajapati, J M
2014-01-01
Meshfree methods are viewed as next-generation computational techniques. Given the evident limitations of conventional grid-based methods, such as FEM, in dealing with problems of fracture mechanics, large deformation, and simulation of manufacturing processes, meshfree methods have gained much attention from researchers. A number of meshfree methods have been proposed to date for analyzing complex problems in various fields of engineering. The present work reviews recent developments and some earlier applications of well-known meshfree methods, such as EFG and MLPG, to various structural mechanics and fracture mechanics applications: bending, buckling, free vibration analysis, sensitivity analysis and topology optimization, single- and mixed-mode crack problems, fatigue crack growth, and dynamic crack analysis, as well as some typical applications such as vibration of cracked structures, thermoelastic crack problems, and failure transition in impact problems. Due to the complex nature of meshfree shape functions and the evaluation of domain integrals, meshfree methods are computationally expensive compared to conventional mesh-based methods. Improved versions of the original meshfree methods and other techniques suggested by researchers to improve the computational efficiency of meshfree methods are also reviewed here.
Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; ...
2015-01-26
We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, are described. The details of verification and validation experiments, and parallel performance analyses in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.
Automated monitoring of medical protocols: a secure and distributed architecture.
Alsinet, T; Ansótegui, C; Béjar, R; Fernández, C; Manyà, F
2003-03-01
The control of the correct application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully, or semi, autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, so specialized domain agents are independent of negotiation processes, and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the process of exchanging information between agents.
An Application of the Difference Potentials Method to Solving External Problems in CFD
NASA Technical Reports Server (NTRS)
Ryaben 'Kii, Victor S.; Tsynkov, Semyon V.
1997-01-01
Numerical solution of infinite-domain boundary-value problems requires special techniques that make the problem amenable to treatment on a computer. Indeed, the problem must be discretized in such a way that the computer operates with only a finite amount of information. Therefore, the original infinite-domain formulation must be altered and/or augmented so that, on one hand, the solution is not changed (or changed only slightly) and, on the other hand, the finite discrete formulation becomes tractable. One widely used approach to constructing such discretizations consists of truncating the unbounded original domain and then setting artificial boundary conditions (ABC's) at the newly formed external boundary. The role of the ABC's is to close the truncated problem and at the same time to ensure that the solution found inside the finite computational domain is maximally close to (in the ideal case, exactly the same as) the corresponding fragment of the original infinite-domain solution. Let us emphasize that the proper treatment of artificial boundaries may have a profound impact on the overall quality and performance of numerical algorithms. The latter statement is corroborated by numerous computational experiments and especially concerns the area of CFD, in which external problems present a wide class of practically important formulations. In this paper, we review some work that has been done over recent years on constructing highly accurate nonlocal ABC's for the calculation of compressible external flows. The approach is based on implementation of the generalized potentials and pseudodifferential boundary projection operators analogous to those first proposed by Calderon. The difference potentials method (DPM) of Ryaben'kii is used for the effective computation of the generalized potentials and projections. The resulting ABC's clearly outperform existing methods from the standpoints of accuracy and robustness, in many cases noticeably speed up multigrid convergence, and at the same time are quite comparable to other methods from the standpoints of geometric universality and simplicity of implementation.
Machine learning-based coreference resolution of concepts in clinical documents
Ware, Henry; Mullett, Charles J; El-Rawas, Oussama
2012-01-01
Objective: Coreference resolution of concepts, although a very active area in the natural language processing community, has not yet been widely applied to clinical documents. Accordingly, the 2011 i2b2 competition focusing on this area is a timely and useful challenge. The objective of this research was to collate coreferent chains of concepts from a corpus of clinical documents. These concepts are in the categories of person, problems, treatments, and tests. Design: A machine learning approach based on graphical models was employed to cluster coreferent concepts. Features selected were divided into domain-independent and domain-specific sets. Training was done with the i2b2-provided training set of 489 documents with 6949 chains. Testing was done on 322 documents. Results: The learning engine, using the unweighted average of three different measurement schemes, resulted in an F measure of 0.8423 where no domain-specific features were included and 0.8483 where the feature set included both domain-independent and domain-specific features. Conclusion: Our machine learning approach is a promising solution for recognizing coreferent concepts, which in turn is useful for practical applications such as the assembly of problem and medication lists from clinical documents. PMID:22582205
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
[Report documentation page; only fragments of the abstract survive extraction.] The objective of this project was to develop a general CFD framework for multifidelity simulations, targeting multiscale problems as well as resilience. Subject terms: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm.
NASA Astrophysics Data System (ADS)
Diamantopoulos, Theodore; Rowe, Kristopher; Diamessis, Peter
2017-11-01
The Collocation Penalty Method (CPM) solves a PDE on the interior of a domain, while weakly enforcing boundary conditions at domain edges via penalty terms, and naturally lends itself to high-order and multi-domain discretization. Such spectral multi-domain penalty methods (SMPM) have been used to solve the Navier-Stokes equations. Bounds for penalty coefficients are typically derived using the energy method to guarantee stability for time-dependent problems. The choice of collocation points and penalty parameter can greatly affect the conditioning and accuracy of a solution. Effort has been made in recent years to relate various high-order methods on multiple elements or domains under the umbrella of the Correction Procedure via Reconstruction (CPR). Most applications of CPR have focused on solving the compressible Navier-Stokes equations using explicit time-stepping procedures. A particularly important aspect which is still missing in the context of the SMPM is a study of the Helmholtz equation arising in many popular time-splitting schemes for the incompressible Navier-Stokes equations. Stability and convergence results for the SMPM for the Helmholtz equation will be presented. Emphasis will be placed on the efficiency and accuracy of high-order methods.
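A minimal collocation-penalty example in the spirit of the CPM described above (hedged: 1-D advection on a Chebyshev collocation grid with a textbook penalty scaling, not the SMPM Navier-Stokes setting):

import numpy as np

def cheb(n):
    # Chebyshev points and differentiation matrix (Trefethen-style).
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = np.tile(x, (n + 1, 1)).T - np.tile(x, (n + 1, 1))
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x

n = 32
D, x = cheb(n)                             # x runs from +1 down to -1
u = np.exp(-20 * (x - 0.3) ** 2)           # initial profile for u_t + u_x = 0
tau = float(n ** 2)                        # penalty strength (typical scaling)
dt = 1.0 / n ** 2                          # small explicit step

def rhs(u):
    r = -D @ u                             # PDE collocated everywhere
    r[-1] -= tau * (u[-1] - 0.0)           # weak inflow condition at x = -1
    return r

for _ in range(500):                       # classical RK4 in time
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

The boundary value is never imposed strongly; the penalty term drives u(-1) toward the inflow data, and the admissible range of tau is exactly the kind of bound the energy-method analysis mentioned above provides.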
DICOMweb™: Background and Application of the Web Standard for Medical Imaging.
Genereaux, Brad W; Dennison, Donald K; Ho, Kinson; Horn, Robert; Silver, Elliot Lewis; O'Donnell, Kevin; Kahn, Charles E
2018-05-10
This paper describes why and how DICOM, the standard that has been the basis for medical imaging interoperability around the world for several decades, has been extended into a full web technology-based standard, DICOMweb. At the turn of the century, healthcare embraced information technology, which created new problems and new opportunities for the medical imaging industry; at the same time, web technologies matured and began serving other domains well. This paper describes DICOMweb, how it extended the DICOM standard, and how DICOMweb can be applied to problems facing healthcare applications to address workflow and the changing healthcare climate.
Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment.
Prevost, Luanna B; Lemons, Paula P
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. © 2016 L. B. Prevost and P. P. Lemons. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Conditional random fields for pattern recognition applied to structured data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Skurikhin, Alexei
In order to predict labels from an output domain, Y, pattern recognition gathers measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
Conditional random fields for pattern recognition applied to structured data
Burr, Tom; Skurikhin, Alexei
2015-07-14
In order to predict labels from an output domain, Y, pattern recognition gathers measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
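As a concrete instance, the widely used linear-chain CRF (a standard textbook form, not necessarily the exact model used in the paper) defines

P(Y \mid X = x) = \frac{1}{Z(x)} \exp\Big( \sum_{t} \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, x, t) \Big),
\qquad
Z(x) = \sum_{y'} \exp\Big( \sum_{t} \sum_{k} \lambda_k\, f_k(y'_{t-1}, y'_t, x, t) \Big),

so dependence among neighboring labels (here, nearby pixel patches) enters through the feature functions f_k while no model of P(X) is ever required.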
Parallelization of PANDA discrete ordinates code using spatial decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.
2006-07-01
We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems, a 3D Cartesian domain topology is created and the parallel method is based on a diagonal-plane ordered sweep algorithm over the domains. The parallel efficiency of the method is improved by pipelining directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)
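The diagonal-plane ordering is easy to illustrate (an assumed sketch, not the PANDA source): for a given octant, subdomains on the same plane i + j + k = const have no mutual sweep dependencies and can be processed concurrently.

from itertools import product

nx, ny, nz = 3, 3, 2                       # Cartesian grid of subdomains
planes = {}
for i, j, k in product(range(nx), range(ny), range(nz)):
    planes.setdefault(i + j + k, []).append((i, j, k))

for d in sorted(planes):                   # sweep for the (+x, +y, +z) octant
    print(f"plane {d}: process concurrently -> {planes[d]}")

Pipelining further overlaps successive directions and octants so that processors assigned to early planes stay busy while later planes are still being swept.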
A conceptual holding model for veterinary applications.
Ferrè, Nicola; Kuhn, Werner; Rumor, Massimo; Marangon, Stefano
2014-05-01
Spatial references are required when geographical information systems (GIS) are used for the collection, storage and management of data. In the veterinary domain, the spatial component of a holding (of animals) is usually defined by coordinates, and no other relevant information needs to be interpreted or used for manipulation of the data in the GIS environment provided. Users trying to integrate or reuse spatial data organised in such a way frequently face the problem of data incompatibility and inconsistency. The root of the problem lies in differences with respect to syntax as well as variations in the semantic, spatial and temporal representations of the geographic features. To overcome these problems and to facilitate the inter-operability of different GIS, spatial data must be defined according to a "schema" that includes the definition, acquisition, analysis, access, presentation and transfer of such data between different users and systems. We propose an application "schema" of holdings for GIS applications in the veterinary domain according to the European directive framework (directive 2007/2/EC--INSPIRE). The conceptual model put forward has been developed at two specific levels to produce the essential and the abstract model, respectively. The former establishes the conceptual linkage of the system design to the real world, while the latter describes how the system or software works. The result is an application "schema" that formalises and unifies the information-theoretic foundations of how to spatially represent a holding in order to ensure straightforward information-sharing within the veterinary community.
Proceedings of the Workshop on Change of Representation and Problem Reformulation
NASA Technical Reports Server (NTRS)
Lowry, Michael R.
1992-01-01
The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop was focused on analytic or knowledge-based approaches, as opposed to statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is a potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains, expanding greatly beyond the workshop's origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.
A Survey on Anomaly Based Host Intrusion Detection System
NASA Astrophysics Data System (ADS)
Jose, Shijoe; Malathi, D.; Reddy, Bharath; Jayaseeli, Dorathi
2018-04-01
An intrusion detection system (IDS) is hardware, software, or a combination of the two that monitors network or system activities for signs of malicious behavior. In computer security, designing a robust intrusion detection system is one of the most fundamental and important problems. The primary function of the system is to detect intrusions and raise alerts in a timely manner; when the IDS detects an intrusion, it sends an alert message to the system administrator. Anomaly detection is an important problem that has been researched within diverse research areas and application domains. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. Each existing anomaly detection technique has relative strengths and weaknesses. The current state of experimental practice in the field of anomaly-based intrusion detection is reviewed, and recent studies are surveyed. This survey provides a study of existing anomaly detection techniques and of how the techniques used in one area can be applied in another application domain.
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1990-01-01
The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on applying an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Once the problem specification has been defined, an automatic code generator is used to write the simulation code. Two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
The unified acoustic and aerodynamic prediction theory of advanced propellers in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.
1984-01-01
This paper presents numerical results for the noise of an advanced supersonic propeller based on a formulation published last year. This formulation was derived to overcome some of the practical numerical difficulties associated with other acoustic formulations. The approach is based on the Ffowcs Williams-Hawkings equation, and time-domain analysis is used. To illustrate the method of solution, a three-dimensional model problem based on the Laplace equation is solved. A brief sketch of the derivation of the acoustic formula is then given. Another model problem is used to verify the validity of the acoustic formulation. A recent singular integral equation for aerodynamic applications, derived from the acoustic formula, is also presented here.
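For reference, the Ffowcs Williams-Hawkings equation underlying the formulation can be written in one common form (notation assumed here: f = 0 defines the moving surface, v_n its normal velocity, l_i the local force per unit area on the fluid, T_ij the Lighthill stress tensor):

\left( \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2 \right) p'(\mathbf{x}, t)
= \frac{\partial}{\partial t} \big[ \rho_0 v_n\, \delta(f) \big]
- \frac{\partial}{\partial x_i} \big[ l_i\, \delta(f) \big]
+ \frac{\partial^2}{\partial x_i \partial x_j} \big[ T_{ij}\, H(f) \big],

with the three right-hand terms giving the thickness, loading, and quadrupole sources, respectively.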
NASA Technical Reports Server (NTRS)
Sharma, Naveen
1992-01-01
In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
Trans-dimensional Bayesian inversion of airborne electromagnetic data for 2D conductivity profiles
NASA Astrophysics Data System (ADS)
Hawkins, Rhys; Brodie, Ross C.; Sambridge, Malcolm
2018-02-01
This paper presents the application of a novel trans-dimensional sampling approach to a time domain airborne electromagnetic (AEM) inverse problem to solve for plausible conductivities of the subsurface. Geophysical inverse field problems, such as time domain AEM, are well known to have a large degree of non-uniqueness. Common least-squares optimisation approaches fail to take this into account and provide a single solution with linearised estimates of uncertainty that can result in overly optimistic appraisal of the conductivity of the subsurface. In this new non-linear approach, the spatial complexity of a 2D profile is controlled directly by the data. By examining an ensemble of proposed conductivity profiles it accommodates non-uniqueness and provides more robust estimates of uncertainties.
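Trans-dimensional samplers of this kind are typically built on reversible-jump MCMC; in Green's formulation (stated here as background, since the abstract does not spell out the sampler), a proposed move from model m to m' given data d is accepted with probability

\alpha(m \to m') = \min\left\{ 1,\; \frac{p(d \mid m')\, p(m')\, q(m \mid m')}{p(d \mid m)\, p(m)\, q(m' \mid m)}\, \lvert J \rvert \right\},

where q is the dimension-changing proposal and |J| the Jacobian of the mapping between parameter spaces; it is this ratio that lets the data itself set the spatial complexity of the recovered conductivity profile.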
Variational formulation of hybrid problems for fully 3-D transonic flow with shocks in rotor
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Based on previous research, the unified variable-domain variational theory of hybrid problems for rotor flow is extended to fully 3-D transonic rotor flow with shocks, unifying and generalizing the direct and inverse problems. Three families of variational principles (VPs) were established. All unknown boundaries and flow discontinuities (such as shocks and free trailing vortex sheets) are successfully handled via functional variations with variable domain, converting almost all boundary and interface conditions, including the Rankine-Hugoniot shock relations, into natural ones. This theory provides a series of novel ways for blade design or modification and a rigorous theoretical basis for finite element applications, and also constitutes an important part of the optimal design theory of rotor bladings. Numerical solutions for subsonic flow by finite elements with self-adapting nodes, given in the references, show good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square-root Fourier multiplier approximations of Dirichlet-to-Neumann operators. While the multitrace/singletrace formulations, as well as the DDM that use classical Robin transmission conditions, are not particularly well suited for Krylov subspace iterative solution of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for the operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to such optimization problems involving integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we explore the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
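A bare-bones genetic algorithm over integer resource levels, sketching the search strategy described above (the quadratic cost below is an illustrative stand-in for a discrete event simulation run):

import numpy as np

rng = np.random.default_rng(0)
n_res, lo, hi, pop_size = 5, 1, 20, 30
target = np.array([3, 7, 2, 12, 5])          # assumed optimal resource levels

def cost(levels):                            # surrogate for one simulation run
    return float(np.sum((levels - target) ** 2))

pop = rng.integers(lo, hi + 1, size=(pop_size, n_res))
for gen in range(100):
    order = np.argsort([cost(ind) for ind in pop])
    parents = pop[order[: pop_size // 2]]    # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_res)         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        if rng.random() < 0.3:               # integer mutation
            child[rng.integers(n_res)] = rng.integers(lo, hi + 1)
        children.append(child)
    pop = np.vstack([parents] + children)

best = min(pop, key=cost)
print(best, cost(best))

No continuity or gradient information is used anywhere, which is what makes the approach viable when the objective is a stochastic simulation.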
NASA Astrophysics Data System (ADS)
Liu, J.; Lan, T.; Qin, H.
2017-10-01
Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much smaller than the proportion of correct data for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When machine learning algorithms are used to classify diagnostic data based on a class-imbalanced training set, most classifiers are biased towards the majority class and show very poor classification rates on the minority class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balancing effect of the Time-Domain Global Similarity (TDGS) method on the training set structure is investigated in this paper. Meanwhile, the impact of the improved training set structure on the data cleaning performance of the TDGS method is demonstrated with an application example in the EAST POlarimetry-INTerferometry (POINT) system.
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local-minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation of the phase, prevents its direct application. To avoid the phase wrapping problem, we use the exponential phase combined with a damping method, which gives an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared with numerical examples, which indicate that, when low-frequency information is absent from the seismic data, our method is an effective and efficient approach for constructing an initial model for full waveform inversion.
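The wrap-free representation is easy to see with the analytic signal (an illustrative damped trace; SciPy's Hilbert transform returns the analytic signal directly):

import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1000)
trace = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)   # toy seismic trace

analytic = hilbert(trace)                   # trace + i * H(trace)
envelope = np.abs(analytic)
exp_phase = analytic / (envelope + 1e-12)   # exp(i*phi): no branch cuts
phi = np.angle(analytic)                    # wrapped phase: jumps at +/- pi

Working with exp(i*phi) instead of phi itself is the device that removes the phase wrapping problem from the objective function.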
NASA Technical Reports Server (NTRS)
Bless, Robert R.
1991-01-01
A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.
Research on the application in disaster reduction for using cloud computing technology
NASA Astrophysics Data System (ADS)
Tao, Liang; Fan, Yida; Wang, Xingling
Cloud computing technology has recently seen rapid application in many domains, promoting the progress of informatization in each. Based on an analysis of application requirements in disaster reduction, and drawing on the characteristics of cloud computing technology, we present research on the application of cloud computing technology to disaster reduction. First of all, we give the architecture of the disaster reduction cloud, which consists of disaster reduction infrastructure as a service (IaaS), a disaster reduction cloud application platform as a service (PaaS), and disaster reduction software as a service (SaaS). Secondly, we discuss the standard system for disaster reduction in five aspects. Thirdly, we describe the security system of the disaster reduction cloud. Finally, we conclude that the use of cloud computing technology will help to solve problems in disaster reduction and promote the development of the field.
Finite element modeling of electromagnetic fields and waves using NASTRAN
NASA Technical Reports Server (NTRS)
Moyer, E. Thomas, Jr.; Schroeder, Erwin
1989-01-01
The various formulations of Maxwell's equations are reviewed with emphasis on those formulations which most readily form analogies with Navier's equations. Analogies involving scalar and vector potentials and electric and magnetic field components are presented. Formulations allowing for media with dielectric and conducting properties are emphasized. It is demonstrated that many problems in electromagnetism can be solved using the NASTRAN finite element code. Several fundamental problems involving time harmonic solutions of Maxwell's equations with known analytic solutions are solved using NASTRAN to demonstrate convergence and mesh requirements. Mesh requirements are studied as a function of frequency, conductivity, and dielectric properties. Applications in both low frequency and high frequency are highlighted. The low frequency problems demonstrate the ability to solve problems involving media inhomogeneity and unbounded domains. The high frequency applications demonstrate the ability to handle problems with large boundary to wavelength ratios.
NASA Astrophysics Data System (ADS)
Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Polivanov, M. C.
1992-11-01
The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. We demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schrödinger equation as an example, we show that all types of solutions of the linear problems, as well as spectral data known in the literature, are given as specific values of this unique function — the resolvent function. A new form of the inverse problem is formulated.
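For orientation, the classical Hilbert (resolvent) identity being generalized reads, for a linear operator $L$ with resolvent $R(\lambda) = (L - \lambda)^{-1}$:

```latex
R(\lambda) - R(\mu) = (\lambda - \mu)\, R(\lambda)\, R(\mu).
```

The identity follows by factoring $R(\lambda) - R(\mu) = R(\lambda)\bigl[(L-\mu) - (L-\lambda)\bigr]R(\mu)$.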
Solutions for Dynamic Variants of Eshelby's Inclusion Problem
NASA Astrophysics Data System (ADS)
Michelitsch, Thomas M.; Askes, Harm; Wang, Jizeng; Levin, Valery M.
The dynamic variant of Eshelby's inclusion problem plays a crucial role in many areas of mechanics and theoretical physics. Because of its mathematical complexity, dynamic variants of the inclusion problems so far are only little touched. In this paper we derive solutions for dynamic variants of the Eshelby inclusion problem for arbitrary scalar source densities of the eigenstrain. We study a series of examples of Eshelby problems which are of interest for applications in materials sciences, such as for instance cubic and prismatic inclusions. The method which covers both the time and frequency domain is especially useful for dynamically transforming inclusions of any shape.
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated ECG complexes of mother and fetus, obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to the cepstral domain, are first obtained; this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation back to the time domain result in fetal complex recovery. However, three problems have been identified with second-order-based cepstral techniques that this paper seeks to rectify: (1) errors resulting from the phase unwrapping algorithms, leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussian to non-Gaussian, due to the highly nonlinear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) owing to the problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude-thresholding routines in the complex cepstral domain (i.e., the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order-based high-resolution differential cepstrum technique results in recovery of delays of the order of 120 milliseconds.
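A compact sketch of the second-order complex-cepstrum computation and the echo-delay recovery it enables; the signal model and numbers are illustrative, not fetal ECG data:

```python
import numpy as np

def complex_cepstrum(x, eps=1e-12):
    """Complex cepstrum via FFT; the np.unwrap call is exactly where the
    phase-unwrapping errors discussed above can enter."""
    X = np.fft.fft(x)
    log_mag = np.log(np.abs(X) + eps)
    phase = np.unwrap(np.angle(X))   # error-prone step for noisy signals
    return np.real(np.fft.ifft(log_mag + 1j * phase))

# Toy echo model: a 'maternal' pulse plus a delayed, weaker 'fetal' pulse.
n = 256
pulse = np.exp(-0.5 * ((np.arange(n) - 40) / 3.0) ** 2)
delay = 60                           # samples; cepstral peak appears here
x = pulse + 0.4 * np.roll(pulse, delay)

ceps = complex_cepstrum(x)
print("strongest cepstral peak beyond the origin:",
      int(np.argmax(np.abs(ceps[10:n // 2])) + 10))   # ~= delay
```

The np.unwrap step is precisely where problem (1) above originates; the third-order differential-cepstrum route discussed in the abstract is one way to avoid it.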
Specification, Design, and Analysis of Advanced HUMS Architectures
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2004-01-01
During the two-year project period, we have worked on several aspects of domain-specific architectures for HUMS. In particular, we looked at using a scenario-based approach for the design and designed a language for describing such architectures. The language is now being used in all aspects of our HUMS design. In particular, we have made contributions in the following areas. 1) We have employed scenarios in the development of HUMS in three main areas. They are: (a) To improve reusability by using scenarios as a library indexing tool and as a domain analysis tool; (b) To improve maintainability by recording design rationales from two perspectives - problem domain and solution domain; (c) To evaluate the software architecture. 2) We have defined a new architectural language called HADL or HUMS Architectural Definition Language. It is a customized version of xArch/xADL. It is based on XML and, hence, is easily portable from domain to domain, application to application, and machine to machine. Specifications written in HADL can be easily read and parsed using the currently available XML parsers. Thus, there is no need to develop a plethora of software to support HADL. 3) We have developed an automated design process that involves two main techniques: (a) Selection of solutions from a large space of designs; (b) Synthesis of designs. However, the automation process is not an absolute Artificial Intelligence (AI) approach, though it uses a knowledge-based system that epitomizes a specific HUMS domain. The process uses a database of solutions as an aid to solve the problems rather than creating a new design in the literal sense. Since searching is adopted as the main technique, the challenges involved are: (a) To minimize the effort in searching the database where a very large number of possibilities exist; (b) To develop representations that could conveniently allow us to depict design knowledge evolved over many years; (c) To capture the required information that aids the automation process.
Learning Potential and Cognitive Modifiability
ERIC Educational Resources Information Center
Kozulin, Alex
2011-01-01
The relationship between thinking and learning constitutes one of the fundamental problems of cognitive psychology. Though there is an obvious overlap between the domains of thinking and learning, it seems more productive to consider learning as being predominantly acquisition while considering thinking as the application of the existent concepts…
Scientific Culture and School Culture: Epistemic and Procedural Components.
ERIC Educational Resources Information Center
Jimenez-Aleixandre, Maria Pilar; Diaz de Bustamante, Joaquin; Duschl, Richard A.
This paper discusses the elaboration and application of "scientific culture" categories to the analysis of students' discourse while solving problems in inquiry contexts. Scientific culture means the particular domain culture of science, the culture of science practitioners. The categories proposed include both epistemic operations and…
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
Computational intelligence in earth sciences and environmental applications: issues and challenges.
Cherkassky, V; Krasnopolsky, V; Solomatine, D P; Valdes, J
2006-03-01
This paper introduces a generic theoretical framework for predictive learning, and relates it to data-driven and learning applications in earth and environmental sciences. The issues of data quality, selection of the error function, incorporation of the predictive learning methods into the existing modeling frameworks, expert knowledge, model uncertainty, and other application-domain specific problems are discussed. A brief overview of the papers in the Special Issue is provided, followed by discussion of open issues and directions for future research.
NASA Astrophysics Data System (ADS)
Ortleb, Sigrun; Seidel, Christian
2017-07-01
In this second symposium at the limits of experimental and numerical methods, recent research is presented on practically relevant problems. Presentations discuss experimental investigation as well as numerical methods with a strong focus on application. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations as well as the application of exponential integrators in a domain-based IMEX setting.
On the inflation of poro-hyperelastic annuli
NASA Astrophysics Data System (ADS)
Selvadurai, A. P. S.; Suvorov, A. P.
2017-10-01
The paper presents the radially and spherically symmetric problems associated with the inflation of poro-hyperelastic regions. The theory of poro-hyperelasticity is a convenient framework for modelling the mechanical behaviour of highly deformable materials in which the pore space is saturated with fluids. Including the coupled mechanical responses of both the hyperelastic porous skeleton and the fluid is regarded as an important consideration for the application of the results, particularly to soft tissues encountered in biomechanical applications. The analytical solutions for radially and spherically symmetric problems involving annular domains are used to benchmark the accuracy of a standard computational approach. The paper also generates results applicable to the hyperelastic solutions when coupling is eliminated through the presence of a highly permeable pore structure.
NASA Astrophysics Data System (ADS)
Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel
2014-05-01
A large variety of geodynamical problems can be viewed as solid/fluid interaction problems coupling two bodies with different physics. In particular, the lithosphere/asthenosphere mechanical interaction in subduction zones belongs to this kind of problem, where the solid lithosphere is embedded in the asthenospheric viscous fluid. In many fields (industry, civil engineering, etc.) in which the deformations of solid and fluid are "small", numerical modelers consider the exact discretization of both domains and fit the shape of the interface between the two domains as well as possible, solving the discretized physical problems by the Finite Element Method (FEM). However, in the context of subduction, the lithosphere undergoes large deformation and can evolve into a complex geometry, leading in turn to important deformation of the surrounding asthenosphere. To avoid the precise meshing of complex geometries, numerical modelers have developed non-matching interface methods called Fictitious Domain Methods (FDM). The main idea of these methods is to extend the initial problem to a bigger (and simpler) domain. In our version of FDM, we determine the forces at the immersed solid boundary required to minimize (in the least-squares sense) the difference between fluid and solid velocities at this interface; a sketch of this matching condition is given below. This method is first-order accurate, and its stability depends on the ratio between the fluid background mesh size and the interface discretization. We present the formulation and provide benchmarks and examples showing the potential of the method: 1) a comparison with an analytical solution for viscous flow around a rigid body; 2) an experiment of a rigid sphere sinking in a viscous fluid (in two- and three-dimensional cases); 3) a comparison with an analog subduction experiment. Another presentation describes the geodynamical application of this method to Andean subduction dynamics, studying cyclic slab folding on the 660 km discontinuity and its relationship with flat subduction.
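In symbols, one way to state the interface condition described above (our notation, not necessarily the authors'): with Γ the immersed solid boundary, f the interface forces applied with opposite signs to fluid and solid, and u_f, u_s the resulting velocities on Γ, the method seeks

```latex
\mathbf{f}^{\star} = \arg\min_{\mathbf{f}} \; \frac{1}{2}\int_{\Gamma} \bigl\| \mathbf{u}_{f}(\mathbf{f}) - \mathbf{u}_{s}(\mathbf{f}) \bigr\|^{2} \, d\Gamma .
```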
NASA Technical Reports Server (NTRS)
Antar, B. N.
1976-01-01
A numerical technique is presented for locating the eigenvalues of two point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane could be scanned and the eigenvalues within it, if any, located. For an application of the method, the eigenvalues of the Orr-Sommerfeld equation of the plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
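A generic sketch of such a scan, assuming a scalar characteristic function f(c) stands in for the dispersion relation; this is the scan-then-refine pattern the abstract describes, not the paper's algorithm:

```python
import numpy as np

def scan_complex_roots(f, re_range, im_range, n=60, tol=1e-10):
    """Scan a rectangle of the complex plane for roots of f(c): locate
    grid-local minima of |f|, then polish each with a complex secant
    iteration."""
    res = np.linspace(re_range[0], re_range[1], n)
    ims = np.linspace(im_range[0], im_range[1], n)
    vals = np.array([[abs(f(r + 1j * m)) for r in res] for m in ims])
    roots = []
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if vals[i, j] == vals[i - 1:i + 2, j - 1:j + 2].min():
                c0 = res[j] + 1j * ims[i]
                c1 = c0 + 1e-4
                for _ in range(50):              # secant refinement
                    f0, f1 = f(c0), f(c1)
                    if f1 == f0:
                        break
                    c0, c1 = c1, c1 - f1 * (c1 - c0) / (f1 - f0)
                    if abs(c1 - c0) < tol:
                        break
                if abs(f(c1)) < 1e-8 and all(abs(c1 - r) > 1e-6 for r in roots):
                    roots.append(c1)
    return roots

# Toy check: the roots of c^2 + 1 inside [-2,2]^2 are +i and -i.
print(scan_complex_roots(lambda c: c * c + 1, (-2.0, 2.0), (-2.0, 2.0)))
```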
NASA Astrophysics Data System (ADS)
Aji Hapsoro, Cahyo; Purqon, Acep; Srigutomo, Wahyu
2017-07-01
2-D Time-Domain Electromagnetic (TDEM) modeling has been successfully conducted to illustrate the electric field distribution beneath the Earth's surface. The electric field, compared with the magnetic field, is used to analyze resistivity, one of the physical properties most important for delineating the reservoir potential of geothermal systems, a source of renewable energy. We used the TDEM method in this modeling because it can solve EM field interaction problems with complex geometry and analyze transient problems. The method models the electric and magnetic fields as functions of time combined with distance and depth. The result of this modeling is the electric field intensity, which describes the structure of the Earth's subsurface; the derived subsurface resistivity values can be used to assess the reservoir potential of geothermal systems.
Geometric Models for Collaborative Search and Filtering
ERIC Educational Resources Information Center
Bitton, Ephrat
2011-01-01
This dissertation explores the use of geometric and graphical models for a variety of information search and filtering applications. These models serve to provide an intuitive understanding of the problem domains and as well as computational efficiencies to our solution approaches. We begin by considering a search and rescue scenario where both…
Automated Network Anomaly Detection with Learning, Control and Mitigation
ERIC Educational Resources Information Center
Ippoliti, Dennis
2014-01-01
Anomaly detection is a challenging problem that has been researched within a variety of application domains. In network intrusion detection, anomaly based techniques are particularly attractive because of their ability to identify previously unknown attacks without the need to be programmed with the specific signatures of every possible attack.…
Projectile Motion without Calculus
ERIC Educational Resources Information Center
Rizcallah, Joseph A.
2018-01-01
Projectile motion is a constant theme in introductory-physics courses. It is often used to illustrate the application of differential and integral calculus. While most of the problems used for this purpose, such as maximizing the range, are kept at a fairly elementary level, some, such as determining the safe domain, involve not so elementary…
BiOSS: A system for biomedical ontology selection.
Martínez-Romero, Marcos; Vázquez-Naya, José M; Pereira, Javier; Pazos, Alejandro
2014-04-01
In biomedical informatics, ontologies are considered a key technology for annotating, retrieving and sharing the huge volume of publicly available data. Due to the increasing amount, complexity and variety of existing biomedical ontologies, choosing the ones to be used in a semantic annotation problem or to design a specific application is a difficult task. As a consequence, the design of approaches and tools to facilitate the selection of biomedical ontologies is becoming a priority. In this paper we present BiOSS, a novel system for the selection of biomedical ontologies. BiOSS evaluates the adequacy of an ontology to a given domain according to three different criteria: (1) the extent to which the ontology covers the domain; (2) the semantic richness of the ontology in the domain; (3) the popularity of the ontology in the biomedical community. BiOSS has been applied to five representative ontology-selection problems. It has also been compared to existing methods and tools. The results are promising and show the usefulness of BiOSS for solving real-world ontology selection problems. BiOSS is openly available both as a web tool and a web service. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
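As a concrete instance of the operator decompositions described, the one-level additive Schwarz preconditioner built from restrictions $R_i$ onto $p$ overlapping subdomains reads

```latex
M^{-1}_{\mathrm{AS}} = \sum_{i=1}^{p} R_{i}^{T} A_{i}^{-1} R_{i}, \qquad A_{i} = R_{i} A R_{i}^{T}.
```

Adding a coarse-grid term $R_0^T A_0^{-1} R_0$ gives the two-level variant; that coarse component reflects the multilevel inheritance mentioned above, while applying $M^{-1}_{\mathrm{AS}}$ inside a conjugate-gradient or GMRES iteration reflects the Krylov inheritance.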
DOORS to the semantic web and grid with a PORTAL for biomedical computing.
Taswell, Carl
2008-03-01
The semantic web remains in the early stages of development. It has not yet achieved the goals envisioned by its founders as a pervasive web of distributed knowledge and intelligence. Success will be attained when a dynamic synergism can be created between people and a sufficient number of infrastructure systems and tools for the semantic web in analogy with those for the original web. The domain name system (DNS), web browsers, and the benefits of publishing web pages motivated many people to register domain names and publish web sites on the original web. An analogous resource label system, semantic search applications, and the benefits of collaborative semantic networks will motivate people to register resource labels and publish resource descriptions on the semantic web. The Domain Ontology Oriented Resource System (DOORS) and Problem Oriented Registry of Tags and Labels (PORTAL) are proposed as infrastructure systems for resource metadata within a paradigm that can serve as a bridge between the original web and the semantic web. The Internet Registry Information Service (IRIS) registers domain names while DNS publishes domain addresses with mapping of names to addresses for the original web. Analogously, PORTAL registers resource labels and tags while DOORS publishes resource locations and descriptions with mapping of labels to locations for the semantic web. BioPORT is proposed as a prototype PORTAL registry specific for the problem domain of biomedical computing.
NASA Astrophysics Data System (ADS)
Tanaka, Yoshiyuki; Klemann, Volker; Okuno, Jun'ichi
2009-09-01
Normal mode approaches for calculating viscoelastic responses of self-gravitating and compressible spherical earth models have an intrinsic problem of determining the roots of the secular equation and the associated residues in the Laplace domain. To bypass this problem, a method based on numerical inverse Laplace integration was developed by Tanaka et al. (2006, 2007) for computations of viscoelastic deformation caused by an internal dislocation. The advantage of this approach is that the root-finding problem is avoided without imposing additional constraints on the governing equations and earth models. In this study, we apply the same algorithm to computations of viscoelastic responses to a surface load and show that the results obtained by this approach agree well with those obtained by a time-domain approach that does not need determinations of the normal modes in the Laplace domain. Using the elastic earth model PREM and a convex viscosity profile, we calculate viscoelastic load Love numbers (h, l, k) for compressible and incompressible models. Comparisons between the results show that effects due to compressibility are consistent with results obtained by previous studies and that the rate differences between the two models total 10-40%. This will serve as an independent method to confirm results obtained by time-domain approaches and will usefully increase the reliability when modeling postglacial rebound.
The Role of Ontologies in Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.
2004-01-01
Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A backend then compiles this further down into a concrete target programming language of choice. A core engine applies schemas on the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked.
Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Az = b.
A PC based time domain reflectometer for space station cable fault isolation
NASA Technical Reports Server (NTRS)
Pham, Michael; McClean, Marty; Hossain, Sabbir; Vo, Peter; Kouns, Ken
1994-01-01
Significant problems are faced by astronauts on orbit in the Space Station when trying to locate electrical faults in multi-segment avionics and communication cables. These problems necessitate the development of an automated portable device that will detect and locate cable faults using the pulse-echo technique known as Time Domain Reflectometry. A breadboard time domain reflectometer (TDR) circuit board was designed and developed at the NASA-JSC. The TDR board works in conjunction with a GRiD lap-top computer to automate the fault detection and isolation process. A software program was written to automatically display the nature and location of any possible faults. The breadboard system can isolate open circuit and short circuit faults within two feet in a typical space station cable configuration. Follow-on efforts planned for 1994 will produce a compact, portable prototype Space Station TDR capable of automated switching in multi-conductor cables for high fidelity evaluation. This device has many possible commercial applications, including commercial and military aircraft avionics, cable TV, telephone, communication, information and computer network systems. This paper describes the principle of time domain reflectometry and the methodology for on-orbit avionics utility distribution system repair, utilizing the newly developed device called the Space Station Time Domain Reflectometer (SSTDR).
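The underlying pulse-echo relation is simple enough to state in a few lines; the velocity factor below is an illustrative value, not the space station cable's:

```python
# Pulse-echo fault location, the relation a TDR instrument implements:
# distance = (propagation speed along the cable) * (round-trip time) / 2.
C = 299_792_458.0          # speed of light, m/s

def fault_distance_m(round_trip_s, velocity_factor=0.66):
    """velocity_factor is cable-specific (illustrative value here);
    typical coax lies roughly between 0.6 and 0.9."""
    return velocity_factor * C * round_trip_s / 2.0

# A 100 ns echo on a 0.66-VF cable puts the fault near 9.9 m.
print(f"{fault_distance_m(100e-9):.2f} m")
```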
NASA Astrophysics Data System (ADS)
Tran, T.
With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
The application of domain-driven design in NMS
NASA Astrophysics Data System (ADS)
Zhang, Jinsong; Chen, Yan; Qin, Shengjun
2011-12-01
In the traditional data-model-driven design approach, the system analysis and design phases are often separated, so that requirements information cannot be expressed explicitly. The approach also tends to lead developers toward process-oriented programming, leaving the code between modules and between layers disordered, and thus making it hard to meet system scalability requirements. This paper proposes a software hierarchy based on a rich domain model, following domain-driven design, named FHRDM; the WebWork + Spring + Hibernate (WSH) framework is then adopted. Domain-driven design aims to construct a domain model that meets both the demands of the field in which the software operates and the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as large business volumes, difficult requirements elicitation, high development costs, and long development cycles, can be resolved successfully.
Imaging of surface spin textures on bulk crystals by scanning electron microscopy
NASA Astrophysics Data System (ADS)
Akamine, Hiroshi; Okumura, So; Farjami, Sahar; Murakami, Yasukazu; Nishida, Minoru
2016-11-01
Direct observation of magnetic microstructures is vital for advancing spintronics and other technologies. Here we report a method for imaging surface domain structures on bulk samples by scanning electron microscopy (SEM). Complex magnetic domains, referred to as the maze state in CoPt/FePt alloys, were observed at a spatial resolution of less than 100 nm by using an in-lens annular detector. The method allows for imaging almost all the domain walls in the mazy structure, whereas the visualisation of the domain walls with the classical SEM method was limited. Our method provides a simple way to analyse surface domain structures in the bulk state that can be used in combination with SEM functions such as orientation or composition analysis. Thus, the method extends applications of SEM-based magnetic imaging, and is promising for resolving various problems at the forefront of fields including physics, magnetics, materials science, engineering, and chemistry.
2006-08-01
Nikolas Avouris. Evaluation of classifiers for an uneven class distribution problem. Applied Artificial Intelligence, pages 1-24, 2006. Draft manuscript. Data are classified by a hybrid artificial neural network in order to evaluate the classification capabilities of the baseline GRLVQ and the improved GRLVQI; the results are compared against a baseline classification of the 23-class problem with a hybrid artificial neural network (ANN).
An Abstract Plan Preparation Language
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Munoz, Cesar A.
2006-01-01
This paper presents a new planning language that is more abstract than most existing planning languages such as the Planning Domain Definition Language (PDDL) or the New Domain Description Language (NDDL). The goal of this language is to simplify the formal analysis and specification of planning problems that are intended for safety-critical applications such as power management or automated rendezvous in future manned spacecraft. The new language has been named the Abstract Plan Preparation Language (APPL). A translator from APPL to NDDL has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats Project (SAVH) sponsored by the Explorations Technology Development Program, which is seeking to mature autonomy technology for application to the new Crew Exploration Vehicle (CEV) that will replace the Space Shuttle.
High-Order Methods for Incompressible Fluid Flow
NASA Astrophysics Data System (ADS)
Deville, M. O.; Fischer, P. F.; Mund, E. H.
2002-08-01
High-order numerical methods provide an efficient approach to simulating many physical problems. This book considers the range of mathematical, engineering, and computer science topics that form the foundation of high-order numerical methods for the simulation of incompressible fluid flows in complex domains. Introductory chapters present high-order spatial and temporal discretizations for one-dimensional problems. These are extended to multiple space dimensions with a detailed discussion of tensor-product forms, multi-domain methods, and preconditioners for iterative solution techniques. Numerous discretizations of the steady and unsteady Stokes and Navier-Stokes equations are presented, with particular attention given to the enforcement of incompressibility. Advanced discretizations, implementation issues, and parallel and vector performance are considered in the closing sections. Numerous examples are provided throughout to illustrate the capabilities of high-order methods in actual applications.
Pole-zero form fractional model identification in frequency domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansouri, R.; Djamah, T.; Djennoune, S.
2009-03-05
This paper deals with system identification in the frequency domain using non-integer-order models given in pole-zero form. The usual identification techniques cannot be used in this case because the non-integer orders of differentiation make the problem strongly nonlinear. A general identification method based on the Levenberg-Marquardt algorithm is developed, allowing estimation of the (2n+2m+1) parameters of the model. Its application to identifying the "skin effect" in the modeling of a squirrel-cage induction machine is then presented.
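A sketch of a frequency-domain Levenberg-Marquardt fit for a one-pole fractional model; the model form and parameter names are illustrative stand-ins, not the paper's (2n+2m+1)-parameter pole-zero structure:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical fractional model G(jw) = K / (1 + (jw/wc)^alpha).
def model(params, w):
    K, wc, alpha = params
    return K / (1.0 + (1j * w / wc) ** alpha)

def residuals(params, w, data):
    r = model(params, w) - data              # complex residual
    return np.concatenate([r.real, r.imag])  # stacked for a real LM solver

w = np.logspace(0, 4, 80)
true = model([2.0, 150.0, 0.7], w)
data = true + 0.01 * (np.random.randn(w.size) + 1j * np.random.randn(w.size))

# method='lm' selects a Levenberg-Marquardt fit, as in the paper.
fit = least_squares(residuals, x0=[1.0, 100.0, 0.5], args=(w, data), method="lm")
print(fit.x)   # should recover approximately [2.0, 150.0, 0.7]
```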
NASA Astrophysics Data System (ADS)
Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra
2014-11-01
The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
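A textbook example of the inner/outer matching the abstract refers to (not one of the paper's own test problems): for $\varepsilon y'' + y' + y = 0$ with $y(0)=0$, $y(1)=1$, the outer and inner solutions and their composite expansion are

```latex
y_{\mathrm{out}}(x) = e^{1-x}, \qquad y_{\mathrm{in}}(x) = e\,\bigl(1 - e^{-x/\varepsilon}\bigr), \qquad y(x) \approx e^{1-x} - e^{1 - x/\varepsilon}.
```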
Lower Sensitivity to Happy and Angry Facial Emotions in Young Adults with Psychiatric Problems
Vrijen, Charlotte; Hartman, Catharina A.; Lodder, Gerine M. A.; Verhagen, Maaike; de Jonge, Peter; Oldehinkel, Albertine J.
2016-01-01
Many psychiatric problem domains have been associated with emotion-specific biases or general deficiencies in facial emotion identification. However, both within and between psychiatric problem domains, large variability exists in the types of emotion identification problems that were reported. Moreover, since the domain-specificity of the findings was often not addressed, it remains unclear whether patterns found for specific problem domains can be better explained by co-occurrence of other psychiatric problems or by more generic characteristics of psychopathology, for example, problem severity. In this study, we aimed to investigate associations between emotion identification biases and five psychiatric problem domains, and to determine the domain-specificity of these biases. Data were collected as part of the ‘No Fun No Glory’ study and involved 2,577 young adults. The study participants completed a dynamic facial emotion identification task involving happy, sad, angry, and fearful faces, and filled in the Adult Self-Report Questionnaire, of which we used the scales depressive problems, anxiety problems, avoidance problems, Attention-Deficit Hyperactivity Disorder (ADHD) problems and antisocial problems. Our results suggest that participants with antisocial problems were significantly less sensitive to happy facial emotions, participants with ADHD problems were less sensitive to angry emotions, and participants with avoidance problems were less sensitive to both angry and happy emotions. These effects could not be fully explained by co-occurring psychiatric problems. Whereas this seems to indicate domain-specificity, inspection of the overall pattern of effect sizes regardless of statistical significance reveals generic patterns as well, in that for all psychiatric problem domains the effect sizes for happy and angry emotions were larger than the effect sizes for sad and fearful emotions. As happy and angry emotions are strongly associated with approach and avoidance mechanisms in social interaction, these mechanisms may hold the key to understanding the associations between facial emotion identification and a wide range of psychiatric problems. PMID:27920735
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antennas current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
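A minimal sketch of the per-cell nonlinear solve, using a Thevenin-style relation between the open-circuit voltage and a diode load; component values are illustrative and this is a lumped stand-in, not the paper's full 3-D FDTD update:

```python
import numpy as np
from scipy.optimize import brentq

# At the load cell, each update step reduces to a relation of the form
# V = Voc - Z0 * I, so a scalar nonlinear equation is solved per time step.
Z0, Is, Vt = 50.0, 1e-12, 0.02585    # source impedance, diode parameters

def diode_current(v):
    return Is * (np.exp(v / Vt) - 1.0)

def load_voltage(voc):
    # Root of f(V) = Voc - Z0 * Id(V) - V, bracketed and solved each step.
    f = lambda v: voc - Z0 * diode_current(v) - v
    return brentq(f, -1.0, 1.0)

# Drive with a pulsed open-circuit voltage and solve the nonlinear cell
# equation at every time step, as the FDTD scheme does at the load cell.
t = np.linspace(0.0, 5e-9, 200)
voc = 2.0 * np.exp(-((t - 2e-9) / 0.5e-9) ** 2)
v_load = np.array([load_voltage(v) for v in voc])
print(v_load.max())
```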
Possibilities of the particle finite element method for fluid-soil-structure interaction problems
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Idelsohn, Sergio R.; Salazar, Fernando; Suárez, Benjamín
2011-09-01
We present some developments in the particle finite element method (PFEM) for the analysis of complex coupled problems in mechanics involving fluid-soil-structure interaction (FSSI). The PFEM uses an updated Lagrangian description to model the motion of nodes (particles) in both the fluid and the solid domains (the latter including soil/rock and structures). A mesh connects the particles (nodes), defining the discretized domain where the governing equations for each of the constituent materials are solved as in the standard FEM. Stabilization for dealing with an incompressible continuum is introduced via the finite calculus method. An incremental iterative scheme for the solution of the nonlinear transient coupled FSSI problem is described, as is the procedure for modeling frictional contact conditions and material erosion at fluid-solid and solid-solid interfaces. We present several examples of the application of the PFEM to FSSI problems, such as the motion of rocks by water streams, the erosion of a river bed adjacent to a bridge foundation, the stability of breakwaters and structures under sea waves, and the study of landslides.
NASA Technical Reports Server (NTRS)
Richards, Stephen F.
1991-01-01
Although computerization has realized significant gains in many operational areas, one area, scheduling, has enjoyed few benefits from automation. The traditional methods of industrial engineering and operations research have not proven robust enough to handle the complexities associated with the scheduling of realistic problems. To address this need, NASA has developed the computer-aided scheduling system (COMPASS), a sophisticated, interactive scheduling tool that is in widespread use within NASA and the contractor community. However, COMPASS provides no explicit support for the large class of problems in which several people, perhaps at various locations, build separate schedules that share a common pool of resources. This research examines the issue of distributed scheduling, as applied to application domains characterized by the partial ordering of tasks, limited resources, and time restrictions. The focus of this research is on identifying issues related to distributed scheduling, locating applicable problem domains within NASA, and suggesting areas for ongoing research. The issues that this research identifies are goals, rescheduling requirements, database support, the need for communication and coordination among individual schedulers, the potential for expert system support for scheduling, and the possibility of integrating artificially intelligent schedulers into a network of human schedulers.
Computer-Based Information System Cultivated To Support a College of Education.
ERIC Educational Resources Information Center
Smith, Gary R.
This brief paper discusses four of the computer applications explored at Wayne State University over the past decade to provide alternative solutions to problems commonly encountered in teacher education and in providing support for the classroom teacher. These studies examined only databases that are available in the public domain; obtained…
ERIC Educational Resources Information Center
FATTU, N. A.
A series of studies was undertaken, directed toward explorations of interrelationships among media, processes (instructional prerequisites), content and aptitude variables, and achievements. Emphasis throughout was on (1) the cognitive domain and (2) problems involved in application of knowledge to a practical teaching situation. "The…
Teaching Case: Adapting the Access Northwind Database to Support a Database Course
ERIC Educational Resources Information Center
Dyer, John N.; Rogers, Camille
2015-01-01
A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…
Expert system prototype developments for NASA-KSC business and engineering applications
NASA Technical Reports Server (NTRS)
Ragusa, James M.; Gonzalez, Avelino J.
1988-01-01
Prototype expert systems developed for a variety of NASA projects in the business/management and engineering domains are discussed. Business-related problems addressed include an assistant for simulating launch vehicle processing, a plan advisor for the acquisition of automated data processing equipment, and an expert system for the identification of customer requirements. Engineering problems treated include an expert system for detecting potential ignition sources in LOX and gaseous-oxygen transportation systems and an expert system for hazardous-gas detection.
NASA Astrophysics Data System (ADS)
Mutabaruka, Patrick; Kamrin, Ken
2018-04-01
A numerical method for particle-laden fluids interacting with a deformable solid domain and mobile rigid parts is proposed and implemented in a full engineering system. The fluid domain is modeled with a lattice Boltzmann representation, the particles and rigid parts are modeled with a discrete element representation, and the deformable solid domain is modeled using a Lagrangian mesh. Since each of these methods is separately a mature tool, the main issue of this work is to develop coupling and model-reduction approaches that efficiently simulate coupled problems of this nature, as arise in various geological and engineering applications. The lattice Boltzmann method incorporates a large eddy simulation technique using the Smagorinsky turbulence model. The discrete element method incorporates spherical and polyhedral particles for stiff contact interactions. A neo-Hookean hyperelastic model is used for the deformable solid. We provide a detailed description of how to couple the three solvers within a unified algorithm. The technique we propose for rubber modeling/coupling exploits a simplification that avoids having to solve a finite-element problem at each time step. We also developed a technique to reduce the domain size of the full system by replacing certain zones with quasi-analytic solutions, which act as effective boundary conditions for the lattice Boltzmann method. The major ingredients of the routine are separately validated. To demonstrate the coupled method in full, we simulate slurry flows in two kinds of piston valve geometries. The dynamics of the valve and slurry are studied and reported over a large range of input parameters.
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study indicating that genetic algorithms are well suited to the vehicle routing problem.
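A minimal permutation-encoded genetic algorithm for a single-vehicle (TSP-like) routing instance, with order crossover and swap mutation; capacity and multi-vehicle constraints are omitted for brevity, so this is a sketch of the approach rather than a full VRP solver:

```python
import random
random.seed(1)

CITIES = [(random.random(), random.random()) for _ in range(20)]

def route_length(perm):
    pts = [CITIES[i] for i in perm]
    return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in zip(pts, pts[1:] + pts[:1]))   # closed tour

def order_crossover(p1, p2):
    # OX: copy a slice from parent 1, fill the rest in parent 2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    filler = [g for g in p2 if g not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(100)]
for gen in range(300):
    pop.sort(key=route_length)       # elitist selection
    elite = pop[:20]
    children = [mutate(order_crossover(random.choice(elite), random.choice(elite)))
                for _ in range(80)]
    pop = elite + children
print("best route length:", round(route_length(pop[0]), 3))
```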
Exploring quantum computing application to satellite data assimilation
NASA Astrophysics Data System (ADS)
Cheung, S.; Zhang, S. Q.
2015-12-01
This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves large numbers of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case as a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will be tested by solving QUBO instances defined on the Chimera graphs of the quantum computer.
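A toy illustration of the QUBO recasting step, folding an equality constraint into a quadratic penalty and brute-forcing the result in place of the annealer; sizes, values, and the constraint are illustrative only:

```python
import itertools
import numpy as np

# Minimize x^T C x over binary x subject to sum(x) == k, by folding the
# constraint in as the penalty P * (sum(x) - k)^2.
n, k, P = 6, 2, 10.0
rng = np.random.default_rng(3)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2.0                         # symmetric quadratic cost

# (sum x - k)^2 = sum_i (1-2k) x_i + 2 sum_{i<j} x_i x_j + k^2, using
# x_i^2 = x_i for binaries; the constant k^2 can be dropped.
Q = C.copy()
Q += P * (np.ones((n, n)) - np.eye(n))      # each i<j pair picks up 2P
Q[np.diag_indices(n)] += P * (1.0 - 2.0 * k)

# Brute force over 2^n states stands in for the quantum annealer here.
best = min((np.array(x) @ Q @ np.array(x), x)
           for x in itertools.product([0, 1], repeat=n))
print("energy:", round(float(best[0]), 3), "state:", best[1], "ones:", sum(best[1]))
```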
A Simple Label Switching Algorithm for Semisupervised Structural SVMs.
Balamurugan, P; Shevade, Shirish; Sundararajan, S
2015-10-01
In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and avoiding poor local minima, which are not very useful. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized. They include both new developments and results, as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum-modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum-modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points that guarantee the existence of minima, and, finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
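The core idea can be compressed into a short sketch, assuming a user-supplied matrix function M(z); the grid scan, local-minimum test, and choice of bounded minimizer below are illustrative stand-ins for CCOMP's actual sub-algorithms.

    import numpy as np
    from scipy.optimize import minimize

    def min_mod_eig(M, z):
        # Minimum-modulus eigenvalue of the system matrix at complex point z.
        return np.min(np.abs(np.linalg.eigvals(M(z))))

    def find_roots(M, re, im, n=40, tol=1e-8):
        # Scan a rectangle [re] x [im] of the complex plane, locate grid
        # points that are local minima, then polish each with L-BFGS-B.
        xs, ys = np.linspace(*re, n), np.linspace(*im, n)
        vals = np.array([[min_mod_eig(M, x + 1j * y) for x in xs] for y in ys])
        roots = []
        for j in range(1, n - 1):
            for i in range(1, n - 1):
                if vals[j, i] == vals[j-1:j+2, i-1:i+2].min():
                    res = minimize(lambda p: min_mod_eig(M, p[0] + 1j * p[1]),
                                   [xs[i], ys[j]], method="L-BFGS-B",
                                   bounds=[re, im])
                    if res.fun < tol:
                        roots.append(res.x[0] + 1j * res.x[1])
        return roots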
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, S. R.; Hole, M. J.; Dewar, R. L.
2007-05-15
A generalized energy principle for finite-pressure, toroidal magnetohydrodynamic (MHD) equilibria in general three-dimensional configurations is proposed. The full set of ideal-MHD constraints is applied only on a discrete set of toroidal magnetic surfaces (invariant tori), which act as barriers against leakage of magnetic flux, helicity, and pressure through chaotic field-line transport. It is argued that a necessary condition for such invariant tori to exist is that they have fixed, irrational rotational transforms. In the toroidal domains bounded by these surfaces, full Taylor relaxation is assumed, thus leading to Beltrami fields ∇×B = λB, where λ is constant within each domain. Two distinct eigenvalue problems for λ arise in this formulation, depending on whether fluxes and helicity are fixed, or boundary rotational transforms. These are studied in cylindrical geometry and in a three-dimensional toroidal region of annular cross section. In the latter case, an application of a residue criterion is used to determine the threshold for connected chaos.
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
Mobile Autonomous Sensing Unit (MASU): A Framework That Supports Distributed Pervasive Data Sensing
Medina, Esunly; Lopez, David; Meseguer, Roc; Ochoa, Sergio F.; Royo, Dolors; Santos, Rodrigo
2016-01-01
Pervasive data sensing is a major issue that cuts across various research areas and application domains. It allows identifying people's behaviour and patterns without overwhelming the monitored persons. Although there are many pervasive data sensing applications, they are typically focused on addressing specific problems in a single application domain, making them difficult to generalize or reuse. On the other hand, the platforms for supporting pervasive data sensing impose restrictions on the devices and operational environments that make them unsuitable for monitoring loosely-coupled or fully distributed work. To help address this challenge, this paper presents a framework that supports distributed pervasive data sensing in a generic way. Developers can use this framework to facilitate the implementation of their applications, thus reducing the complexity and effort of such an activity. The framework was evaluated using simulations and also through an empirical test, and the obtained results indicate that it is useful for supporting such a sensing activity in loosely-coupled or fully distributed work scenarios. PMID:27409617
Temporal reasoning over clinical text: the state of the art
Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem
2013-01-01
Objectives: To provide an overview of the problem of temporal reasoning over clinical text and to summarize the state of the art in clinical natural language processing for this task. Target audience: This overview targets medical informatics researchers who are unfamiliar with the problems and applications of temporal reasoning over clinical text. Scope: We review the major applications of text-based temporal reasoning, describe the challenges for software systems handling temporal information in clinical text, and give an overview of the state of the art. Finally, we present some perspectives on future research directions that emerged during the recent community-wide challenge on text-based temporal reasoning in the clinical domain. PMID:23676245
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wanlin
2004-01-01
In this paper, we introduce JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint system with a runtime software environment and improving its applicability. We describe how JNET is applied to a real-world problem - NASA's Earth-science data processing domain, and demonstrate how JNET can be extended, without any knowledge of how it is implemented, to meet the growing demands of real-world applications.
Spectral element method for elastic and acoustic waves in frequency domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min
Numerical techniques in the time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow-band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, medium attenuation effects can be more accurately and conveniently modeled in the frequency domain. In this paper, we present a spectral-element method (SEM) in the frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of GLL basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, a domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in the frequency domain.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue; control systems that operate in networks are especially affected. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We have developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on the parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode with various degrees of detail.
Functional specifications for AI software tools for electric power applications. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faught, W.S.
1985-08-01
The principal barrier to the introduction of artificial intelligence (AI) technology to the electric power industry has not been a lack of interest or appropriate problems, for the industry abounds in both. Like most others, however, the electric power industry lacks the personnel - knowledge engineers - with the special combination of training and skills that AI programming demands. Conversely, very few AI specialists are conversant with electric power industry problems and applications. The recent availability of sophisticated AI programming environments is doing much to alleviate this shortage. These products provide a set of powerful and usable software tools that enable even non-AI scientists to rapidly develop AI applications. The purpose of this project was to develop functional specifications for programming tools that, when integrated with existing general-purpose knowledge engineering tools, would expedite the production of AI applications for the electric power industry. Twelve potential applications, representative of major problem domains within the nuclear power industry, were analyzed in order to identify those tools that would be of greatest value in application development. Eight tools were specified, including facilities for power plant modeling, data base inquiry, simulation, and machine-machine interface.
Solving Partial Differential Equations on Overlapping Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henshaw, W D
2008-09-22
We discuss the solution of partial differential equations (PDEs) on overlapping grids. This is a powerful technique for efficiently solving problems in complex, possibly moving, geometry. An overlapping grid consists of a set of structured grids that overlap and cover the computational domain. By allowing the grids to overlap, grids for complex geometries can be more easily constructed. The overlapping grid approach can also be used to remove coordinate singularities by, for example, covering a sphere with two or more patches. We describe the application of the overlapping grid approach to a variety of different problems. These include the solution of incompressible fluid flows with moving and deforming geometry, the solution of high-speed compressible reactive flow with rigid bodies using adaptive mesh refinement (AMR), and the solution of the time-domain Maxwell's equations of electromagnetism.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.
Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization
Marai, G. Elisabeta
2018-01-01
Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage (and its evaluation) of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550
Techniques for determining physical zones of influence
Hamann, Hendrik F; Lopez-Marrero, Vanessa
2013-11-26
Techniques for analyzing the flow of a quantity in a given domain are provided. In one aspect, a method for modeling regions in a domain affected by a flow of a quantity is provided which includes the following steps. A physical representation of the domain is provided. A grid that contains a plurality of grid-points in the domain is created. Sources are identified in the domain. Given a vector field that defines a direction of flow of the quantity within the domain, a boundary value problem is defined for each of one or more of the sources identified in the domain. Each of the boundary value problems is solved numerically to obtain a solution at each of the grid-points. The boundary value problem solutions are post-processed to model the regions affected by the flow of the quantity on the physical representation of the domain.
Human-computer interface incorporating personal and application domains
Anderson, Thomas G. [Albuquerque, NM]
2011-03-29
The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.
Human-computer interface incorporating personal and application domains
Anderson, Thomas G.
2004-04-20
The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.
1994-01-01
Applications are described of high-performance parallel computation for the analysis of complete jet engines, considered as a multi-discipline coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction, and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems, with emphasis on coupling phenomena; the effect of partitioning strategies, augmentation, and temporal solution procedures; sensitivity of the response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.
Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications, due to the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.
Vessel classification in overhead satellite imagery using weighted "bag of visual words"
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Rainey, Katie
2015-05-01
Vessel type classification in maritime imagery is a challenging problem with many military and surveillance applications. The ability to classify a vessel correctly varies significantly depending on its appearance, which in turn is affected by external factors such as lighting or weather conditions, viewing geometry, and sea state. The difficulty in classifying vessels also varies among ship types, as some types of vessels show more within-class variation than others. In our previous work, we showed that the "bag of visual words" (V-BoW) is an effective feature representation for this classification task in the maritime domain. The V-BoW feature representation is analogous to the "bag of words" (BoW) representation used in information retrieval (IR) applications in the text and natural language processing (NLP) domains. It has been shown in textual IR applications that the performance of the BoW feature representation can be improved significantly by applying appropriate term weighting, such as log term frequency or inverse document frequency. Given the close correspondence between textual BoW (T-BoW) and V-BoW feature representations, we propose to apply several well-known term weighting schemes from the text IR domain to the V-BoW feature representation to increase its ability to discriminate between ship types.
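A small sketch of the proposed weighting step, assuming raw codeword histograms as input; the exact weighting variants evaluated in the paper may differ from the log-tf/idf combination shown here.

    import numpy as np

    def weight_bovw(counts):
        # counts: (n_images, n_visual_words) raw codeword histograms.
        tf = np.log1p(counts)                        # sublinear term frequency
        df = np.count_nonzero(counts, axis=0)        # images containing each word
        idf = np.log(counts.shape[0] / (1.0 + df))   # smoothed inverse doc frequency
        w = tf * idf
        norms = np.linalg.norm(w, axis=1, keepdims=True)
        return w / np.maximum(norms, 1e-12)          # L2-normalize each image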
Adaptivity and smart algorithms for fluid-structure interaction
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1990-01-01
This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of algorithms designed to function on the grid. The application of these new techniques to several problem classes are addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.
Naturally selecting solutions: the use of genetic algorithms in bioinformatics.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2013-01-01
For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. One of the most common biologically inspired techniques is the genetic algorithm (GA), which takes the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics problems.
Decision-Making and Problem-Solving Approaches in Pharmacy Education
Martin, Lindsay C.; Holdford, David A.
2016-01-01
Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but are silent on the type of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care. PMID:27170823
Decision-Making and Problem-Solving Approaches in Pharmacy Education.
Martin, Lindsay C; Donohoe, Krista L; Holdford, David A
2016-04-25
Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but are silent on the type of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care.
NASA Astrophysics Data System (ADS)
Boski, Marcin; Paszke, Wojciech
2015-11-01
This paper deals with the problem of designing an iterative learning control (ILC) algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to limited-frequency-range design specifications. A new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
An iterative solver for the 3D Helmholtz equation
NASA Astrophysics Data System (ADS)
Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir
2017-09-01
We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.
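As a toy one-dimensional analogue (not the authors' preconditioner), the following shows the general pattern: a Krylov solver for a Helmholtz operator, preconditioned by an FFT-diagonalized, complex-shifted operator.

    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import gmres, LinearOperator

    n = 256
    h, k = 1.0 / n, 40.0                      # grid spacing, wavenumber
    A = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2 + k**2 * identity(n)

    freqs = 2.0 * np.pi * np.fft.fftfreq(n, d=h)
    symbol = -freqs**2 + (1.0 + 0.5j) * k**2  # damped (shifted) Helmholtz symbol

    def apply_preconditioner(r):
        # Invert the shifted operator diagonally in Fourier space.
        return np.real(np.fft.ifft(np.fft.fft(r) / symbol))

    M = LinearOperator((n, n), matvec=apply_preconditioner)
    b = np.zeros(n)
    b[n // 2] = 1.0 / h                       # point source
    x, info = gmres(A, b, M=M)                # info == 0 signals convergence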
Simulation tools for robotics research and assessment
NASA Astrophysics Data System (ADS)
Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.
2016-05-01
The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess the progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for application to real-world problems in robotic systems research. Open-source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and RCTA-developed (RIVET) simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide sufficient simulation fidelity. However, the Perception domain remains the most problematic for adequate simulation performance, due to the often cartoon-like nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.
Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.
Dynamic pathway modeling of signal transduction networks: a domain-oriented approach.
Conzelmann, Holger; Gilles, Ernst-Dieter
2008-01-01
Mathematical models of biological processes are becoming more and more important in biology. The aim is a holistic understanding of how processes such as cellular communication, cell division, regulation, homeostasis, or adaptation work, how they are regulated, and how they react to perturbations. The great complexity of most of these processes necessitates the generation of mathematical models in order to address these questions. In this chapter we provide an introduction to the basic principles of dynamic modeling and highlight both the problems and the opportunities of dynamic modeling in biology. The main focus is on the modeling of signal transduction pathways, which requires the application of a special modeling approach. A common pattern, especially in eukaryotic signaling systems, is the formation of multi-protein signaling complexes. Even for a small number of interacting proteins, the number of distinguishable molecular species can be extremely high. This combinatorial complexity is due to the great number of distinct binding domains of many receptors and scaffold proteins involved in signal transduction. However, these problems can be overcome using a new domain-oriented modeling approach, which makes it possible to handle complex and branched signaling pathways.
Extended Kalman filtering for the detection of damage in linear mechanical structures
NASA Astrophysics Data System (ADS)
Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.
2009-09-01
This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency-domain identification methods (e.g. finite element model updating) have been widely used in this area, while time-domain methods, such as the extended Kalman filter (EKF), are more sparsely represented. The difficulty of applying the EKF to damage identification and localisation in mechanical systems lies in the high computational cost and in the dependence of the estimation results on the initial estimation error covariance matrix P(0), on the initial values of the parameters to be estimated, and on the statistics of the measurement noise R and process noise Q. To resolve these problems, a multiple-model adaptive estimator consisting of a bank of EKFs in the modal domain was designed; each filter in the bank is based on a different P(0). The algorithm was iterated using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated by simulated and real examples.
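For reference, one predict/update cycle of a generic EKF is sketched below with user-supplied models and Jacobians; the paper's contribution lies in running a bank of such filters, each initialized with a different P(0), within a weighted global iteration and with a fuzzy estimator for R, none of which is shown.

    import numpy as np

    def ekf_step(x, P, z, f, F, h, H, Q, R):
        # One predict/update cycle. f and h are the process and measurement
        # models; F and H return their Jacobians evaluated at the given state.
        x_pred = f(x)
        P_pred = F(x) @ P @ F(x).T + Q
        y = z - h(x_pred)                              # innovation
        S = H(x_pred) @ P_pred @ H(x_pred).T + R       # innovation covariance
        K = P_pred @ H(x_pred).T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
        return x_new, P_new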
Studies Presented to Robert B. Lees by His Students. Papers in Linguistics.
ERIC Educational Resources Information Center
Sadock, Jerrold M.; Vanek, Anthony L.
This volume, dedicated to Professor Robert B. Lees on the occasion of his departure from the University of Illinois, contains 15 papers on a variety of linguistic topics: C. L. Baker, "Problems of Polarity in Counterfactuals"; Lawrence F. Bouton, "Do So: Do+Adverb"; Chin-chuan Cheng, "Domains of Phonological Rule Application"; Joseph F. Foster,…
Fuchs, Lynn S.; Fuchs, Douglas; Hamlett, Carol L.; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M.
2009-01-01
The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed. PMID:20057912
NASA Technical Reports Server (NTRS)
Allen, Bradley P.; Holtzman, Peter L.
1987-01-01
An overview is presented of the Automated Software Development Workstation Project, an effort to explore knowledge-based approaches to increasing software productivity. The project focuses on applying the concept of domain-specific automatic programming systems (D-SAPSs) to application domains at NASA's Johnson Space Center. A version of a D-SAPS developed in Phase 1 of the project for the domain of space station momentum management is described. How problems encountered during its implementation led researchers to concentrate on simplifying the process of building and extending such systems is discussed. Researchers propose to do this by attacking three observed bottlenecks in the D-SAPS development process through the increased automation of the acquisition of programming knowledge and the use of an object-oriented development methodology at all stages of program design. How these ideas are being implemented in the Bauhaus, a prototype workstation for D-SAPS development, is discussed.
NASA Technical Reports Server (NTRS)
Allen, Bradley P.; Holtzman, Peter L.
1988-01-01
An overview is presented of the Automated Software Development Workstation Project, an effort to explore knowledge-based approaches to increasing software productivity. The project focuses on applying the concept of domain-specific automatic programming systems (D-SAPSs) to application domains at NASA's Johnson Space Center. A version of a D-SAPS developed in Phase 1 of the project for the domain of space station momentum management is described. How problems encountered during its implementation led researchers to concentrate on simplifying the process of building and extending such systems is discussed. Researchers propose to do this by attacking three observed bottlenecks in the D-SAPS development process through the increased automation of the acquisition of programming knowledge and the use of an object-oriented development methodology at all stages of program design. How these ideas are being implemented in the Bauhaus, a prototype workstation for D-SAPS development, is discussed.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. A support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those estimated directly from the original image.
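An illustrative pipeline for the DWT variant, assuming PyWavelets and scikit-learn are available; the wavelet, decomposition level, and the exact fractal descriptor are assumptions rather than the paper's settings.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def spectrum_slope(img):
        # Slope of log radially-averaged power vs. log frequency,
        # a fractal-type descriptor of the Fourier spectrum.
        F = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = np.array(F.shape) // 2
        y, x = np.indices(F.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        radial = np.bincount(r.ravel(), F.ravel()) / np.maximum(
            np.bincount(r.ravel()), 1)
        m = min(cy, cx)
        k = np.arange(1, m)
        return np.polyfit(np.log(k), np.log(radial[1:m] + 1e-12), 1)[0]

    def features(img, wavelet="db4", level=2):
        # One descriptor per wavelet subband (approximation + details).
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        bands = [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]
        return [spectrum_slope(b) for b in bands]

    # clf = SVC(kernel="rbf").fit([features(im) for im in train_imgs], labels)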
Lenarda, P; Paggi, M
A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature and crack opening dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity freeze tests with a temperature dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through Silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.
Review of battery powered embedded systems design for mission-critical low-power applications
NASA Astrophysics Data System (ADS)
Malewski, Matthew; Cowell, David M. J.; Freear, Steven
2018-06-01
The applications and uses of embedded systems are increasingly pervasive. Mission- and safety-critical systems relying on embedded systems pose specific challenges. Embedded systems design is a multi-disciplinary domain, involving both hardware and software. Systems need to be designed in a holistic manner so that they provide the desired reliability and minimise unnecessary complexity. The large problem landscape means that there is no single solution that fits all applications of embedded systems. With the primary focus of these mission- and safety-critical systems being functionality and reliability, there can be conflicts with business needs, and this can introduce pressure to reduce cost at the expense of reliability and functionality. This paper examines the challenges faced by battery-powered systems, then explores more general problems and several real-world embedded systems.
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2002-01-01
A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
FDTD method for laser absorption in metals for large scale problems.
Deng, Chun; Ki, Hyungson
2013-10-21
The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid points. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
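For orientation, a bare-bones 1D FDTD update loop (Yee scheme, vacuum) is sketched below; the paper's actual contribution, enlarging the wavelength while preserving the metal's reflection characteristics, is not reproduced here, and the grid and source parameters are arbitrary.

    import numpy as np

    c0, mu0, eps0 = 3e8, 4e-7 * np.pi, 8.854e-12
    nz, nt = 400, 1000
    dz = 1e-6
    dt = 0.5 * dz / c0                       # Courant-stable time step
    wavelength = 20e-6                       # an artificially enlarged wavelength
    Ex, Hy = np.zeros(nz), np.zeros(nz)

    for n in range(nt):
        Hy[:-1] += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])   # H update
        Ex[1:] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])   # E update
        Ex[nz // 2] += np.sin(2 * np.pi * (c0 / wavelength) * n * dt)  # soft source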
NASA Astrophysics Data System (ADS)
Sabater, A. B.; Rhoads, J. F.
2017-02-01
The parametric system identification of macroscale resonators operating in a nonlinear response regime can be a challenging research problem, but at the micro- and nanoscales, experimental constraints add additional complexities. For example, due to the small and noisy signals micro/nanoresonators produce, a lock-in amplifier is commonly used to characterize the amplitude and phase responses of the systems. While the lock-in enables detection, it also prohibits the use of established time-domain, multi-harmonic, and frequency-domain methods, which rely upon time-domain measurements. As such, the only methods that can be used for parametric system identification are those based on fitting experimental data to an approximate solution, typically derived via perturbation methods and/or Galerkin methods, of a reduced-order model. Thus, one could view the parametric system identification of micro/nanosystems operating in a nonlinear response regime as the amalgamation of four coupled sub-problems: nonparametric system identification, or proper experimental design and data acquisition; the generation of physically consistent reduced-order models; the calculation of accurate approximate responses; and the application of nonlinear least-squares parameter estimation. This work is focused on the theoretical foundations that underpin each of these sub-problems, as the methods used to address one sub-problem can strongly influence the results of another. To provide context, an electromagnetically transduced microresonator is used as an example. This example provides a concrete reference for the presented findings and conclusions.
ERIC Educational Resources Information Center
Roesch, Frank; Nerb, Josef; Riess, Werner
2015-01-01
Our study investigated whether problem-oriented designed ecology lessons with phases of direct instruction and of open experimentation foster the development of cross-domain and domain-specific components of "experimental problem-solving ability" better than conventional lessons in science. We used a paper-and-pencil test to assess…
A Model-Free No-arbitrage Price Bound for Variance Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu
2013-08-01
We suggest a numerical approximation for an optimization problem, motivated by its applications in finance, namely finding the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. We then propose a gradient projection algorithm, together with a finite difference scheme, to solve the optimization problem. We prove general convergence and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.
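A generic projected-gradient iteration of the kind the paper employs, sketched here for minimization over a box; the objective, step size, and iteration count are placeholders, not the paper's discretized problem.

    import numpy as np

    def projected_gradient(grad, x0, lower, upper, step=1e-2, iters=500):
        # Take a gradient step, then project back onto the box [lower, upper].
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = np.clip(x - step * grad(x), lower, upper)
        return x

    # Example: minimize ||x - c||^2 over the box [-1, 1]^2.
    c = np.array([2.0, -1.0])
    x_star = projected_gradient(lambda x: 2.0 * (x - c), np.zeros(2), -1.0, 1.0)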
Typed Linear Chain Conditional Random Fields and Their Application to Intrusion Detection
NASA Astrophysics Data System (ADS)
Elfers, Carsten; Horstmann, Mirko; Sohr, Karsten; Herzog, Otthein
Intrusion detection in computer networks faces the problem of a large number of both false alarms and unrecognized attacks. To improve the precision of detection, various machine learning techniques have been proposed. However, one critical issue is that the amount of reference data that contains serious intrusions is very sparse. In this paper we present an inference process with linear chain conditional random fields that aims to solve this problem by using domain knowledge about the alerts of different intrusion sensors represented in an ontology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1999-02-10
Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and those results suggest that EPSAs converge more robustly to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.
Biologically inspired intelligent decision making
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2014-01-01
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. This learning approach results in networks which are recognized for their noise tolerance and ability to generalize meaningful responses for novel stimuli. It is these properties of ANNs which make them appealing for applications to bioinformatics problems where interpretation of data may not always be obvious, and where the domain knowledge required for deductive techniques is incomplete or can cause a combinatorial explosion of rules. In this paper, we provide an introduction to artificial neural network theory and review some interesting recent applications to bioinformatics problems. PMID:24335433
A frequency-domain seismic blind deconvolution based on Gini correlations
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing
2018-02-01
In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. The GCs are robust in that they tolerate low-SNR data and depend less on record length. Applications of the seismic blind deconvolution based on the GCs demonstrate its capacity to estimate the unknown seismic wavelet and the reflectivity sequence, for both synthetic traces and field data, even with low SNR and short records.
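The Gini correlations themselves are easy to state in code; the sketch below (an assumption-level illustration, not the authors' implementation) computes the two asymmetric GCs between series x and y via rank transforms. How they enter the frequency-domain deconvolution criterion follows the paper and is not reproduced.

    import numpy as np

    def gini_correlation(x, y):
        # Covariance of x with the rank transform of y, normalized by the
        # covariance of x with its own rank transform.
        rank = lambda v: np.argsort(np.argsort(v)).astype(float)
        return np.cov(x, rank(y))[0, 1] / np.cov(x, rank(x))[0, 1]

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    y = 2.0 * x + rng.normal(size=1000)
    print(gini_correlation(x, y), gini_correlation(y, x))  # generally unequal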
NASA Astrophysics Data System (ADS)
Kosterina, E. A.
2018-01-01
The situation of leakage of a polluting liquid from a longitudinal crack of a pipeline lying on the ground surface is considered. The two-dimensional nonstationary mathematical model is based on the mass balance equation in terms of pressure, which is satisfied in a domain with an unknown moving boundary. This area corresponds to the contaminated zone. A function characterizing the region of action of the equation is introduced, which makes it possible to formulate the problem in a fixed domain. Two types of finite-difference approximation of the problem statement are proposed; they differ in the approximation of the convective term: a counter-current (upwind) approximation and an approximation along characteristics are used. The results of computational experiments, which favor the method of characteristics, are presented. The application of the methods is illustrated by an example of the spread of oil pollution.
A numerical algorithm for MHD of free surface flows at low magnetic Reynolds numbers
NASA Astrophysics Data System (ADS)
Samulyak, Roman; Du, Jian; Glimm, James; Xu, Zhiliang
2007-10-01
We have developed a numerical algorithm and computational software for the study of magnetohydrodynamics (MHD) of free surface flows at low magnetic Reynolds numbers. The governing system of equations is a coupled hyperbolic-elliptic system in moving and geometrically complex domains. The numerical algorithm employs the method of front tracking and the Riemann problem for material interfaces, second order Godunov-type hyperbolic solvers, and the embedded boundary method for the elliptic problem in complex domains. The numerical algorithm has been implemented as an MHD extension of FronTier, a hydrodynamic code with free interface support. The code is applicable for numerical simulations of free surface flows of conductive liquids or weakly ionized plasmas. The code has been validated through the comparison of numerical simulations of a liquid metal jet in a non-uniform magnetic field with experiments and theory. Simulations of the Muon Collider/Neutrino Factory target have also been discussed.
MacDonald, G; Mackenzie, J A; Nolan, M; Insall, R H
2016-03-15
In this paper, we devise a moving mesh finite element method for the approximate solution of coupled bulk-surface reaction-diffusion equations on an evolving two dimensional domain. Fundamental to the success of the method is the robust generation of bulk and surface meshes. For this purpose, we use a novel moving mesh partial differential equation (MMPDE) approach. The developed method is applied to model problems with known analytical solutions; these experiments indicate second-order spatial and temporal accuracy. Coupled bulk-surface problems occur frequently in many areas; in particular, in the modelling of eukaryotic cell migration and chemotaxis. We apply the method to a model of the two-way interaction of a migrating cell in a chemotactic field, where the bulk region corresponds to the extracellular region and the surface to the cell membrane.
Kang, Youn-Ah; Stasko, J
2012-12-01
While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.
APGEN Scheduling: 15 Years of Experience in Planning Automation
NASA Technical Reports Server (NTRS)
Maldague, Pierre F.; Wissler, Steve; Lenda, Matthew; Finnerty, Daniel
2014-01-01
In this paper, we discuss the scheduling capability of APGEN (Activity Plan Generator), a multi-mission planning application that is part of the NASA AMMOS (Advanced Multi- Mission Operations System), and how APGEN scheduling evolved over its applications to specific Space Missions. Our analysis identifies two major reasons for the successful application of APGEN scheduling to real problems: an expressive DSL (Domain-Specific Language) for formulating scheduling algorithms, and a well-defined process for enlisting the help of auxiliary modeling tools in providing high-fidelity, system-level simulations of the combined spacecraft and ground support system.
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2011-06-01
Hypercomplex approaches are seeing increased application to signal and image processing problems. The use of multicomponent hypercomplex numbers, such as quaternions, enables the simultaneous co-processing of multiple signal or image components. This joint processing capability can provide improved exploitation of the information contained in the data, thereby leading to improved performance in detection and recognition problems. In this paper, we apply hypercomplex processing techniques to the logo image recognition problem. Specifically, we develop an image matcher by generalizing classical phase correlation to the biquaternion case. We further incorporate biquaternion Fourier-domain alpha-rooting enhancement to create Alpha-Rooted Biquaternion Phase Correlation (ARBPC). We present the mathematical properties which justify the use of ARBPC as an image matcher. We present numerical performance results on a logo verification problem using real-world logo data, demonstrating the performance improvement obtained with the hypercomplex approach. We compare results of the hypercomplex approach to standard multi-template matching approaches.
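For orientation, the classical scalar phase correlation that the paper generalizes to biquaternions can be sketched as follows; this is the textbook single-channel version, without the alpha-rooting enhancement.

```python
import numpy as np

def phase_correlation(f, g):
    """Classical phase correlation of two equally sized grayscale images.
    Returns the correlation surface; its peak location estimates the
    translation of f relative to g."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross_power = F * np.conj(G)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    return np.real(np.fft.ifft2(cross_power))

f = np.zeros((64, 64)); f[20:30, 20:30] = 1.0
g = np.roll(f, (5, 7), axis=(0, 1))              # shifted copy
surface = phase_correlation(g, f)
print(np.unravel_index(surface.argmax(), surface.shape))   # -> (5, 7)
```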
Artificial Neural Network Based Mission Planning Mechanism for Spacecraft
NASA Astrophysics Data System (ADS)
Li, Zhaoyu; Xu, Rui; Cui, Pingyuan; Zhu, Shengying
2018-04-01
The ability to plan and react quickly in dynamic space environments is central to intelligent spacecraft behavior. Many planners have been used for space and robotics applications, but it is difficult to encode the domain knowledge and to apply existing techniques such as heuristics directly to improve the performance of the application systems. Therefore, regarding planning as an advanced control problem, this paper proposes an autonomous mission planning and action selection mechanism based on a multilayer perceptron neural network that selects actions during the planning process and improves efficiency. To demonstrate its feasibility and effectiveness, we use autonomous mission planning problems for a spacecraft, a sophisticated system with complex subsystems and constraints, as an example. Simulation results show that artificial neural networks (ANNs) are usable for planning problems. Compared with the existing planning method in EUROPA, the ANN-based mechanism is more efficient and delivers stable performance. It is therefore better suited to spacecraft planning problems that require real-time response and stability.
Wieland, Natalie; Baker, B L
2010-07-01
Children with intellectual disability (ID) have been found to be at increased risk for developing behavioural problems. The purpose of this study was to examine the relationship between the marital domain, including marital quality and spousal support, and behaviour problems in children with and without ID. The relationship between the marital domain and child behaviour problems was examined in 132 families of 6-year-olds with and without ID. Using hierarchical regression, these relationships were also studied over time from child ages 6-8 years. Child behaviour problems were assessed with the mother-reported Child Behavior Checklist. The marital domain was measured using the Dyadic Adjustment Scale-7 and the Spousal Support and Agreement Scale. Mother-reported parenting stress and observed parenting practices were tested as potential mediators of the relationship between the marital domain and child behaviour problems. Mean levels of the marital domain were not significantly different between the typically developing (TD) and ID groups, but there were significantly greater levels of variance in reported marital quality in the ID group at ages 6, 7 and 8. The marital domain score at child age 6 years predicted child behaviour problems at age 8 for the TD group only. This predictive relationship appeared to be a unidirectional effect, as child behaviour problems at age 6 were not found to predict levels of the marital domain at age 8. Parenting stress partially mediated this relationship for the TD group. The marital domain may have a greater impact on behavioural outcomes for TD children. Implications for future research and interventions are discussed.
Giant plasma membrane vesicles: models for understanding membrane organization.
Levental, Kandice R; Levental, Ilya
2015-01-01
The organization of eukaryotic membranes into functional domains continues to fascinate and puzzle cell biologists and biophysicists. The lipid raft hypothesis proposes that collective lipid interactions compartmentalize the membrane into coexisting liquid domains that are central to membrane physiology. This hypothesis has proven controversial because such structures cannot be directly visualized in live cells by light microscopy. The recent observations of liquid-liquid phase separation in biological membranes are an important validation of the raft hypothesis and enable application of the experimental toolbox of membrane physics to a biologically complex phase-separated membrane. This review addresses the role of giant plasma membrane vesicles (GPMVs) in refining the raft hypothesis and expands on the application of GPMVs as an experimental model to answer some key outstanding problems in membrane biology.
The role of modern control theory in the design of controls for aircraft turbine engines
NASA Technical Reports Server (NTRS)
Zeller, J.; Lehtinen, B.; Merrill, W.
1982-01-01
The development, applications, and current research in modern control theory (MCT) are reviewed, noting its importance for fuel-efficient operation of turbine engines with variable inlet guide vanes, compressor stators, and exhaust nozzle area. The evolution of multivariable propulsion control design is examined, noting its basis in a matrix formulation of the differential equations defining the process, leading to state-space formulations. Reports and papers from 1970-1982 dealing with problems in MCT applications to turbine engine control design are outlined, including works on linear quadratic regulator methods; frequency-domain methods; identification, estimation, and model reduction; failure detection, isolation, and accommodation; and state-space control, adaptive control, and optimization approaches. Finally, NASA programs in frequency-domain design, sensor failure detection, computer-aided control design, and plant modeling are explored.
A boundary-value problem for a first-order hyperbolic system in a two-dimensional domain
NASA Astrophysics Data System (ADS)
Zhura, N. A.; Soldatov, A. P.
2017-06-01
We consider a strictly hyperbolic first-order system of three equations with constant coefficients in a bounded piecewise-smooth domain. The boundary of the domain is assumed to consist of six smooth non-characteristic arcs. A boundary-value problem in this domain is posed by alternately prescribing one or two linear combinations of the components of the solution on these arcs. We show that this problem has a unique solution under certain additional conditions on the coefficients of these combinations, the boundary of the domain and the behaviour of the solution near the characteristics passing through the corner points of the domain.
NASA Technical Reports Server (NTRS)
Szczur, Martha R.
1989-01-01
The Transportable Applications Environment Plus (TAE Plus), developed by NASA's Goddard Space Flight Center, is a portable User Interface Management System (UIMS) which provides an intuitive WYSIWYG WorkBench for prototyping and designing an application's user interface, integrated with tools for efficiently implementing the designed user interface and for effectively managing the user interface while the application runs. During the development of TAE Plus, many design and implementation decisions were based on the state of the art in graphics workstations, windowing systems, and object-oriented programming languages. Some of the problems and issues experienced during implementation are discussed. A description of the next development steps planned for TAE Plus is also given.
Robust Feedback Control of Flow Induced Structural Radiation of Sound
NASA Technical Reports Server (NTRS)
Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.
1997-01-01
A significant component of the interior noise of aircraft and automobiles results from turbulent boundary layer excitation of the vehicle structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow-induced structural sound radiation problem. The problem is shown to be broadband in nature, with large system uncertainties associated with the various operating conditions. Furthermore, the delay associated with sound propagation is shown to restrict the use of microphone feedback. State-of-the-art control methodologies, μ synthesis and adaptive feedback control, are evaluated and shown to have limited success in solving this problem. A robust frequency-domain controller design methodology is developed for the problem of sound radiated from turbulent-flow-driven plates. The design methodology uses frequency-domain sequential loop-shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum-phase zeros so that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems with high-bandwidth control objectives requiring a low controller DC gain and low controller order.
NASA Astrophysics Data System (ADS)
Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.
2018-03-01
The difficulty of solving min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) is proposed as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems in unbounded domains, which allows us to solve infinite- as well as finite- and free-final-time problems by the domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.
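As a small aid to intuition, shifted Jacobi polynomials on [0, T] can be evaluated from the standard ones on [-1, 1] by an affine change of variable; the sketch below uses SciPy and is only a generic building block, not the authors' full pseudospectral solver.

```python
import numpy as np
from scipy.special import eval_jacobi

def shifted_jacobi(n, alpha, beta, t, T):
    """P_n^(alpha,beta) shifted from [-1, 1] to [0, T] via x = 2t/T - 1."""
    return eval_jacobi(n, alpha, beta, 2.0 * t / T - 1.0)

# Expand a function on [0, T] in the first N shifted Jacobi polynomials by
# least squares on a fine grid (a crude stand-in for spectral projection).
T, N = 2.0, 12
t = np.linspace(0.0, T, 400)
V = np.column_stack([shifted_jacobi(n, 0.0, 0.0, t, T) for n in range(N)])
target = np.exp(-t) * np.sin(3 * t)
coeffs, *_ = np.linalg.lstsq(V, target, rcond=None)
print(np.abs(V @ coeffs - target).max())   # small residual: smooth function
```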
Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain
NASA Astrophysics Data System (ADS)
Yin, Xingyao; Li, Kun; Zong, Zhaoyun
2016-10-01
AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited, in view of its poor stability and sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on a mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in the inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion and to compensate for the low-frequency components of seismic signals. Besides, the different frequency components of seismic signals decouple automatically, which allows the inverse problem to be solved by multi-component successive iterations and improves its convergence precision. Thus, superior resolution compared with conventional time-domain pre-stack inversion is achieved easily. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and confirm its robustness to noise. Finally, application to a field data case demonstrates that the proposed method obtains stable inversion results for elastic parameters from pre-stack seismic data, in conformity with real logging data.
Sinc-Galerkin estimation of diffusivity in parabolic problems
NASA Technical Reports Server (NTRS)
Smith, Ralph C.; Bowers, Kenneth L.
1991-01-01
A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
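The regularization machinery mentioned here is standard; a minimal dense-matrix sketch of Tikhonov regularization with an L-curve scan might look like the following (the operator A and data b are synthetic stand-ins, not the Sinc-Galerkin discretization).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 60)) / 10.0          # ill-conditioned stand-in operator
x_true = np.sin(np.linspace(0, 3, 60))
b = A @ x_true + 1e-3 * rng.standard_normal(80)   # data with white noise

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# L-curve: residual norm vs. solution norm over a range of lambdas;
# the "corner" of the log-log curve suggests a good regularization parameter.
for lam in np.logspace(-6, 0, 7):
    x = tikhonov(A, b, lam)
    print(f"{lam:8.1e}  {np.linalg.norm(A @ x - b):10.3e}  {np.linalg.norm(x):10.3e}")
```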
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, we are investigating the use of evolutionary algorithms to solve a set of Earth-observing-related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
1993-03-01
[Scanned-report fragment; only part of the abstract is recoverable.] A joint time-frequency representation is needed to characterize such signatures, and the pseudo Wigner-Ville distribution, applied to the analytic signal obtained via the Hilbert transform, is ideally suited for portraying non-stationary signals jointly in time and frequency. Subject terms: pseudo Wigner-Ville distribution; analytic signal; Hilbert transform.
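A minimal discrete pseudo Wigner-Ville distribution, computed from the analytic signal via the Hilbert transform, can be sketched as below; windowing and frequency-axis normalization are simplified relative to a production implementation.

```python
import numpy as np
from scipy.signal import hilbert

def pseudo_wigner_ville(x, win_half=32):
    """Discrete pseudo Wigner-Ville distribution of a real signal x.
    Rows are time samples, columns frequency bins; the finite lag window
    (half-length win_half) is what makes it 'pseudo'."""
    z = hilbert(x)                        # analytic signal suppresses cross terms
    n = len(z)
    lags = np.arange(-win_half, win_half)
    w = np.hamming(2 * win_half)          # lag-domain smoothing window
    tfr = np.zeros((n, 2 * win_half))
    for t in range(n):
        ip, im = t + lags, t - lags
        ok = (ip >= 0) & (ip < n) & (im >= 0) & (im < n)
        kernel = np.zeros(2 * win_half, dtype=complex)
        kernel[ok] = z[ip[ok]] * np.conj(z[im[ok]]) * w[ok]
        # put zero lag first so the FFT phases are referenced consistently
        tfr[t] = np.real(np.fft.fft(np.fft.ifftshift(kernel)))
    return tfr

fs = 1000.0
t = np.arange(1024) / fs
x = np.cos(2 * np.pi * (50.0 * t + 100.0 * t ** 2))   # linear FM (chirp)
tfr = pseudo_wigner_ville(x)
print(tfr.shape)   # energy concentrates along the instantaneous frequency
```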
ERIC Educational Resources Information Center
Lee, Chun-Yi
2015-01-01
Problem Statement: In recent years, many educators and researchers in the field of education have made efforts to leverage the advantages provided by online peer assessment, leading to its extensive application in a range of domains, particularly higher education. However, studies on the roles of the reviewer and author in online peer assessment…
ERIC Educational Resources Information Center
Duan, Lian
2012-01-01
Finding the most interesting correlations among items is essential for problems in many commercial, medical, and scientific domains. For example, what kinds of items should be recommended with regard to what has been purchased by a customer? How to arrange the store shelf in order to increase sales? How to partition the whole social network into…
Survival probability for a diffusive process on a growing domain
NASA Astrophysics Data System (ADS)
Simpson, Matthew J.; Sharp, Jesse A.; Baker, Ruth E.
2015-04-01
We consider the motion of a diffusive population on a growing domain, 0 < x < L(t).
Online Feature Transformation Learning for Cross-Domain Object Category Recognition.
Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold
2017-06-09
In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of setting different parameter values in the proposed algorithms and evaluate model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
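The multilevel Monte Carlo estimator named here combines many cheap coarse-level samples with a few expensive fine-level corrections; a generic sketch (with a toy scalar model standing in for the PDE solve) follows. Everything in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, level):
    """Toy stand-in for a PDE solve at a given refinement level:
    the discretization bias shrinks geometrically as the level grows."""
    bias = 0.5 ** (level + 1)
    return np.sin(theta) + bias * np.cos(3 * theta)

def mlmc_estimate(samples_per_level):
    """Telescoping MLMC sum: E[Q_L] ~ E[Q_0] + sum_l E[Q_l - Q_(l-1)],
    using the same random input for both levels of each correction."""
    total = 0.0
    for level, n in enumerate(samples_per_level):
        theta = rng.standard_normal(n)            # random input coefficient
        fine = model(theta, level)
        coarse = model(theta, level - 1) if level > 0 else 0.0
        total += np.mean(fine - coarse)
    return total

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate([20000, 4000, 800]))
```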
NASA Astrophysics Data System (ADS)
Perepelkin, Eugene; Tarelkin, Aleksandr
2018-02-01
A magnetostatics problem arises when searching for the distribution of the magnetic field generated by the magnet systems of many physics research facilities, e.g., accelerators. The domain in which the boundary-value problem is solved often has a piecewise smooth boundary. In this case, numerical calculations of the problem require consideration of the solution behavior in the corner domain. In this work we obtain an upper estimate of the magnetic field growth using an integral formulation of the magnetostatic problem, and based on this estimate we propose a method for condensing the difference mesh near the corner domain of the vacuum in three-dimensional space.
An optimal control method for fluid structure interaction systems via adjoint boundary pressure
NASA Astrophysics Data System (ADS)
Chirco, L.; Da Vià, R.; Manservisi, S.
2017-11-01
In recent years, in spite of their computational complexity, fluid-structure interaction (FSI) problems have been widely studied due to their applicability in science and engineering. Fluid-structure interaction systems consist of one or more solid structures that deform by interacting with a surrounding fluid flow. FSI simulations evaluate the tensional state of the mechanical component and take into account the effects of the solid deformations on the motion of the interior fluids. The inverse FSI problem can be described as achieving a certain objective by changing design parameters such as forces, boundary conditions and geometrical domain shapes. In this paper we study the inverse FSI problem using an optimal control approach. In particular, we propose a pressure boundary optimal control method based on Lagrange multipliers and adjoint variables. The objective is the minimization of a solid-domain displacement matching functional, achieved by finding the optimal pressure on the inlet boundary. The optimality system is derived from the first-order necessary conditions by taking the Fréchet derivatives of the Lagrangian with respect to all the variables involved. The optimal solution is then obtained through a standard steepest-descent algorithm applied to the optimality system. The approach presented in this work is general and could be used to assess other objective functionals and controls. To support the proposed approach, we perform a few numerical tests in which the fluid pressure on the domain inlet controls the displacement that occurs in a well-defined region of the solid domain.
Reflection of a Year Long Model-Driven Business and UI Modeling Development Project
NASA Astrophysics Data System (ADS)
Sukaviriya, Noi; Mani, Senthil; Sinha, Vibha
Model-driven software development enables users to specify an application at a high level, one that better matches the problem domain. It also promises better analysis and automation. Our work brings together two collaborating domains, business processes and human interactions, to build an application. Business modeling expresses business operations and flows and then generates the business-flow implementation. Human-interaction modeling expresses a UI design and its relationship with business data, logic, and flow, and can generate a working UI. This dual modeling approach automates the production of a working system with the UI and business logic connected. This paper discusses the human aspects of the approach after a year spent building a procurement-outsourcing contract application with it; the result was deployed in December 2008. The paper covers, across multiple areas, both the successes and some of the pain points. We end with insights on how a model-driven approach could serve the humans in the process better.
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and the test set, because of differences in lighting, shading, race and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive patterns are transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
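A bare-bones version of the "learn a dictionary on one domain, sparse-code another" idea can be sketched with scikit-learn; the data, sizes, and classifier below are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.standard_normal((300, 64))          # stand-in source-domain features
X_target = rng.standard_normal((100, 64)) + 0.3    # shifted target domain
y_target = rng.integers(0, 2, 100)                 # placeholder labels

# Learn a common dictionary (primitive model) on the source domain...
dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                          random_state=0).fit(X_source)

# ...then transfer it: represent target samples as sparse codes over it.
codes = sparse_encode(X_target, dico.components_, algorithm='lasso_lars',
                      alpha=0.1)

clf = LogisticRegression(max_iter=1000).fit(codes, y_target)
print(clf.score(codes, y_target))
```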
Hill, Douglas L; Miller, Victoria A; Hexem, Kari R; Carroll, Karen W; Faerber, Jennifer A; Kang, Tammy; Feudtner, Chris
2015-10-01
The quality of shared decision making for children with serious illness may depend on whether parents and physicians share similar perceptions of problems and hopes for the child. (i) Describe the problems and hopes reported by mothers, fathers and physicians of children receiving palliative care; (ii) examine the observed concordance between participants; (iii) examine parental perceived agreement; and (iv) examine whether parents who identified specific problems also specified corresponding hopes, or whether the problems were left 'hopeless'. Seventy-one parents and 43 physicians were asked to report problems and hopes and perceived agreement for 50 children receiving palliative care. Problems and hopes were classified into eight domains. Observed concordance was calculated between parents and between each parent and the physicians. The most common problem domains were physical body (88%), quality of life (74%) and medical knowledge (48%). The most common hope domains were quality of life (88%), suffering (76%) and physical body (39%). Overall parental dyads demonstrated a high percentage of concordance (82%) regarding reported problem domains and a lower percentage of concordance on hopes (65%). Concordance between parents and physicians regarding specific children was lower on problem (65-66%) and hope domains (59-63%). Respondents who identified problems regarding a child's quality of life or suffering were likely to also report corresponding hopes in these domains (93 and 82%, respectively). Asking parents and physicians to talk about problems and hopes may provide a straightforward means to improve the quality of shared decision making for critically ill children.
Application of different variants of the BEM in numerical modeling of bioheat transfer problems.
Majchrzak, Ewa
2013-09-01
Heat transfer processes in living organisms are described by different mathematical models. In particular, the typical continuous model of bioheat transfer is based on the popular Pennes equation, but the Cattaneo-Vernotte equation and the dual-phase-lag equation are also used. Vascular models are examined in parallel; for these, energy equations are formulated separately for the large blood vessels and the tissue domain. In the paper, different variants of the boundary element method are discussed as tools for the numerical solution of bioheat transfer problems. For steady-state problems and the vascular models, the classical BEM algorithm and the multiple-reciprocity BEM are presented. For transient problems connected with the heating of tissue, various tissue models are considered, to which the first scheme of the BEM, the BEM with time discretization, and the general BEM are applied. Examples of computations illustrate the possibilities of practical application of the boundary element method to bioheat transfer problems.
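For reference, the Pennes equation mentioned above is commonly written as follows (our transcription of the standard form, with the usual symbol conventions):

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot \bigl(\lambda \,\nabla T\bigr)
  + c_b\, w_b\,\bigl(T_a - T\bigr)
  + Q_m ,
```

where \rho, c and \lambda are the tissue density, specific heat and thermal conductivity, c_b and w_b the specific heat and perfusion rate of blood, T_a the arterial blood temperature, and Q_m the metabolic heat source.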
Development of Condensing Mesh Method for Corner Domain at Numerical Simulation Magnetic System
NASA Astrophysics Data System (ADS)
Perepelkin, E.; Tarelkin, A.; Polyakova, R.; Kovalenko, A.
2018-05-01
A magnetostatic problem arises in searching for the distribution of the magnetic field generated by the magnet systems of many physics research facilities, e.g., accelerators. The domain in which the boundary-value problem is solved often has a piecewise smooth boundary. In this case, numerical calculations of the problem require consideration of the solution behavior in the corner domain. In this work we obtain an upper estimate of the magnetic field growth and, based on this estimate, propose a method of condensing the difference grid near a corner domain of vacuum in three-dimensional space. An example of calculating a real model problem for SDP NICA in a domain containing a corner point is given.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-06-01
In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation, to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules, including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process for selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested on an analytical function and on one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example demonstrates its applicability to three-dimensional black-oil models. These examples show that the proposed method provides satisfactory estimates of the solution statistics and is much more efficient than Monte Carlo simulation.
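As a point of comparison for the nested rules discussed above, plain (unnested) Gauss-Hermite quadrature for the mean of a function of a standard normal input looks like this; the integrand is an arbitrary stand-in.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_mean(g, n_nodes):
    """E[g(Y)] for Y ~ N(0, 1) using physicists' Gauss-Hermite quadrature:
    E[g] = (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * x_i)."""
    x, w = hermgauss(n_nodes)
    return np.sum(w * g(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

g = lambda y: np.exp(0.3 * y)           # lognormal-type response
for n in (3, 5, 9):
    print(n, gauss_hermite_mean(g, n))  # converges to exp(0.3**2 / 2) ~ 1.0460
```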
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
Effective matrix-free preconditioning for the augmented immersed interface method
NASA Astrophysics Data System (ADS)
Xia, Jianlin; Li, Zhilin; Ye, Xin
2015-12-01
We present effective and efficient matrix-free preconditioning techniques for the augmented immersed interface method (AIIM). AIIM has been developed recently and is shown to be very effective for interface problems and problems on irregular domains. GMRES is often used to solve for the augmented variable(s) associated with a Schur complement A in AIIM that is defined along the interface or the irregular boundary. The efficiency of AIIM relies on how quickly the system for A can be solved. For some applications, there are substantial difficulties involved, such as the slow convergence of GMRES (particularly for free boundary and moving interface problems), and the inconvenience of finding a preconditioner (due to the fact that only the products of A and vectors are available). Here, we propose matrix-free structured preconditioning techniques for AIIM via adaptive randomized sampling, using only the products of A and vectors to construct a hierarchically semiseparable matrix approximation to A. Several improvements over existing schemes are shown so as to enhance the efficiency and also avoid potential instability. The significance of the preconditioners includes: (1) they do not require the entries of A or the multiplication of A^T with vectors; (2) constructing the preconditioners needs only O(log N) matrix-vector products and O(N) storage, where N is the size of A; (3) applying the preconditioners needs only O(N) flops; (4) they are very flexible and do not require any a priori knowledge of the structure of A. The preconditioners are observed to significantly accelerate the convergence of GMRES, with heuristic justifications of their effectiveness. Comprehensive tests on several important applications are provided, such as Navier-Stokes equations on irregular domains with traction boundary conditions, interface problems in incompressible flows, mixed boundary problems, and free boundary problems. The preconditioning techniques are also useful for several other problems and methods.
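The matrix-free setting described here, where only products of A with vectors are available, maps naturally onto SciPy's LinearOperator interface; the sketch below wires a (trivial, diagonal) preconditioner into GMRES purely through matvec callbacks. The operator is a stand-in, not an AIIM Schur complement.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 500
diag = np.linspace(1.0, 100.0, n)

def matvec(v):
    """Black-box product A @ v; in AIIM this would invoke the fast solver."""
    return diag * v + 0.01 * np.roll(v, 1)

A = LinearOperator((n, n), matvec=matvec)
# Matrix-free preconditioner: here, an approximate inverse of the diagonal part.
M = LinearOperator((n, n), matvec=lambda v: v / diag)

b = np.ones(n)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(matvec(x) - b))   # info == 0 on convergence
```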
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of their complexity they had never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for the discretization of geometry and functional variations in space, and higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only can problems of layered media and soil-structure interaction be analyzed, but large problems can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
Co-evolution for Problem Simplification
NASA Technical Reports Server (NTRS)
Haith, Gary L.; Lohn, Jason D.; Colombano, Silvano P.; Stassinopoulos, Dimitris
1999-01-01
This paper explores a co-evolutionary approach applicable to difficult problems with limited failure/success performance feedback. Like familiar "predator-prey" frameworks, this algorithm evolves two populations of individuals: the solutions (predators) and the problems (prey). The approach extends previous work by rewarding only the problems that match their difficulty to the level of solution competence. In complex problem domains with limited feedback, this "tractability constraint" helps provide an adaptive fitness gradient that effectively differentiates the candidate solutions. The algorithm generates selective pressure toward the evolution of increasingly competent solutions by rewarding solution generality and uniqueness and problem tractability and difficulty. Relative (inverse-fitness) and absolute (static objective function) approaches to evaluating problem difficulty are explored and discussed. On a simple control task, this co-evolutionary algorithm was found to have significant advantages over a genetic algorithm with either a static fitness function or a fitness function that changes on a hand-tuned schedule.
A wavelet domain adaptive image watermarking method based on chaotic encryption
NASA Astrophysics Data System (ADS)
Wei, Fang; Liu, Jian; Cao, Hanqiang; Yang, Jun
2009-10-01
Digital watermarking, a specific branch of steganography used in various applications, provides a novel way to solve security problems for multimedia information. In this paper, we propose a wavelet-domain adaptive image watermarking method that uses chaotic stream encryption and properties of the human visual system. The secret information, which can be seen as a watermark, is hidden in a host image that can be publicly accessed, so the transportation of the secret information does not attract the attention of illegal receivers. The experimental results show that the method is imperceptible and robust against some image processing operations.
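The chaotic stream-cipher component can be illustrated with the classic logistic map; this is a generic toy construction (not the paper's exact scheme, and not cryptographically strong) showing how a key (x0, r) expands into a keystream that XOR-masks watermark bits.

```python
import numpy as np

def logistic_keystream(n_bits, x0=0.4123, r=3.99):
    """Generate n_bits pseudo-random bits from the logistic map
    x_{k+1} = r * x_k * (1 - x_k); the pair (x0, r) acts as the secret key."""
    x, bits = x0, np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

watermark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
ks = logistic_keystream(len(watermark))
encrypted = watermark ^ ks   # this masked stream would be embedded, e.g.,
decrypted = encrypted ^ ks   # into wavelet coefficients; same key recovers it
assert np.array_equal(decrypted, watermark)
```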
NASA Astrophysics Data System (ADS)
Doerr, Martin; Freitas, Fred; Guizzardi, Giancarlo; Han, Hyoil
Ontology is a cross-disciplinary field concerned with the study of concepts and theories that can be used for representing shared conceptualizations of specific domains. Ontological Engineering is a discipline in computer and information science concerned with the development of techniques, methods, languages and tools for the systematic construction of concrete artifacts capturing these representations, i.e., models (e.g., domain ontologies) and metamodels (e.g., upper-level ontologies). In recent years, there has been a growing interest in the application of formal ontology and ontological engineering to solve modeling problems in diverse areas in computer science such as software and data engineering, knowledge representation, natural language processing, information science, among many others.
Gutierrez, Juan B; Lai, Ming-Jun; Slavov, George
2015-12-01
We study a time-dependent partial differential equation (PDE) which arises from classic models in ecology involving logistic growth with Allee effect, by introducing a discrete weak solution. Existence, uniqueness and stability of the discrete weak solutions are discussed. We use bivariate splines to approximate the discrete weak solution of the nonlinear PDE. A computational algorithm is designed to solve this PDE, and a convergence analysis of the algorithm is presented. We present some simulations of population development over irregular domains. Finally, we discuss applications in epidemiology and other ecological problems.
Rapid prototyping and AI programming environments applied to payload modeling
NASA Technical Reports Server (NTRS)
Carnahan, Richard S., Jr.; Mendler, Andrew P.
1987-01-01
This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both manned and unmanned space flight payload simulation and training. The significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model in accepting future modification. Results of this effort suggest that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.
Bae, Seongryu; Lee, Sangyoon; Lee, Sungchul; Jung, Songee; Makino, Keitaro; Park, Hyuntae; Shimada, Hiroyuki
2018-06-01
We examined the role of social frailty in the association between hearing problems and mild cognitive impairment (MCI), and investigated which cognitive impairment domains are most strongly involved. Participants were 4251 older adults (mean age 72.5 ± 5.2 years, 46.1% male) who met the study inclusion criteria. Hearing problems were measured using the Hearing Handicap Inventory for the Elderly. Social frailty was identified using responses to five questions. Participants were divided into four groups depending on the presence of social frailty and hearing problems: control, social frailty, hearing problem, and co-occurrence. We assessed memory, attention, executive function, and processing speed using the National Center for Geriatrics and Gerontology-Functional Assessment Tool. Participants were categorized into normal cognition, single- and multiple-domain MCI, depending on the number of impaired cognitive domains. Participants with multiple-domain MCI exhibited the highest odds ratios (OR) of the co-occurrence group (OR: 3.89, 95% confidence intervals [CI]: 1.96-7.72), followed by the social frailty (OR: 2.65, 95% CI: 1.49-4.67), and hearing problem (OR: 1.90, 95% CI: 1.08-3.34) groups, compared with the control group. However, single-domain MCI was not significantly associated with any group. Cognitive domain analysis revealed that impaired executive function and processing speed were associated with the co-occurrence, hearing problem, and social frailty groups, respectively. Social frailty and hearing problems were independently associated with multiple-domain MCI. Comorbid conditions were more strongly associated with multiple-domain MCI. Longitudinal studies are needed to elucidate the causal role of social frailty in the association between hearing impairment and MCI.
Maldonado, José Alberto; Marcos, Mar; Fernández-Breis, Jesualdo Tomás; Parcero, Estíbaliz; Boscá, Diego; Legaz-García, María Del Carmen; Martínez-Salvador, Begoña; Robles, Montserrat
2016-01-01
The heterogeneity of clinical data is a key problem in the sharing and reuse of Electronic Health Record (EHR) data. We approach this problem through the combined use of EHR standards and semantic web technologies, concretely by means of clinical data transformation applications that convert EHR data in proprietary format, first into clinical information models based on archetypes, and then into RDF/OWL extracts which can be used for automated reasoning. In this paper we describe a proof-of-concept platform to facilitate the (re)configuration of such clinical data transformation applications. The platform is built upon a number of web services dealing with transformations at different levels (such as normalization or abstraction), and relies on a collection of reusable mappings designed to solve specific transformation steps in a particular clinical domain. The platform has been used in the development of two different data transformation applications in the area of colorectal cancer.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) method to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage requirements and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
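In the same family as GPSR, a minimal proximal-gradient (ISTA-style) solver for the sparse objective min ||y - Ax||^2 + tau*||x||_1 is sketched below; GPSR itself uses gradient projection on a split of x into positive and negative parts, which we do not reproduce here.

```python
import numpy as np

def ista(A, y, tau, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||y - A x||^2 + tau*||x||_1.
    Uses only products with A and A.T, as in matrix-free large-scale settings."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]   # sparse image
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, y, tau=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.2))   # recovers the sparse support
```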
VOP memory management in MPEG-4
NASA Astrophysics Data System (ADS)
Vaithianathan, Karthikeyan; Panchanathan, Sethuraman
2001-03-01
MPEG-4 is a multimedia standard built on Video Object Planes (VOPs). Generating VOPs for arbitrary video sequences is still a challenging problem that largely remains unsolved. Nevertheless, if the problem is treated by imposing certain constraints, solutions for specific application domains can be found. MPEG-4 applications in mobile devices are one such domain, where the opposing goals of low power and high throughput must both be met. Efficient memory management plays a major role in reducing power consumption. Memory management for VOPs is particularly difficult because the lifetimes of these objects vary and may overlap. Varying object lifetimes require dynamic memory management, where memory fragmentation is a key problem that needs to be addressed. In general, memory management systems address this problem with a combination of strategy, policy and mechanism. For MPEG-4-based mobile devices that lack instruction processors, a hardware-based memory management solution is necessary. For MPEG-4-based mobile devices that have a RISC processor, using a real-time operating system (RTOS) for this memory management task is not expected to be efficient, because the strategies and policies used by the RTOS are often tuned for handling memory segments of smaller sizes than typical object sizes. Hence, a memory management scheme specifically tuned for VOPs is important. In this paper, different strategies, policies and mechanisms for memory management are considered, and an efficient combination is proposed for VOP memory management, along with a hardware architecture which can handle the proposed combination.
Empirical results on scheduling and dynamic backtracking
NASA Technical Reports Server (NTRS)
Boddy, Mark S.; Goldman, Robert P.
1994-01-01
At the Honeywell Technology Center (HTC), we have been working on a scheduling problem related to commercial avionics. This application is large, complex, and hard to solve. To be a little more concrete: 'large' means almost 20,000 activities, 'complex' means several activity types, periodic behavior, and assorted types of temporal constraints, and 'hard to solve' means that we have been unable to eliminate backtracking through the use of search heuristics. At this point, we can generate solutions, where solutions exist, or report failure and sometimes why the system failed. To the best of our knowledge, this is among the largest and most complex scheduling problems to have been solved as a constraint satisfaction problem, at least that has appeared in the published literature. This abstract is a preliminary report on what we have done and how. In the next section, we present our approach to treating scheduling as a constraint satisfaction problem. The following sections present the application in more detail and describe how we solve scheduling problems in the application domain. The implemented system makes use of Ginsberg's Dynamic Backtracking algorithm, with some minor extensions to improve its utility for scheduling. We describe those extensions and the performance of the resulting system. The paper concludes with some general remarks, open questions and plans for future work.
Wave Propagation, Scattering and Imaging Using Dual-domain One-way and One-return Propagators
NASA Astrophysics Data System (ADS)
Wu, R.-S.
Dual-domain one-way propagators implement wave propagation in heterogeneous media in mixed domains (space-wavenumber domains). One-way propagators neglect wave reverberations between heterogeneities but correctly handle the forward multiple-scattering, including focusing/defocusing, diffraction, refraction and interference of waves. The algorithm shuttles between the space domain and the wavenumber domain using FFTs, and the operations in the two domains are self-adaptive to the complexity of the media. The method makes the best use of the operations in each domain, resulting in efficient and accurate propagators. Due to recent progress, new versions of dual-domain methods overcome some limitations of the classical dual-domain methods (phase-screen or split-step Fourier methods) and can propagate large-angle waves quite accurately in media with strong velocity contrasts. These methods can deliver superior image quality (high resolution/high fidelity) for complex subsurface structures. One-way and one-return (De Wolf approximation) propagators can also be applied to wave-field modeling and simulations for some geophysical problems. In this article, a historical review and theoretical analysis of the Born, Rytov, and De Wolf approximations are given. A review of classical phase-screen or split-step Fourier methods is also given, followed by a summary and analysis of the new dual-domain propagators. The applications of the new propagators to seismic imaging and modeling are reviewed with several examples. For seismic imaging, the advantages and limitations of traditional Kirchhoff migration and time-space domain finite-difference migration, when applied to 3-D complicated structures, are first analyzed. Then the special features and applications of the new dual-domain methods are presented. Three versions of GSP (generalized screen propagators) are discussed: the hybrid pseudo-screen, the wide-angle Padé-screen, and the higher-order generalized screen propagators. Recent progress also makes it possible to use the dual-domain propagators for modeling elastic reflections for complex structures and long-range propagation of crustal guided waves. Examples of 2-D and 3-D imaging and modeling using GSP methods are given.
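The split-step Fourier idea at the core of these dual-domain propagators alternates a wavenumber-domain free-space step with a space-domain phase screen; a minimal scalar sketch (one lateral dimension plus depth, monochromatic field) follows, with illustrative parameters.

```python
import numpy as np

# Split-step Fourier one-way propagation of a monochromatic scalar field.
nx, dx, dz, nz = 256, 5.0, 5.0, 100          # grid (meters), illustrative
f, c0 = 30.0, 2000.0                          # frequency (Hz), reference speed
k0 = 2 * np.pi * f / c0
x = (np.arange(nx) - nx // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
kz = np.sqrt(k0**2 - kx**2 + 0j)              # vertical wavenumber;
                                              # evanescent modes decay

c = np.full(nx, c0); c[nx // 2:] = 2200.0     # lateral velocity contrast
screen = np.exp(1j * (2 * np.pi * f / c - k0) * dz)   # space-domain phase screen

u = np.exp(-x**2 / (2 * (20 * dx) ** 2)).astype(complex)   # initial beam
for _ in range(nz):
    u = np.fft.ifft(np.exp(1j * kz * dz) * np.fft.fft(u))  # wavenumber step
    u = screen * u                                          # space-domain step
print(np.abs(u).max())
```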
Learning Kriging by an instructive program.
NASA Astrophysics Data System (ADS)
Cuador, José
2016-04-01
There are three types of problem classification: deterministic, approximated and stochastic problems. In deterministic problems, the law of the phenomenon and the data are known over the entire domain and for each instant of time. In approximated problems, the law of the phenomenon's behavior is unknown, but the data can be known over the entire domain and for each instant of time. In stochastic problems, much of the law and the data are unknown in the domain, so the spatial behavior of the data can only be explained with probabilistic laws. This is the most important reason why students of geosciences and related careers need to take courses on advanced estimation methods. A good example of this situation is the estimation of grades in ore mineral deposits, for which Geostatistics was formalized by G. Matheron in 1962 [6]. Geostatistics is defined as the application of the theory of random functions to the recognition and estimation of natural phenomena [4]. Nowadays, Geostatistics is widely used in several fields of earth sciences, for example mining, oil exploration, environment, agriculture and forestry [3]. It provides a wide variety of tools for spatial data analysis and allows analysing models which are subject to degrees of uncertainty with the rigor of mathematics and formal statistical analysis [9]. Adequate models for the Kriging interpolator have been developed according to the data behavior; however, there are two key steps in applying this interpolator properly: the semivariogram determination and the Kriging neighborhood selection. The main objective of this paper is to present these two elements using an instructive program.
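Of the two key steps named above, the semivariogram is the more mechanical; a bare-bones empirical semivariogram estimator for scattered 2D data is sketched below (isotropic, with simple distance binning; the data are synthetic).

```python
import numpy as np

def empirical_semivariogram(coords, values, n_bins=12, max_dist=None):
    """Isotropic empirical semivariogram: for each distance bin h,
    gamma(h) = 0.5 * mean over pairs at distance ~h of (z_i - z_j)^2."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # count each pair once
    d, sq = d[iu], sq[iu]
    if max_dist is None:
        max_dist = d.max() / 2                    # usual rule of thumb
    edges = np.linspace(0, max_dist, n_bins + 1)
    gamma = [sq[(d >= lo) & (d < hi)].mean() for lo, hi in zip(edges, edges[1:])]
    return 0.5 * (edges[:-1] + edges[1:]), np.array(gamma)

rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, (300, 2))
values = np.sin(coords[:, 0] / 15.0) + 0.1 * rng.standard_normal(300)
h, gamma = empirical_semivariogram(coords, values)
print(np.round(gamma, 3))   # should rise with h and level off near the sill
```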
SIENA Customer Problem Statement and Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. Sauer; R. Clay; C. Adams
2000-08-01
This document describes the problem domain and functional requirements of the SIENA framework. The software requirements and system architecture of SIENA are specified in separate documents (called SIENA Software Requirement Specification and SIENA Software Architecture, respectively). While the current version of the document describes the problems and captures the requirements within the Analysis domain (concentrating on finite element models), it is our intention to subsequently expand this document to describe problems and capture requirements from the Design and Manufacturing domains. In addition, SIENA is designed to be extendible to support and integrate elements from the other domains (see the SIENA Software Architecture document).
Fast immersed interface Poisson solver for 3D unbounded problems around arbitrary geometries
NASA Astrophysics Data System (ADS)
Gillis, T.; Winckelmans, G.; Chatelain, P.
2018-02-01
We present a fast and efficient Fourier-based solver for the Poisson problem around an arbitrary geometry in an unbounded 3D domain. This solver merges two complementary approaches, the lattice Green's function method and the immersed interface method, using the Sherman-Morrison-Woodbury decomposition formula. The method is designed to be second-order accurate up to the boundary. This is verified on two potential flow benchmarks. We further analyse the iterative process and the convergence behavior of the proposed algorithm. The method is applicable to a wide range of problems involving a Poisson equation around inner bodies, which goes well beyond the present validation on potential flows.
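The role played by the Sherman-Morrison-Woodbury formula here, combining a fast solver for the unperturbed operator with a low-rank correction, can be sketched generically. In the Python fragment below, `solve_A` stands for a fast unbounded-domain Poisson solver and `U`, `C`, `V` for a low-rank perturbation; this is the textbook Woodbury identity under those assumptions, not the authors' solver.

```python
import numpy as np

def smw_solve(solve_A, U, C, V, b):
    """Solve (A + U C V) x = b using only applications of A^{-1}.

    solve_A : callable returning A^{-1} y (e.g. an FFT-based lattice
              Green's function solver); U (n,k), C (k,k), V (k,n) low rank.
    """
    Ainv_b = solve_A(b)
    Ainv_U = np.column_stack([solve_A(U[:, j]) for j in range(U.shape[1])])
    S = np.linalg.inv(C) + V @ Ainv_U      # small k-by-k capacitance matrix
    y = np.linalg.solve(S, V @ Ainv_b)
    return Ainv_b - Ainv_U @ y             # Woodbury-corrected solution
```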
Domain wall and isocurvature perturbation problems in a supersymmetric axion model
NASA Astrophysics Data System (ADS)
Kawasaki, Masahiro; Sonomoto, Eisuke
2018-04-01
The axion causes two serious cosmological problems: the domain wall and isocurvature perturbation problems. Linde pointed out that the isocurvature perturbations are suppressed when the Peccei-Quinn (PQ) scalar field takes a large value ~M_pl (Planck scale) during inflation. In this case, however, the PQ field with large amplitude starts to oscillate after inflation, and large fluctuations of the PQ field are produced through parametric resonance, which leads to the formation of domain walls. We consider a supersymmetric axion model and examine whether domain walls are formed by using lattice simulation. It is found that the domain wall problem does not appear in the SUSY axion model when the initial value of the PQ field is less than 10³×v, where v is the PQ symmetry breaking scale.
NASA Technical Reports Server (NTRS)
Navon, I. M.
1984-01-01
A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited-area domain. This application of nonlinear constrained optimization is of the large-dimensional type, and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
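The overall pattern, an outer multiplier update of the Bertsekas type wrapped around an inner conjugate-gradient minimization, can be sketched as follows. This is a generic augmented-Lagrangian skeleton in Python (using SciPy's CG minimizer), assuming the integral invariants are written as constraint functions cons_i(x) = 0; it is not the original code, and the penalty parameter `mu` is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, cons, x0, mu=10.0, outer_iters=20):
    """Minimize f subject to cons_i(x) = 0 via multiplier updates,
    with an unconstrained conjugate-gradient solve in the inner loop."""
    lam = np.zeros(len(cons))
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        def L(z):
            c = np.array([ci(z) for ci in cons])
            return f(z) + lam @ c + 0.5 * mu * (c @ c)
        x = minimize(L, x, method="CG").x                 # inner CG solve
        lam += mu * np.array([ci(x) for ci in cons])      # multiplier update
    return x
```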
Asymptotic analysis of dissipative waves with applications to their numerical simulation
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
Various problems involving the interplay of asymptotics and numerics in the analysis of wave propagation in dissipative systems are studied. A general approach to the asymptotic analysis of linear, dissipative waves is developed. It is applied to the derivation of asymptotic boundary conditions for numerical solutions on unbounded domains. Applications include the Navier-Stokes equations. Multidimensional traveling wave solutions to reaction-diffusion equations are also considered. A preliminary numerical investigation of a thermo-diffusive model of flame propagation in a channel with heat loss at the walls is presented.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures during the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to solving other combinatorial optimization problems.
An iterated local search algorithm for the team orienteering problem with variable profits
NASA Astrophysics Data System (ADS)
Gunawan, Aldy; Ng, Kien Ming; Kendall, Graham; Lai, Junhan
2018-07-01
The orienteering problem (OP) is a routing problem that has numerous applications in various domains such as logistics and tourism. The objective is to determine a subset of vertices for a vehicle to visit so that the total collected score is maximized and a given time budget is not exceeded. The extensive application of the OP has led to many different variants, including the team orienteering problem (TOP) and the team orienteering problem with time windows. The TOP extends the OP by considering multiple vehicles. In this article, the team orienteering problem with variable profits (TOPVP) is studied. The main characteristic of the TOPVP is that the amount of score collected from a visited vertex depends on the duration of stay at that vertex. A mathematical programming model for the TOPVP is first presented, and an algorithm based on iterated local search (ILS) that is able to solve modified benchmark instances is then proposed. It is concluded that ILS produces solutions which are comparable to those obtained by the commercial solver CPLEX for smaller instances. For the larger instances, ILS obtains good-quality solutions that have significantly better objective values than those found by CPLEX under reasonable computational times.
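The core loop of iterated local search is compact enough to sketch. The Python skeleton below shows the generic pattern (descend to a local optimum, perturb, re-optimize, keep improvements); `local_search`, `perturb`, and `score` are placeholders, not the paper's specific TOPVP operators.

```python
def iterated_local_search(initial, local_search, perturb, score, iters=1000):
    """Generic ILS: repeatedly perturb the incumbent and re-optimize,
    accepting the candidate whenever it improves the score."""
    best = local_search(initial)
    for _ in range(iters):
        candidate = local_search(perturb(best))
        if score(candidate) > score(best):   # TOPVP maximizes collected score
            best = candidate
    return best
```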
Post-treatment problems of African American breast cancer survivors.
Barsevick, Andrea M; Leader, Amy; Bradley, Patricia K; Avery, Tiffany; Dean, Lorraine T; DiCarlo, Melissa; Hegarty, Sarah E
2016-12-01
African American breast cancer survivors (AABCS) have a lower survival rate across all disease stages (79%) compared with White survivors (92%) and often have more aggressive forms of breast cancer requiring multimodality treatment, so they could experience a larger burden of post-treatment quality of life (QOL) problems. This paper reports a comprehensive assessment of the number, severity, and domains of problems faced by AABCS within 5 years after treatment completion and identifies subgroups at risk for these problems. A population-based random sample was obtained from the Pennsylvania Cancer Registry of African American females over 18 years of age who had completed primary treatment for breast cancer in the past 5 years. A mailed survey was used to document survivorship problems. Two hundred ninety-seven AABCS completed the survey. The median number of survivor problems reported was 15. Exploratory factor analysis of the problem scale revealed four domains: emotional problems, physical problems, lack of resources, and sexuality problems. Across problem domains, younger age, more comorbid conditions, and greater medical mistrust were risk factors for more severe problems. The results demonstrated that AABCS experienced a significant problem burden in the early years after diagnosis and treatment. In addition to the emotional and physical problem domains that were documented in previous research, two problem domains unique to AABCS were lack of resources and sexuality concerns. At-risk groups should be targeted for intervention. The study results reported in this manuscript will inform future research to address problems of AABCS as they make the transition from cancer patient to cancer survivor.
NASA Technical Reports Server (NTRS)
Masud, Abu S. M.
1991-01-01
Fellowship activities were directed towards the identification of opportunities for application of Multiple Criteria Decision Making (MCDM) techniques in the Space Exploration Initiative (SEI) domain. I identified several application possibilities and proposed demonstration applications in three areas: evaluation and ranking of SEI architectures, space mission planning and selection, and space system design. Here, only the first problem is discussed. The most meaningful result of the analysis is the wide separation between the two top-ranked architectures, indicating a significant preference difference between them. It must also be noted that the final ranking reflects, to some extent, the biases of the evaluators and their understanding of the architectures.
Note on the eigensolution of a homogeneous equation with semi-infinite domain
NASA Technical Reports Server (NTRS)
Wadia, A. R.
1980-01-01
The 'variation-iteration' method, using Green's functions to find the eigenvalues and the corresponding eigenfunctions of a homogeneous Fredholm integral equation, is employed for the stability analysis of fluid hydromechanics problems with a semi-infinite (infinite) domain of application. The objective of the study is to develop a suitable numerical approach to the solution of such equations in order to better understand the full set of equations for 'real-world' flow models. The study involves a search for a suitable value of the length of the domain which is a fair finite approximation to infinity, which makes the eigensolution an approximation dependent on the length of the interval chosen. In the examples investigated, y = 1 = a seems to be the best approximation of infinity; for y greater than unity this method fails due to the polynomial nature of the Green's functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ming, Yang; Wu, Zi-jian; Xu, Fei, E-mail: feixu@nju.edu.cn
The nonmaximally entangled state is a special kind of entangled state, which has important applications in quantum information processing. It has been generated in quantum circuits based on bulk optical elements. However, corresponding schemes in integrated quantum circuits have rarely been considered. In this Letter, we propose an effective solution for this problem. An electro-optically tunable nonmaximally mode-entangled photon state is generated in an on-chip domain-engineered lithium niobate (LN) waveguide. Spontaneous parametric down-conversion and electro-optic interaction are effectively combined through suitable domain design to transform the entangled state into our desired formation. Moreover, this is a flexible approach to entanglement architectures. Other kinds of reconfigurable entanglements are also achievable through this method. LN provides a very promising platform for future quantum circuit integration.
Jaulent, Marie-Christine; Leprovost, Damien; Charlet, Jean; Choquet, Remy
2018-07-01
This article is a position paper dealing with semantic interoperability challenges. It addresses the Variety and Veracity dimensions when integrating, sharing, and reusing large amounts of heterogeneous data for data analysis and decision-making applications in the healthcare domain. Many issues are raised by the necessity of conforming Big Data to interoperability standards. We discuss how semantics can contribute to the improvement of information sharing and address the problem of data mediation with domain ontologies. We then introduce the main steps for building domain ontologies as they could be implemented in the context of Forensic and Legal medicine. We conclude with a particular emphasis on the current limitations in standardisation and the importance of knowledge formalization.
Chillara, Vamshi Krishna; Ren, Baiyang; Lissenden, Cliff J
2016-04-01
This article describes the use of the frequency domain finite element (FDFE) technique for guided wave mode selection in inhomogeneous waveguides. Problems with Rayleigh-Lamb and Shear-Horizontal mode excitation in isotropic homogeneous plates are first studied to demonstrate the application of the approach. Then, two specific cases of inhomogeneous waveguides are studied using FDFE. Finally, an example of guided wave mode selection for inspecting disbonds in composites is presented, and the identification of modes sensitive and insensitive to the defect is demonstrated. As the discretization parameters affect the accuracy of the results obtained from FDFE, the effects of the spatial discretization and of the length of the domain used for the spatial fast Fourier transform are studied. Some recommendations regarding the choice of the above parameters are provided.
Desbiens, Raphaël; Tremblay, Pierre; Genest, Jérôme; Bouchard, Jean-Pierre
2006-01-20
The instrument line shape (ILS) of a Fourier-transform spectrometer is expressed in a matrix form. For all line shape effects that scale with wavenumber, the ILS matrix is shown to be transposed in the spectral and interferogram domains. The novel representation of the ILS matrix in the interferogram domain yields an insightful physical interpretation of the underlying process producing self-apodization. Working in the interferogram domain circumvents the problem of taking into account the effects of finite optical path difference and permits a proper discretization of the equations. A fast algorithm in O(N log₂ N), based on the fractional Fourier transform, is introduced that permits the application of a constant resolving power line shape to theoretical spectra or forward models. The ILS integration formalism is validated with experimental data.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model grows with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and remains constant in regions where the flow is uniform.
Choi, Okkyung; Jung, Hanyoung; Moon, Seungbin
2014-01-01
With smartphone distribution becoming common and robotic applications on the rise, social tagging services for various applications, including robotic domains, have advanced significantly. Though social tagging plays an important role in helping users find the exact information through web search, the reliability of tags and the semantic relation between web contents and tags are not considered. Spammers exploit this aspect by deliberately putting irrelevant tags on contents, inducing users to view advertising contents when they click items in search results. Therefore, this study proposes a detection method for tag-ranking manipulation to solve the problem of existing methods, which cannot guarantee the reliability of tagging. Similarity is measured for ranking the grade of registered tags on the contents, and weighted values of each tag are measured by means of synonym relevance, frequency, and semantic distances between tags. Lastly, experimental evaluation results are provided, and the method's efficiency and accuracy are verified through them.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
Reusable Component Model Development Approach for Parallel and Distributed Simulation
Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng
2014-01-01
Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diverse interfaces, are tightly coupled, and are closely bound to particular simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address this problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751
Penetration of fast projectiles into resistant media: From macroscopic to subatomic projectiles
NASA Astrophysics Data System (ADS)
Gaite, José
2017-09-01
The penetration of a fast projectile into a resistant medium is a complex process that is suitable for simple modeling, in which basic physical principles can be profitably employed. This study connects two different domains: the fast motion of macroscopic bodies in resistant media and the interaction of charged subatomic particles with matter at high energies, which furnish the two limit cases of the problem of penetrating projectiles of different sizes. These limit cases actually have overlapping applications; for example, in space physics and technology. The intermediate or mesoscopic domain finds application in atom cluster implantation technology. Here it is shown that the penetration of fast nano-projectiles is ruled by a slightly modified Newton's inertial quadratic force, namely, F ∼ v^(2−β), where β vanishes as the inverse of the projectile diameter. Factors essential to penetration depth are the ratio of projectile to medium density and the projectile shape.
A knowledge-based approach to automated flow-field zoning for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1989-01-01
An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.
Maintaining consistency between planning hierarchies: Techniques and applications
NASA Technical Reports Server (NTRS)
Zoch, David R.
1987-01-01
In many planning and scheduling environments, it is desirable to be able to view and manipulate plans at different levels of abstraction, allowing users the option of viewing and manipulating either a very detailed representation of the plan or a high-level, more abstract version of the plan. Generating a detailed plan from a more abstract plan requires domain-specific planning/scheduling knowledge; the reverse process of generating a high-level plan from a detailed plan (Reverse Plan Maintenance, or RPM) requires having the system remember the actions it took based on its domain-specific knowledge and its reasons for taking those actions. This reverse plan maintenance process is described as implemented in a specific planning and scheduling tool, the Mission Operations Planning Assistant (MOPA), along with the applications of RPM to other planning and scheduling problems, emphasizing the knowledge that is needed to maintain the correspondence between the different hierarchical planning levels.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2014-01-01
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. This learning approach results in networks which are recognized for their noise tolerance and ability to generalize meaningful responses for novel stimuli. It is these properties of ANNs which make them appealing for applications to bioinformatics problems where interpretation of data may not always be obvious, and where the domain knowledge required for deductive techniques is incomplete or can cause a combinatorial explosion of rules. In this paper, we provide an introduction to artificial neural network theory and review some interesting recent applications to bioinformatics problems.
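As a minimal concrete instance of the learning scheme described above (repeated evaluation of exemplars of a stimulus-response mapping), the following self-contained Python/NumPy sketch trains a one-hidden-layer network on the XOR mapping by gradient descent. Everything here, the architecture, learning rate, and iteration count, is an arbitrary illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # stimuli
y = np.array([[0], [1], [1], [0]], dtype=float)              # responses (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                    # repeated evaluation of exemplars
    h = sigmoid(X @ W1 + b1)              # hidden activations
    out = sigmoid(h @ W2 + b2)            # network responses
    d_out = (out - y) * out * (1 - out)   # backpropagated squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2))                       # responses approach [0, 1, 1, 0]
```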
Scaling Up Decision Theoretic Planning to Planetary Rover Problems
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Dearden, Richard; Washington, Rich
2004-01-01
Because of communication limits, planetary rovers must operate autonomously for extended durations. The ability to plan under uncertainty is one of the main components of autonomy. Previous approaches to planning under uncertainty in NASA applications are not able to address the challenges of future missions because of several apparent limits. On the other hand, decision theory provides a solid, principled framework for reasoning about uncertainty and rewards. Unfortunately, there are several obstacles to a direct application of decision-theoretic techniques to the rover domain. This paper focuses on the issues of structure and concurrency, and of continuous state variables. We describe two techniques currently under development that specifically address these issues and allow scaling up decision-theoretic solution techniques to planetary rover planning problems involving a small number of goals.
Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D
ERIC Educational Resources Information Center
Doliotis, Paul
2013-01-01
The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…
Concepts of Concurrent Programming
1990-04-01
Carriero, N., and Gelernter, D. "How to Write Parallel Programs: A Guide to the Perplexed." ACM (Sept. 1989), 251-510. The report relates the architectures on which programs can be executed to the application domains from which problems are drawn; the goal is to show how programs are written for them.
Leaky Waves in Metamaterials for Antenna Applications
2011-07-01
In excitation problems, electromagnetic fields are often represented as Sommerfeld integrals [31,32]. A detailed discussion of the Sommerfeld integral path is presented, covering the spectral domain approach, the use of the Sommerfeld integral path for evaluating fields accurately and efficiently, and the radiation intensity and directivity of electric/magnetic dipoles over a grounded substrate.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Ancel, Ersin; Jones, Sharon Monica; Reveley, Mary S.; Luxhoj, James T.
2012-01-01
Aviation is a problem domain characterized by a high level of system complexity and uncertainty. Safety risk analysis in such a domain is especially challenging given the multitude of operations and diverse stakeholders. The Federal Aviation Administration (FAA) projects that by 2025 air traffic will increase by more than 50 percent, with 1.1 billion passengers a year and more than 85,000 flights every 24 hours, contributing to further delays and congestion in the sky (Circelli, 2011). This increased system complexity necessitates the application of structured safety risk analysis methods to understand risk factors and, where possible, eliminate, reduce, and/or mitigate them. The use of expert judgments for probabilistic safety analysis in such a complex domain is necessary, especially when evaluating the projected impact of future technologies, capabilities, and procedures for which current operational data may be scarce. Management of an R&D product portfolio in such a dynamic domain needs a systematic process to elicit these expert judgments, process modeling results, perform sensitivity analyses, and efficiently communicate the modeling results to decision makers. In this paper, a case study focusing on the application of an R&D portfolio of aeronautical products intended to mitigate aircraft Loss of Control (LOC) accidents is presented. In particular, the knowledge elicitation process with three subject matter experts who contributed to the safety risk model is emphasized. The application and refinement of a verbal-numerical scale for conditional probability elicitation in a Bayesian Belief Network (BBN) is discussed. The preliminary findings from this initial step of a three-part elicitation are important to project management practitioners, as they illustrate the vital contribution of systematic knowledge elicitation in complex domains.
NASA Technical Reports Server (NTRS)
Tolliver, C. L.
1989-01-01
The quest for the highest-resolution microwave imaging and the principles of time-domain imaging have been the primary motivation for recent developments in time-domain techniques. With present technology, fast time-varying signals can now be measured and recorded both in magnitude and in phase. This has also enhanced our ability to extract relevant details concerning the scattering object. In the past, the inference of object geometry or shape from scattered signals has received substantial attention in radar technology, and various scattering theories were proposed to develop analytical solutions to this problem. Furthermore, Radon inversion, frequency-swept holography, and synthetic radar imaging, which have two things in common, (1) the physical-optics far-field approximation and (2) the utilization of channels as an extra physical dimension, were also advanced. Despite the inherent vectorial nature of electromagnetic waves, these scalar treatments have brought forth some promising results in practice, with notable examples in subsurface and structure sounding. The development of time-domain techniques is studied through theoretical aspects as well as experimental verification. The use of time-domain imaging for space robotic vision applications has been suggested.
Regularization techniques for backward-in-time evolutionary PDE problems
NASA Astrophysics Data System (ADS)
Gustafsson, Jonathan; Protas, Bartosz
2007-11-01
Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, where disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated with a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from Control Theory.
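The mechanism at issue, exponential amplification of high wavenumbers during the backward march, tempered by an added regularization term, shows up already in a linear toy problem. The Python sketch below marches the 1D periodic heat equation (a stand-in for the KSE, chosen so each Fourier mode can be stepped exactly) backward from terminal data; the hypothetical fourth-order regularization of strength `eps` damps exactly the modes that would otherwise blow up.

```python
import numpy as np

def backward_march(u_T, T, nsteps, L=2 * np.pi, eps=1e-3):
    """March u_t = u_xx backward from terminal data u(T) on a periodic
    domain, regularized by a fourth-order damping term of strength eps."""
    N = u_T.size
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    dt = T / nsteps
    u_hat = np.fft.fft(u_T)
    for _ in range(nsteps):
        # The backward heat step amplifies each mode by exp(k^2 dt); the
        # regularization contributes exp(-eps k^4 dt), so wavenumbers
        # beyond 1/sqrt(eps) are damped instead of amplified.
        u_hat *= np.exp((k ** 2 - eps * k ** 4) * dt)
    return np.real(np.fft.ifft(u_hat))
```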
An improved cylindrical FDTD method and its application to field-tissue interaction study in MRI.
Chi, Jieru; Liu, Feng; Xia, Ling; Shao, Tingting; Mason, David G; Crozier, Stuart
2010-01-01
This paper presents a three-dimensional finite-difference time-domain (FDTD) scheme in cylindrical coordinates with an improved algorithm for accommodating the numerical singularity associated with the polar axis. The regularization of this singularity problem is based entirely on Ampere's law. The proposed algorithm is detailed and verified against a problem with a known solution obtained from a commercial electromagnetic simulation package. The numerical scheme is also illustrated by modeling high-frequency RF field-human body interactions in MRI. The results demonstrate the accuracy and capability of the proposed algorithm.
A Robust Image Watermarking in the Joint Time-Frequency Domain
NASA Astrophysics Data System (ADS)
Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın
2010-12-01
With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite; the strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available; it was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS [version 3]. As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix; this arises in statistical applications as well as in many applications in physics. We explored probing methods as well as domain-decomposition type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach. 13. We explored new ways of preconditioning linear systems, based on low-rank approximations.
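The basic pattern studied throughout the project, an incomplete-LU preconditioner accelerating a Krylov solver, can be reproduced in a few lines with SciPy. The sketch below builds a hypothetical 2-D Poisson test matrix and solves it with ILU-preconditioned GMRES; the harder targets listed above (highly indefinite and Helmholtz systems) would call for the shifted ILU variants the project developed.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical test problem: 2-D Poisson matrix on an n-by-n grid.
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(n * n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors
M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner M ~ A^{-1}
x, info = spla.gmres(A, b, M=M)                      # info == 0 on convergence
print(info, np.linalg.norm(A @ x - b))
```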
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches for interpolation have been proposed, both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
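The tapering idea itself is easy to sketch: multiply the fitted covariance by a compactly supported function so the kriging matrix becomes sparse and an iterative solver applies. The hedged Python fragment below does this for the simpler simple-kriging weights only (the paper's contribution, carrying tapering over to ordinary kriging with its unbiasedness constraint, is not reproduced here); `cov`, `theta`, and the Wendland taper are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def wendland_taper(h, theta):
    """Compactly supported Wendland taper: identically zero beyond theta."""
    r = h / theta
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def tapered_weights(coords, query, cov, theta):
    """Simple-kriging-style weights with a tapered, hence sparse, covariance."""
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = sp.csr_matrix(cov(h) * wendland_taper(h, theta))  # sparse system
    hq = np.linalg.norm(coords - query, axis=-1)
    k0 = cov(hq) * wendland_taper(hq, theta)
    w, info = cg(K, k0)               # conjugate gradients exploit sparsity
    return w
```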
Differential Associations of UPPS-P Impulsivity Traits With Alcohol Problems.
McCarty, Kayleigh N; Morris, David H; Hatz, Laura E; McCarthy, Denis M
2017-07-01
The UPPS-P model posits that impulsivity comprises five factors: positive urgency, negative urgency, lack of planning, lack of perseverance, and sensation seeking. Negative and positive urgency are the traits most consistently associated with alcohol problems. However, previous work has examined alcohol problems either individually or in the aggregate, rather than examining multiple problem domains simultaneously. Recent work has also questioned the utility of distinguishing between positive and negative urgency, as this distinction did not meaningfully differ in predicting domains of psychopathology. The aims of this study were to address these issues by (a) testing unique associations of UPPS-P with specific domains of alcohol problems and (b) determining the utility of distinguishing between positive and negative urgency as risk factors for specific alcohol problems. Associations between UPPS-P traits and alcohol problem domains were examined in two cross-sectional data sets using negative binomial regression models. In both samples, negative urgency was associated with social/interpersonal, self-perception, risky behaviors, and blackout drinking problems. Positive urgency was associated with academic/occupational and physiological dependence problems. Both urgency traits were associated with impaired control and self-care problems. Associations for other UPPS-P traits did not replicate across samples. Results indicate that negative and positive urgency have differential associations with alcohol problem domains. Results also suggest a distinction between the type of alcohol problems associated with these traits-negative urgency was associated with problems experienced during a drinking episode, whereas positive urgency was associated with alcohol problems that result from longer-term drinking trends.
Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions
NASA Astrophysics Data System (ADS)
Li, Xiaofei; Lee, Hyundae; Wang, Yuliang
2017-08-01
This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, and is computationally challenging. In this paper we seek to transfer the original problem to a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and rigorously derive the asymptotic expansion of the mean first passage time with high order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for the detection and identification of targets of arbitrary shapes from acoustic backscattered data and the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas of signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency, and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data, and thus it is computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigen-structure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces. The effectiveness of the method is demonstrated on the problem of detecting multiple specular components in acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow water data are provided, which show the promise of this new scheme for target detection in a heavily cluttered environment.
Fast marching methods for the continuous traveling salesman problem.
Andrews, June; Sethian, J A
2007-01-23
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M·N log N), where M is the number of cities and N is the size of the computational mesh used to approximate the solutions to the shortest-path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
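A hedged sketch of the two-level structure: pairwise city-to-city costs come from a grid solver for position-dependent travel cost, and a tour is then chosen over those costs. Here Dijkstra on a 4-connected grid stands in for the fast marching eikonal solver, and for clarity the tour search is exhaustive (fine for a handful of cities) rather than the paper's heuristic; all names are illustrative.

```python
import heapq
import itertools
import numpy as np

def grid_distance_map(cost, src):
    """Dijkstra on a 4-connected grid: a stand-in for fast marching on
    the eikonal equation |grad u| = cost."""
    ny, nx = cost.shape
    dist = np.full((ny, nx), np.inf)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return dist

def cheapest_tour(cost, cities):
    """Exhaustive closed-tour search over the pairwise marching distances."""
    D = {c: grid_distance_map(cost, c) for c in cities}
    best = None
    for perm in itertools.permutations(cities[1:]):
        tour = (cities[0],) + perm + (cities[0],)
        length = sum(D[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best
```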
Dasgupta, Aritra; Poco, Jorge; Wei, Yaxing; ...
2015-03-16
Evaluation methodologies in visualization have mostly focused on how well the tools and techniques cater to the analytical needs of the user. While this is important in determining the effectiveness of the tools and advancing the state-of-the-art in visualization research, a key area that has mostly been overlooked is how well established visualization theories and principles are instantiated in practice. This is especially relevant when domain experts, and not visualization researchers, design visualizations for analysis of their data or for broader dissemination of scientific knowledge. There is very little research on exploring the synergistic capabilities of cross-domain collaboration between domain experts and visualization researchers. To fill this gap, in this paper we describe the results of an exploratory study of climate data visualizations conducted in tight collaboration with a pool of climate scientists. The study analyzes a large set of static climate data visualizations for identifying their shortcomings in terms of visualization design. The outcome of the study is a classification scheme that categorizes the design problems in the form of a descriptive taxonomy. The taxonomy is a first attempt at systematically categorizing the types, causes, and consequences of design problems in visualizations created by domain experts. We demonstrate the use of the taxonomy for a number of purposes, such as improving the existing climate data visualizations, reflecting on the impact of the problems for enabling domain experts to design better visualizations, and learning about the gaps and opportunities for future visualization research. We demonstrate the applicability of our taxonomy through a number of examples and discuss the lessons learnt and implications of our findings.
Pairwise domain adaptation module for CNN-based 2-D/3-D registration.
Zheng, Jiannan; Miao, Shun; Jane Wang, Z; Liao, Rui
2018-04-01
Accurate two-dimensional to three-dimensional (2-D/3-D) registration of preoperative 3-D data and intraoperative 2-D x-ray images is a key enabler for image-guided therapy. Recent advances in 2-D/3-D registration formulate the problem as a learning-based approach and exploit the modeling power of convolutional neural networks (CNN) to significantly improve the accuracy and efficiency of 2-D/3-D registration. However, for surgery-related applications, collecting a large clinical dataset with accurate annotations for training can be very challenging or impractical. Therefore, deep learning-based 2-D/3-D registration methods are often trained with synthetically generated data, and a performance gap is often observed when testing the trained model on clinical data. We propose a pairwise domain adaptation (PDA) module to adapt the model trained on the source domain (i.e., synthetic data) to the target domain (i.e., clinical data) by learning domain-invariant features with only a few paired real and synthetic data. The PDA module is designed to be flexible for different deep learning-based 2-D/3-D registration frameworks, and it can be plugged into any pretrained CNN model just like a simple Batch-Norm layer. The proposed PDA module has been quantitatively evaluated on two clinical applications using different frameworks of deep networks, demonstrating its significant advantages of generalizability and flexibility for 2-D/3-D medical image registration when a small number of paired real-synthetic data can be obtained.
A deep learning framework for causal shape transformation.
Lore, Kin Gwn; Stoecklein, Daniel; Davies, Michael; Ganapathysubramanian, Baskar; Sarkar, Soumik
2018-02-01
Recurrent neural network (RNN) and Long Short-Term Memory (LSTM) networks are the common go-to architectures for exploiting sequential information where the output is dependent on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain, which may not be suitable for applications involving the prediction of a step-wise transformation sequence that depends on the previous states only in the visible domain, with a known terminal state. We propose a hybrid architecture of convolutional neural networks (CNN) and stacked autoencoders (SAE) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support, and demonstrate its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map is very seldom explored. This work serves as a fruitful use-case for applied scientists and engineers of how deep learning can be beneficial as a solution for high-dimensional physical problems, potentially opening doors to impactful advances in fields such as material science and medical biology, where multistep topological transformations are a key element.
Software-engineering challenges of building and deploying reusable problem solvers.
O'Connor, Martin J; Nyulas, Csongor; Tu, Samson; Buckeridge, David L; Okhmatovskaia, Anna; Musen, Mark A
2009-11-01
Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task-method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach.
Working With the Wave Equation in Aeroacoustics: The Pleasures of Generalized Functions
NASA Technical Reports Server (NTRS)
Farassat, F.; Brentner, Kenneth S.; Dunn, Mark H.
2007-01-01
The theme of this paper is the applications of generalized function (GF) theory to the wave equation in aeroacoustics. We start with a tutorial on GFs with particular emphasis on viewing functions as continuous linear functionals. We next define operations on GFs. The operation of interest to us in this paper is generalized differentiation. We give many applications of generalized differentiation, particularly for the wave equation. We discuss the use of GFs in finding Green's function and some subtleties that only GF theory can clarify without ambiguities. We show how the knowledge of the Green's function of an operator L in a given domain D can allow us to solve a whole range of problems with operator L for domains situated within D by the imbedding method. We will show how we can use the imbedding method to find the Kirchhoff formulas for stationary and moving surfaces with ease and elegance without the use of the four-dimensional Green's theorem, which is commonly done. Other subjects covered are why the derivatives in conservation laws should be viewed as generalized derivatives and what are the consequences of doing this. In particular we show how we can imbed a problem in a larger domain for the identical differential equation for which the Green's function is known. The primary purpose of this paper is to convince the readers that GF theory is absolutely essential in aeroacoustics because of its powerful operational properties. Furthermore, learning the subject and using it can be fun.
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier modulation (BOC), as a modernized signal structure, is adopted to achieve significant performance enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability for different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are its high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
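The segmentation-and-averaging idea can be sketched in a few lines. The Python below is our own illustration (function name and parameters are assumptions, not the authors' implementation): it estimates a channel transfer function Welch-style, averaging per-segment cross- and auto-spectra before the spectral division so that noise is suppressed at reduced FFT cost:

    import numpy as np

    def averaged_fft_channel_estimate(received, reference, n_seg=8):
        # Split both signals into segments, accumulate per-segment cross-
        # and auto-spectra, then divide once: averaging suppresses the
        # noise term before the (noise-sensitive) spectral division.
        L = len(received) // n_seg
        Sxy = np.zeros(L, dtype=complex)
        Sxx = np.zeros(L)
        for k in range(n_seg):
            X = np.fft.fft(reference[k * L:(k + 1) * L])
            Y = np.fft.fft(received[k * L:(k + 1) * L])
            Sxy += np.conj(X) * Y
            Sxx += np.abs(X) ** 2
        return Sxy / (Sxx + 1e-12)  # estimated transfer function H(f)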
Various forms of indexing HDMR for modelling multivariate classification problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aksu, Çağrı; Tunga, M. Alper
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real-world data. Mostly, we do not know all possible class values in the domain of the given problem; that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
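As a rough illustration of the HDMR philosophy of representing a multivariate function through lower-variate components, the Python sketch below (our own crude stand-in, not the authors' Indexing HDMR; the binning scheme is an assumption) fits a constant term plus univariate component functions from data:

    import numpy as np

    def first_order_hdmr(X, y, n_bins=10):
        # f(x) ~ f0 + sum_i f_i(x_i): grand mean plus univariate corrections.
        f0 = y.mean()
        comps = []
        for i in range(X.shape[1]):
            edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
            idx = np.clip(np.searchsorted(edges, X[:, i]) - 1, 0, n_bins - 1)
            fi = np.array([(y[idx == b] - f0).mean() if np.any(idx == b) else 0.0
                           for b in range(n_bins)])
            comps.append((edges, fi))
        return f0, comps

    def hdmr_predict(x, f0, comps):
        s = f0
        for (edges, fi), xi in zip(comps, x):
            s += fi[np.clip(np.searchsorted(edges, xi) - 1, 0, len(fi) - 1)]
        return s

Real HDMR constructs the component functions so that they are mutually orthogonal; the binned means here only convey the structure of the expansion.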
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleicher, Frederick; Ortensi, Javier; DeHart, Mark
Accurate calculation of desired quantities to predict fuel behavior requires the solution of interlinked equations representing different physics. Traditional fuels performance codes often rely on internal empirical models for the pin power density and a simplified boundary condition on the cladding edge. These simplifications are made because of the difficulty of coupling applications or codes on differing domains and mapping the required data. To demonstrate an approach closer to first principles, the neutronics application Rattlesnake and the thermal hydraulics application RELAP-7 were coupled to the fuels performance application BISON under the master application MAMMOTH. A single fuel pin was modeled based on the dimensions of a Westinghouse 17x17 fuel rod. The simulation consisted of a depletion period of 1343 days, roughly equal to three full operating cycles, followed by a station blackout (SBO) event. During the depletion period the rod was held at a near-constant total power of 65.81 kW. After 1343 days the fission power was reduced to zero (simulating a reactor shutdown), and decay heat calculations provided the time-varying energy source after this time. For this problem, Rattlesnake, BISON, and RELAP-7 are coupled under MAMMOTH in a split-operator approach. Each system solves its physics on a separate mesh and, for RELAP-7 and BISON, on only a subset of the full problem domain. Rattlesnake solves the neutronics over the whole domain, which includes the fuel, cladding, gaps, water, and top and bottom rod holders. BISON is applied to the fuel and cladding on a 2D axisymmetric domain, and RELAP-7 is applied to the flow of the circular outer water channel with a set of 1D flow equations. The mesh on the Rattlesnake side can be either 3D (for low-order transport) or 2D (for diffusion). BISON has a matching ring-structure mesh for the fuel, so both the power density and the local burn-up are copied accurately from Rattlesnake. At each depletion time step, Rattlesnake calculates a power density, fission density rate, burn-up distribution, and fast flux based on the current water density and fuel temperature. These are then mapped to the BISON mesh for a fuels performance solve. BISON calculates the fuel temperature and cladding surface temperature based upon the current power density and bulk fluid temperature. RELAP-7 then calculates the fluid temperature, water density fraction, and water phase velocity based upon the cladding surface temperature. The fuel temperature and the fluid density are then passed back to Rattlesnake for another neutronics calculation. Six Picard (fixed-point) iterations are performed in this manner to obtain consistent, tightly coupled, and stable results. For this paper, results from the detailed calculation are provided both during depletion and during the SBO event. We demonstrate that a detailed calculation closer to first principles can be done under MAMMOTH between different applications on differing domains.
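The split-operator coupling loop described above can be sketched generically. The Python below is a hypothetical stand-in (the solver callables, state variables, and toy physics are our assumptions, not the MAMMOTH interfaces), showing the six-pass Picard iteration pattern:

    def picard_coupled_step(neutronics, fuels, thermal_hydraulics, state,
                            n_picard=6):
        # One tightly coupled time step: cycle the three single-physics
        # solves until the exchanged fields settle (fixed point).
        for _ in range(n_picard):
            power = neutronics(state['fuel_T'], state['water_rho'])
            fuel_T, clad_T = fuels(power, state['fluid_T'])
            fluid_T, water_rho = thermal_hydraulics(clad_T)
            state.update(fuel_T=fuel_T, fluid_T=fluid_T, water_rho=water_rho)
        return state

    # Toy stand-in solvers (purely illustrative feedback relations):
    state = {'fuel_T': 600.0, 'fluid_T': 560.0, 'water_rho': 0.75}
    state = picard_coupled_step(
        lambda Tf, rho: 2.0e8 * rho / Tf,              # power from feedback
        lambda q, Tb: (Tb + 1e-6 * q, Tb + 5e-7 * q),  # fuel and clad temps
        lambda Tc: (0.98 * Tc, 740.0 / Tc),            # fluid temp, density
        state)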
Medical image segmentation using genetic algorithms.
Maulik, Ujjwal
2009-03-01
Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove effective in escaping local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
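A minimal illustration of casting segmentation as search over a multimodal landscape is the toy GA below (our own sketch, not from the review; it optimizes a simple between-class-variance threshold, whereas the surveyed work evolves far richer segmentation parameters):

    import numpy as np

    def ga_threshold(image, pop=20, gens=50, seed=0):
        rng = np.random.default_rng(seed)
        def fitness(t):
            fg, bg = image[image >= t], image[image < t]
            if fg.size == 0 or bg.size == 0:
                return 0.0
            # Otsu-style between-class variance (up to a constant factor).
            return fg.size * bg.size * (fg.mean() - bg.mean()) ** 2
        thresholds = rng.uniform(image.min(), image.max(), pop)
        for _ in range(gens):
            scores = np.array([fitness(t) for t in thresholds])
            parents = thresholds[np.argsort(scores)][-(pop // 2):]   # selection
            children = (rng.choice(parents, pop // 2) +
                        rng.choice(parents, pop // 2)) / 2           # crossover
            children += rng.normal(0, 0.05 * image.std(), pop // 2)  # mutation
            thresholds = np.concatenate([parents, children])
        return max(thresholds, key=fitness)

On a bimodal intensity histogram this converges to a threshold between the modes; practical GA segmenters instead evolve contour or cluster parameters.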
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Creating Impact with Operations Research in Health: Making Room for Practice in Academia
Brandeau, Margaret L.
2015-01-01
Operations research (OR)-based analyses have the potential to improve decision making for many important, real-world health care problems. However, junior scholars often avoid working on practical applications in health because promotion and tenure processes tend to value theoretical studies more highly than applied studies. This paper discusses the author's experiences in using OR to inform and influence decisions in health and provides a blueprint for junior researchers who wish to find success by taking a similar path. This involves selecting good problems to study, forming productive collaborations with domain experts, developing appropriate models, identifying the most salient results from an analysis, and effectively disseminating findings to decision makers. The paper then suggests how journals, funding agencies, and senior academics can encourage such work by taking a broader and more informed view of the potential role and contributions of OR to solving health care problems. Making room in academia for the application of OR in health follows in the tradition begun by the founders of operations research: to work on important real-world problems where operations research can contribute to better decision making. PMID:26003321
NASA Astrophysics Data System (ADS)
Cheng, C. M.; Peng, Z. K.; Zhang, W. M.; Meng, G.
2017-03-01
Nonlinear problems have drawn great interest and extensive attention from engineers, physicists, mathematicians and many other scientists because most real systems are inherently nonlinear in nature. To model and analyze nonlinear systems, many mathematical theories and methods have been developed, including the Volterra series. In this paper, the basic definition of the Volterra series is recapitulated, together with some frequency-domain concepts derived from it, including the generalized frequency response function (GFRF), the nonlinear output frequency response function (NOFRF), the output frequency response function (OFRF) and the associated frequency response function (AFRF). The relationship between the Volterra series and other nonlinear system models and nonlinear problem-solving methods is discussed, including the Taylor series, Wiener series, NARMAX model, Hammerstein model, Wiener model, Wiener-Hammerstein model, harmonic balance method, perturbation method and Adomian decomposition. The challenging problems in series convergence and kernel identification, and the state of the art in both, are comprehensively introduced. In addition, a detailed review is given of the applications of the Volterra series in mechanical engineering, aeroelasticity, control engineering, and electronic and electrical engineering.
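For readers new to the series, a discrete-time second-order Volterra model is easy to evaluate directly; the following sketch (our own minimal example, not from the review) computes the output of given first- and second-order kernels:

    import numpy as np

    def volterra_output(u, h1, h2):
        # y[n] = sum_i h1[i] u[n-i] + sum_{i,j} h2[i,j] u[n-i] u[n-j]
        N, M = len(u), len(h1)
        y = np.zeros(N)
        for n in range(N):
            for i in range(min(M, n + 1)):
                y[n] += h1[i] * u[n - i]
                for j in range(min(M, n + 1)):
                    y[n] += h2[i, j] * u[n - i] * u[n - j]
        return y

The GFRF and related frequency-domain concepts are obtained by Fourier-transforming kernels such as h1 and h2.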
Innovation design of medical equipment based on TRIZ.
Gao, Changqing; Guo, Leiming; Gao, Fenglan; Yang, Bo
2015-01-01
Medical equipment is closely related to personal health and safety, and this can be of concern to the equipment user. Furthermore, there is much competition among medical equipment manufacturers, and innovative design is the key to success for those enterprises. The design of medical equipment usually covers vastly different domains of knowledge, so the application of modern design methodology to medical equipment and technology invention is an urgent requirement. TRIZ (a Russian abbreviation that can be translated as "theory of inventive problem solving") originated in Russia and contains problem-solving methods developed from the analysis of patents around the world, including the Conflict Matrix, Substance-Field Analysis, Standard Solutions, and Effects. As an engineering example, an infusion system is analyzed and re-designed using TRIZ; an innovative idea is generated that frees the caretaker from having to watch the infusion bag. The research in this paper shows the process of applying TRIZ in medical device invention. It demonstrates that TRIZ is an inventive methodology for problem solving and can be used widely in medical device development.
Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.
Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang
2017-08-25
We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high level, via parts and attributes, and at the low level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain alignment that exploits both subspace and instance-level cues to better align the domains. Finally, (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high-order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a Python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
NASA Astrophysics Data System (ADS)
Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas
2017-04-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high-order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a Python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
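The mixin-based separation of physics from numerics can be caricatured in a few lines; the Python sketch below (a hypothetical analogue of our own, not the Salvus C++ API) shows a physics layer written against whatever numerical core supplies the discrete operators:

    import numpy as np

    class FiniteDifferenceCore:
        # Stand-in "numerical core": supplies optimized discrete operators.
        def __init__(self, dx):
            self.dx = dx
        def laplacian(self, u):
            # periodic second difference; a real core would hide SEM kernels
            return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / self.dx ** 2

    class AcousticPhysicsMixin:
        # "Physics layer": equation-level logic only, written against
        # whatever core provides laplacian().
        def step(self, u, u_prev, c, dt):
            return 2 * u - u_prev + (c * dt) ** 2 * self.laplacian(u)

    class AcousticSolver(AcousticPhysicsMixin, FiniteDifferenceCore):
        pass  # swap in a different core without touching the physics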
Bi-criteria travelling salesman subtour problem with time threshold
NASA Astrophysics Data System (ADS)
Kumar Thenepalle, Jayanth; Singamsetty, Purusotham
2018-03-01
This paper deals with the bi-criteria travelling salesman subtour problem with time threshold (BTSSP-T), which comes from the family of the travelling salesman problem (TSP) and is NP-hard in the strong sense. The problem arises in several application domains, mainly in routing and scheduling contexts. Here, the model focuses on two criteria: total travel distance and gains attained. The BTSSP-T aims to determine a subtour that starts and ends at the same city and visits a subset of cities at a minimum travel distance with maximum gains, such that the time spent on the tour does not exceed the predefined time threshold. A zero-one integer-programming formulation is adopted for this model with all practical constraints, and it includes a finite set of feasible solutions (one for each tour). Two algorithms, namely the Lexi-Search Algorithm (LSA) and the Tabu Search (TS) algorithm, have been developed to solve the BTSSP-T. The proposed LSA implicitly enumerates the feasible patterns and provides an efficient solution with backtracking, whereas TS, a metaheuristic, yields a good approximate solution. A numerical example demonstrates the search mechanism of the LSA. Numerical experiments are carried out in the MATLAB environment on benchmark instances from the TSPLIB domain as well as on randomly generated test instances. The experimental results show that the proposed LSA works better than the TS algorithm in terms of solution quality and that, computationally, LSA and TS are competitive.
Automating the Transformational Development of Software. Volume 1.
1983-03-01
The DRACO system [Neighbors 80] uses meta-rules to derive information about which new transformations will be applicable after a particular transformation has ... transformation over another. The new model, as incorporated in a system called Glitter, explicitly represents transformation goals, methods, and selection ... done anew for each new problem (compare this with Neighbors' Draco system [Neighbors 80], which attempts to reuse domain analysis). ... Is the user
A Definitive Work on Factors Impacting the Arming of Unmanned Vehicles
2005-05-01
making software, which can then adjust the system to compensate for unusual activity. Behavioral models that better predict how applications, networks ... network structure of nodes and arcs. The arcs describe relations between nodes, and nodes represent objects, concepts, or events. It uses the arc ... tackling things that occur outside the known problem domain. There are some newer, natural-based approaches. One deals with neural networks. There is no
Towards a Cross-Domain MapReduce Framework
2013-11-01
These Big Data applications typically run as a set of MapReduce jobs to take advantage of Hadoop's ease of service deployment and large-scale ... parallelism. Yet, Hadoop has not been adapted for multilevel secure (MLS) environments where data of different security classifications co-exist. To solve ... multilevel security. The US Department of Defense (DoD) and US Intelligence Community (IC) recognize they have a Big Data problem
Skill Acquisition: Compilation of Weak-Method Problem Solutions.
ERIC Educational Resources Information Center
Anderson, John R.
According to the ACT theory of skill acquisition, cognitive skills are encoded by a set of productions, which are organized according to a hierarchical goal structure. People solve problems in new domains by applying weak problem-solving procedures to declarative knowledge they have about this domain. From these initial problem solutions,…
Schmidmaier, Ralf; Eiber, Stephan; Ebersbach, Rene; Schiller, Miriam; Hege, Inga; Holzer, Matthias; Fischer, Martin R
2013-02-22
Medical knowledge encompasses both conceptual (facts or "what" information) and procedural knowledge ("how" and "why" information). Conceptual knowledge is known to be an essential prerequisite for clinical problem solving. Primarily, medical students learn from textbooks and often struggle with the process of applying their conceptual knowledge to clinical problems. Recent studies address the question of how to foster the acquisition of procedural knowledge and its application in medical education. However, little is known about the factors which predict performance in procedural knowledge tasks: which additional characteristics of the learner predict performance in procedural knowledge? Domain-specific conceptual knowledge (facts) in clinical nephrology was provided to 80 medical students (3rd to 5th year) using electronic flashcards in a laboratory setting. Learner characteristics were obtained by questionnaires. Procedural knowledge in clinical nephrology was assessed by key feature problems (KFP) and problem-solving tasks (PST), reflecting strategic and conditional knowledge, respectively. Results in the procedural knowledge tests (KFP and PST) correlated significantly with each other. In univariate analysis, performance in procedural knowledge (sum of KFP+PST) was significantly correlated with (1) the results in the conceptual knowledge test (CKT), (2) the intended future career as a hospital-based doctor, (3) the duration of clinical clerkships, and (4) the results in the written German National Medical Examination Part I on preclinical subjects (NME-I). After multiple regression analysis, only clinical clerkship experience and NME-I performance remained independent influencing factors. Performance in procedural knowledge tests seems independent of the degree of domain-specific conceptual knowledge above a certain level. Procedural knowledge may be fostered by clinical experience. More attention should be paid to the interplay of individual clinical clerkship experiences and structured teaching of procedural knowledge and its assessment in medical education curricula.
Farner, Snorre; Vergez, Christophe; Kergomard, Jean; Lizée, Aude
2006-03-01
The harmonic balance method (HBM) was originally developed for finding periodic solutions of electronic and mechanical systems under a periodic force, but has been adapted to self-sustained musical instruments. Unlike time-domain methods, this frequency-domain method does not capture transients and so is not adapted for sound synthesis. However, its independence of time makes it very useful for studying any periodic solution, whether stable or unstable, without concern for particular initial conditions in time. A computer program based on the HBM, HARMBAL, has been developed for solving general problems involving a nonlinearly coupled exciter and resonator. The method, as well as convergence improvements and continuation facilities, is thoroughly presented and discussed in the present paper. Applications of the method are demonstrated, especially on problems with severe convergence difficulties: the Helmholtz motion (square signals) of single-reed instruments when no losses are taken into account, the reed being modeled as a simple spring.
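The essence of the HBM is easy to demonstrate on a textbook oscillator. The sketch below (our own one-harmonic example for a Duffing equation, far simpler than HARMBAL's multi-harmonic exciter-resonator coupling) balances the fundamental component and solves the resulting algebraic equation:

    import numpy as np
    from scipy.optimize import fsolve

    def duffing_hbm_amplitude(omega, alpha=1.0, beta=0.5, F=1.0):
        # Balance the fundamental of x'' + alpha*x + beta*x^3 = F*cos(omega*t)
        # with the ansatz x(t) = a*cos(omega*t); cos^3 contributes (3/4)*a^3
        # to the fundamental, higher harmonics are dropped.
        def residual(a):
            return (alpha - omega ** 2) * a + 0.75 * beta * a ** 3 - F
        return fsolve(residual, 1.0)[0]

    print(duffing_hbm_amplitude(1.2))  # amplitude of the periodic solution

Because the unknown is the amplitude rather than a trajectory, unstable periodic solutions are found just as easily as stable ones, which is the property the abstract highlights.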
Assessing life stressors and social resources: applications to alcoholic patients.
Moos, R H; Fenn, C B; Billings, A G; Moos, B S
A growing body of evidence points to the importance of life stressors and social resources in the development and course of alcoholism and other substance abuse disorders. This article describes the Life Stressors and Social Resources Inventory (LISRES), which provides an integrated assessment of life stressors and social resources in eight domains: physical health, home/neighborhood, financial, work, spouse/partner, children, extended family, and friends. The indices were developed on data obtained at two points in time 18 months apart from four demographically comparable groups: alcoholic patients, depressed patients, arthritic patients, and non-problem-drinking adults. As expected, alcoholic patients reported more acute and chronic stressors and fewer social resources than did non-problem-drinking adults. More important, the indices were predictively related to changes in alcohol consumption, drinking problems, depression, and self-confidence. Procedures such as the LISRES have some potential clinical and research applications and may be helpful in examining the process of recovery and relapse in substance abuse disorders.
Use of the Collaborative Optimization Architecture for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Moore, A. A.; Kroo, I. M.
1996-01-01
Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on the application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization
Merging Applicability Domains for in Silico Assessment of Chemical Mutagenicity
2014-02-04
molecular fingerprints as descriptors for developing quantitative structure-activity relationship (QSAR) models and defining applicability domains with ... used to define and quantify an applicability domain for either method. The importance of using applicability domains in QSAR modeling cannot be ... domain from roughly 80% to 90%. These results indicated that the proposed QSAR protocol constituted a highly robust chemical mutagenicity prediction
The NIMH Research Domain Criteria Initiative: Background, Issues, and Pragmatics.
Kozak, Michael J; Cuthbert, Bruce N
2016-03-01
This article describes the National Institute of Mental Health's Research Domain Criteria (RDoC) initiative. The description includes background, rationale, goals, and the way the initiative has been developed and organized. The central RDoC concepts are summarized and the current matrix of constructs that have been vetted by workshops of extramural scientists is depicted. A number of theoretical and methodological issues that can arise in connection with the nature of RDoC constructs are highlighted: subjectivism and heterophenomenology, desynchrony and theoretical neutrality among units of analysis, theoretical reductionism, endophenotypes, biomarkers, neural circuits, construct "grain size," and analytic challenges. The importance of linking RDoC constructs to psychiatric clinical problems is discussed. Some pragmatics of incorporating RDoC concepts into applications for NIMH research funding are considered, including sampling design.
Time-frequency domain SNR estimation and its application in seismic data processing
NASA Astrophysics Data System (ADS)
Zhao, Yan; Liu, Yang; Li, Xuxuan; Jiang, Nansen
2014-08-01
Based on an approach for estimating the frequency-domain signal-to-noise ratio (FSNR), we propose a method to evaluate the time-frequency-domain signal-to-noise ratio (TFSNR). This method adopts the short-time Fourier transform (STFT) to estimate the instantaneous power spectra of signal and noise, and uses their ratio to compute the TFSNR. Unlike the FSNR, which describes the variation of SNR with frequency only, the TFSNR depicts the variation of SNR with both time and frequency, and thus better handles non-stationary seismic data. By considering the TFSNR, we develop methods to improve inverse Q filtering and high-frequency noise attenuation in seismic data processing. Inverse Q filtering considering the TFSNR better controls the amplification of noise. The high-frequency noise attenuation method considering the TFSNR, unlike other de-noising methods, distinguishes and suppresses noise using an explicit criterion. Examples on synthetic and real seismic data illustrate the correctness and effectiveness of the proposed methods.
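A minimal sketch of a time-frequency SNR map follows (our own illustration, assuming separate signal and noise estimates are available; the authors' estimator details differ):

    import numpy as np
    from scipy.signal import stft

    def tfsnr(signal_est, noise_est, fs, nperseg=128):
        # Ratio of STFT power of the estimated signal to that of the
        # estimated noise: an SNR that varies with both time and frequency.
        _, _, S = stft(signal_est, fs, nperseg=nperseg)
        _, _, N = stft(noise_est, fs, nperseg=nperseg)
        return np.abs(S) ** 2 / (np.abs(N) ** 2 + 1e-12)

Such a map can then gate a de-noising or inverse Q filter so that gain is applied only where the local SNR supports it.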
The Ames-Lockheed orbiter processing scheduling system
NASA Technical Reports Server (NTRS)
Zweben, Monte; Gargan, Robert
1991-01-01
A general purpose scheduling system and its application to Space Shuttle Orbiter processing at the Kennedy Space Center (KSC) are described. Orbiter processing entails all the inspection, testing, repair, and maintenance necessary to prepare the Shuttle for launch and takes place within the Orbiter Processing Facility (OPF) at KSC, the Vehicle Assembly Building (VAB), and on the launch pad. The problems are extremely combinatorial in that there are thousands of tasks, resources, and other temporal considerations that must be coordinated. Researchers are building a scheduling tool that they hope will be an integral part of automating the planning and scheduling process at KSC. The scheduling engine is domain independent and is also being applied to Space Shuttle cargo processing problems as well as wind tunnel scheduling problems.
A projection method for coupling two-phase VOF and fluid structure interaction simulations
NASA Astrophysics Data System (ADS)
Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro
2018-02-01
The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling an FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field onto the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back onto the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.
Human Factors in Financial Trading
Leaver, Meghan; Reader, Tom W.
2016-01-01
Objective This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Background Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors–related issues in operational trading incidents. Method In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Results Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors–related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. Conclusion We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. Application This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. PMID:27142394
Qualitative Constraint Reasoning For Image Understanding
NASA Astrophysics Data System (ADS)
Perry, John L.
1987-05-01
Military planners and analysts are exceedingly concerned with increasing the effectiveness of command and control (C2) processes for battlefield management (BM). A variety of technical approaches have been taken in this effort. These approaches are intended to support and assist commanders in situation assessment, course of action generation and evaluation, and other C2 decision-making tasks. A specific task within this technology support includes the ability to effectively gather information concerning opposing forces and plan/replan tactical maneuvers. Much of the information that is gathered is image-derived, along with collateral data supporting this visual imagery. In this paper, we describe a process called qualitative constraint reasoning (QCR), which is being developed as a mechanism for reasoning in the mid- to high-level vision domain. The essential element of QCR is the abstraction process. One of the factors that is unique to QCR is the level at which the abstraction process occurs relative to the problem domain. The computational mechanisms used in QCR belong to a general class of problems called consistent labeling problems. The success of QCR lies in its ability to abstract out from a visual domain a structure appropriate for applying the labeling procedure. An example is given that illustrates the abstraction process for a battlefield management application. Exploratory activities are underway to investigate the suitability of the QCR approach for the battlefield scenario. Further research is required to investigate the utility of QCR in a more complex battlefield environment.
Atkins, Lou; Francis, Jill; Islam, Rafat; O'Connor, Denise; Patey, Andrea; Ivers, Noah; Foy, Robbie; Duncan, Eilidh M; Colquhoun, Heather; Grimshaw, Jeremy M; Lawton, Rebecca; Michie, Susan
2017-06-21
Implementing new practices requires changes in the behaviour of relevant actors, and this is facilitated by understanding of the determinants of current and desired behaviours. The Theoretical Domains Framework (TDF) was developed by a collaboration of behavioural scientists and implementation researchers who identified theories relevant to implementation and grouped constructs from these theories into domains. The collaboration aimed to provide a comprehensive, theory-informed approach to identify determinants of behaviour. The first version was published in 2005, and a subsequent version following a validation exercise was published in 2012. This guide offers practical guidance for those who wish to apply the TDF to assess implementation problems and support intervention design. It presents a brief rationale for using a theoretical approach to investigate and address implementation problems, summarises the TDF and its development, and describes how to apply the TDF to achieve implementation objectives. Examples from the implementation research literature are presented to illustrate relevant methods and practical considerations. Researchers from Canada, the UK and Australia attended a 3-day meeting in December 2012 to build an international collaboration among researchers and decision-makers interested in advancing the use of the TDF. The participants were experienced in using the TDF to assess implementation problems, design interventions, and/or understand change processes. This guide is an output of the meeting and also draws on the authors' collective experience. Examples from the implementation research literature judged by the authors to be representative of specific applications of the TDF are included in this guide. We explain and illustrate methods, with a focus on qualitative approaches, for selecting and specifying target behaviours key to implementation, selecting the study design, deciding the sampling strategy, developing study materials, collecting and analysing data, and reporting findings of TDF-based studies. Areas for development include methods for triangulating data, e.g. from interviews, questionnaires and observation, and methods for designing interventions based on TDF-based problem analysis. We offer this guide to the implementation community to assist in the application of the TDF to achieve implementation objectives. Benefits of using the TDF include the provision of a theoretical basis for implementation studies, good coverage of potential reasons for slow diffusion of evidence into practice and a method for progressing from theory-based investigation to intervention.
Approximate solution of the multiple watchman routes problem with restricted visibility range.
Faigl, Jan
2010-10-01
In this paper, a new self-organizing map (SOM)-based adaptation procedure is proposed to address the multiple watchman route problem with restricted visibility range in a polygonal domain W. A watchman route is represented by a ring of connected neuron weights that evolves in W, while obstacles are taken into account by approximating the shortest path. The adaptation procedure considers the coverage of W by the ring in order to attract nodes toward uncovered parts of W. The proposed procedure is experimentally verified in a set of environments and for several visibility ranges. Performance of the procedure is compared with a decoupled approach based on solutions of the art gallery problem and the consecutive traveling salesman problem. The experimental results show the suitability of the proposed procedure, based on relatively simple supporting geometrical structures, enabling application of SOM principles to watchman route problems in W.
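The ring-adaptation idea can be sketched without the obstacle-aware distances and coverage-attraction term of the paper. The following Python sketch (our own simplification; the parameters and schedules are assumptions) evolves a ring of neuron weights toward city locations in the plane:

    import numpy as np

    def som_ring_tour(cities, n_nodes=None, iters=5000, seed=0):
        rng = np.random.default_rng(seed)
        n = n_nodes or 3 * len(cities)
        ring = cities.mean(axis=0) + 0.1 * rng.standard_normal((n, 2))
        for t in range(iters):
            city = cities[rng.integers(len(cities))]
            w = np.argmin(np.linalg.norm(ring - city, axis=1))  # winner
            radius = max(1.0, (n / 8) * (1 - t / iters))        # shrinks
            k = np.arange(n)
            d = np.minimum(np.abs(k - w), n - np.abs(k - w))    # ring distance
            h = np.exp(-(d / radius) ** 2)                      # neighbourhood
            ring += 0.5 * (1 - t / iters) * h[:, None] * (city - ring)
        return ring

Reading off the order in which the cities capture ring nodes yields the closed route; the paper additionally bends these attractions along approximate shortest paths around obstacles.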
Benefits of Enterprise Ontology for the Development of ICT-Based Value Networks
NASA Astrophysics Data System (ADS)
Albani, Antonia; Dietz, Jan L. G.
The competitiveness of value networks is highly dependent on the cooperation between business partners and the interoperability of their information systems. Innovations in information and communication technology (ICT), primarily the emergence of the Internet, offer possibilities to increase the interoperability of information systems and therefore enable inter-enterprise cooperation. For the design of inter-enterprise information systems, the concept of business component appears to be very promising. However, the identification of business components is strongly dependent on the appropriateness and the quality of the underlying business domain model. The ontological model of an enterprise - or an enterprise network - as presented in this article, is a high-quality and very adequate business domain model. It provides all essential information that is necessary for the design of the supporting information systems, and at a level of abstraction that makes it also understandable for business people. The application of enterprise ontology for the identification of business components is clarified. To exemplify our approach, a practical case is taken from the domain of strategic supply network development. By doing this, a widespread problem of the practical application of inter-enterprise information systems is being addressed.
A frequency-domain approach to improve ANNs generalization quality via proper initialization.
Chaari, Majdi; Fekih, Afef; Seibi, Abdennour C; Hmida, Jalel Ben
2018-08-01
The ability to train a network without memorizing the input/output data, thereby allowing a good predictive performance when applied to unseen data, is paramount in ANN applications. In this paper, we propose a frequency-domain approach to evaluate the network initialization in terms of quality of training, i.e., generalization capabilities. As an alternative to the conventional time-domain methods, the proposed approach eliminates the approximate nature of network validation using an excess of unseen data. The benefits of the proposed approach are demonstrated using two numerical examples, where two trained networks performed similarly on the training and the validation data sets, yet they revealed a significant difference in prediction accuracy when tested using a different data set. This observation is of utmost importance in modeling applications requiring a high degree of accuracy. The efficiency of the proposed approach is further demonstrated on a real-world problem, where unlike other initialization methods, a more conclusive assessment of generalization is achieved. On the practical front, subtle methodological and implementational facets are addressed to ensure reproducibility and pinpoint the limitations of the proposed approach.
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied directly to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
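For reference, screening in the spatial domain is a pure threshold comparison, as in the sketch below (our own illustration; the paper's contribution is to perform the equivalent thresholding on the DCT coefficients of the JPEG representation instead, avoiding the decompress/process/recompress cycle):

    import numpy as np

    def screen_halftone(image, mask):
        # Tile the threshold mask over the image and binarize by comparison.
        h, w = image.shape
        mh, mw = mask.shape
        tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
        return (image >= tiled).astype(np.uint8) * 255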
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency-domain method-of-moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time-domain finite-difference solution of Maxwell's equations for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on a 32-node hypercube configuration with 128 megabytes of memory was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
Neural network for intelligent query of an FBI forensic database
NASA Astrophysics Data System (ADS)
Uvanni, Lee A.; Rainey, Timothy G.; Balasubramanian, Uma; Brettle, Dean W.; Weingard, Fred; Sibert, Robert W.; Birnbaum, Eric
1997-02-01
Examiner is an automated fired-cartridge-case identification system utilizing a dual-use neural network pattern recognition technology, called the statistical-multiple object detection and location system (S-MODALS), developed by Booz·Allen & Hamilton, Inc. in conjunction with Rome Laboratory. S-MODALS was originally designed for automatic target recognition (ATR) of tactical and strategic military targets using multisensor fusion of electro-optical (EO), infrared (IR), and synthetic aperture radar (SAR) sensors. Since S-MODALS is a learning system readily adaptable to problem domains other than automatic target recognition, the pattern-matching problem of microscopic marks for firearms evidence was analyzed using S-MODALS. The physics; phenomenology; discrimination and search strategies; robustness requirements; and error-level and confidence-level propagation that apply to the pattern-matching problem of military targets were found to be applicable to the ballistic domain as well. The Examiner system uses S-MODALS to rank a set of queried cartridge case images from the most similar to the least similar image in reference to an investigative fired cartridge case image. The paper presents three independent test and evaluation studies of the Examiner system utilizing the S-MODALS technology for the Federal Bureau of Investigation.
NASA Astrophysics Data System (ADS)
Kalsom Yusof, Umi; Nor Akmal Khalid, Mohd
2015-05-01
Semiconductor industries need to adjust constantly to the rapid pace of change in the market, and most manufactured products have a very short life cycle. These scenarios imply the need to improve the efficiency of capacity planning, of which the machine allocation plan is an important and notoriously complex aspect. Various studies have been performed to balance productivity and flexibility in the flexible manufacturing system (FMS), and many approaches have been developed to determine a suitable balance between exploration (global improvement) and exploitation (local improvement). However, little work has focused on machine allocation problems that consider the effects of machine breakdowns. This paper develops a model to minimize the effect of machine breakdowns, thus increasing productivity. The objectives are to minimize system unbalance and makespan and to increase throughput while satisfying technological constraints such as machine time availability. To examine the effectiveness of the proposed model, experiments measuring throughput, system unbalance, and makespan were performed on real industrial datasets using a hybrid of genetic algorithm and harmony search. The aim is to obtain feasible solutions to the domain problem.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load-balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this extension are discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results are presented from running the NASA-based code ADPAC to demonstrate the developed tools for dynamic load balancing.
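The static core of such a balancing decision can be sketched with a greedy longest-processing-time heuristic (our own simplified stand-in; the paper's tools re-evaluate assignments dynamically as machine loads and network speeds change):

    import heapq

    def assign_blocks(block_costs, proc_speeds):
        # Longest-processing-time heuristic: hand the most expensive
        # remaining block to the processor with the smallest predicted
        # finish time (cost scaled by that processor's speed).
        heap = [(0.0, p) for p in range(len(proc_speeds))]
        heapq.heapify(heap)
        assignment = {}
        for b, cost in sorted(enumerate(block_costs), key=lambda bc: -bc[1]):
            load, p = heapq.heappop(heap)
            assignment[b] = p
            heapq.heappush(heap, (load + cost / proc_speeds[p], p))
        return assignment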
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
The domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains; the method is particularly suited to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. The difficulty of obtaining a solution in the whole domain is that the solutions in the subdomains must be combined. For this purpose, an iterative algorithm is created, and numerical experiments are conducted to investigate the effectiveness of the developed algorithm using DDM. For symmetric operators in DDM, Poincaré-Steklov operators [1] are used, but they are not suitable for hydrodynamics problems; in this case, the adjoint equation method [2] and inverse problem theory are used instead. In addition, DDM makes it possible to create algorithms for parallel calculation on multiprocessor computer systems. DDM for a model of the Baltic Sea dynamics is studied numerically, and the results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in Mathematical Physics Problems // Numerical processes and systems, No. 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in Mathematical Physics Problems, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
A Comparison of Solver Performance for Complex Gastric Electrophysiology Models
Sathar, Shameer; Cheng, Leo K.; Trew, Mark L.
2016-01-01
Computational techniques for solving the systems of equations arising in gastric electrophysiology have not previously been studied with a view to solution efficiency. We present a computationally challenging problem: simulating gastric electrophysiology in anatomically realistic stomach geometries with multiple intracellular and extracellular domains. The multiscale nature of the problem and the mesh resolution required to capture geometric and functional features necessitate efficient solution methods if the problem is to be tractable. In this study, we investigated and compared several parallel preconditioners for the linear systems arising from tetrahedral discretisation of electrically isotropic and anisotropic problems, with and without stimuli. The results showed that the isotropic problem was computationally less challenging than the anisotropic problem and that the application of extracellular stimuli increased the workload considerably. Preconditioners based on block Jacobi and algebraic multigrid solvers were found to have the best overall solution times and the lowest iteration counts, respectively. The algebraic multigrid preconditioner would be expected to perform better on large problems. PMID:26736543
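The kind of comparison reported here can be reproduced in miniature with standard tools. The sketch below (our own example using SciPy's CG with a pointwise Jacobi preconditioner on a heterogeneous 2D Laplacian, not the study's gastric models or its block-Jacobi/AMG implementations) counts iterations with and without preconditioning:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    n = 64
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
    # Large diagonal jumps (heterogeneous 'conductivities'): the regime
    # in which diagonal (Jacobi) scaling starts to pay off.
    A = A + sp.diags(10.0 ** np.random.default_rng(0).uniform(0, 3, n * n))
    b = np.ones(n * n)

    def iterations(M=None):
        count = [0]
        cg(A, b, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
        return count[0]

    M_jacobi = LinearOperator(A.shape, lambda x: x / A.diagonal())
    print('unpreconditioned:', iterations(), 'Jacobi:', iterations(M_jacobi))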
Xu, Wei
2007-12-01
This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.
Application of multiple grids topology to supersonic internal/external flow interactions
NASA Technical Reports Server (NTRS)
Kathong, M.; Tiwari, S. N.; Smith, R. E.
1988-01-01
For many aerodynamic applications, it is very difficult to construct a smooth body-fitted grid around complex configurations. An approach, called 'multiple grids' or 'zonal grids', which subdivides the entire physical domain into several subdomains, is used to overcome such difficulties. The approach is applied to obtain the solutions to the Euler equations for the supersonic internal/external flow around a fighter-aircraft configuration. Steady-state solutions are presented for Mach 2 at 0, 3.79, 7, and 10 deg angles-of-attack. The problem of conservative treatment at the zonal interfaces is also addressed.
ERIC Educational Resources Information Center
She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua
2012-01-01
This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…
Domain identification in impedance computed tomography by spline collocation method
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1990-01-01
A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
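As a hedged sketch of the second-kind formulation (a simple Nyström quadrature discretisation standing in for the paper's spline collocation; the kernel and data below are invented), an equation u - λKu = f discretises to a dense linear system:

import numpy as np

# Discretise u(x) - lam * int_0^1 K(x,y) u(y) dy = f(x) at n nodes.
n = 100
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = np.full(n, h)
w[0] = w[-1] = h / 2                             # trapezoidal weights
lam = 0.5
K = np.exp(-np.abs(x[:, None] - x[None, :]))     # hypothetical smooth kernel
f = np.sin(np.pi * x)                            # hypothetical right-hand side

A = np.eye(n) - lam * K * w[None, :]             # (I - lam*K*W) u = f
u = np.linalg.solve(A, f)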
Protecting software agents from malicious hosts using quantum computing
NASA Astrophysics Data System (ADS)
Reisner, John; Donkor, Eric
2000-07-01
We evaluate how quantum computing can be applied to security problems for software agents. Agent-based computing, which merges technological advances in artificial intelligence and mobile computing, is a rapidly growing domain, especially in applications such as electronic commerce, network management, information retrieval, and mission planning. System security is one of the more eminent research areas in agent-based computing, and the specific problem of protecting a mobile agent from a potentially hostile host is one of the most difficult of these challenges. In this work, we describe our agent model, and discuss the capabilities and limitations of classical solutions to the malicious host problem. Quantum computing may be extremely helpful in addressing the limitations of classical solutions to this problem. This paper highlights some of the areas where quantum computing could be applied to agent security.
Domain wall and isocurvature perturbation problems in axion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yoshino, Kazuyoshi; Yanagida, Tsutomu T., E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: tsutomu.tyanagida@ipmu.jp, E-mail: yoshino@icrr.u-tokyo.ac.jp
2013-11-01
Axion models have two serious cosmological problems: the domain wall and isocurvature perturbation problems. In order to solve these problems we investigate Linde's model, in which the field value of the Peccei-Quinn (PQ) scalar is large during inflation. In this model the fluctuations of the PQ field grow after inflation through parametric resonance, and stable axionic strings may be produced, which results in the domain wall problem. We study the formation of axionic strings using lattice simulations. It is found that in chaotic inflation the axion model is free from both the domain wall and the isocurvature perturbation problems if the initial misalignment angle θ_a is smaller than O(10^-2). Furthermore, axions can also account for the dark matter for the breaking scale v ≅ 10^(12-16) GeV and the Hubble parameter during inflation H_inf ≲ 10^(11-12) GeV in general inflation models.
Fast marching methods for the continuous traveling salesman problem
Andrews, June; Sethian, J. A.
2007-01-01
We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M·N log N), where M is the number of cities and N the size of the computational mesh used to approximate the solutions to the shortest-path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh. PMID:17220271
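A rough sketch of the heuristic's main ingredient, assuming the scikit-fmm package (not the authors' code): one fast-marching solve per city gives the travel cost from that city to every mesh point, from which a city-to-city cost matrix and a simple nearest-neighbour tour follow.

import numpy as np
import skfmm

N = 200
speed = np.ones((N, N))          # cost field: uniform here, spatially varying in general
cities = [(20, 30), (150, 60), (90, 170), (40, 120)]   # invented city locations

# Pairwise costs: one fast-marching solve per city (M solves of O(N log N)).
M = len(cities)
cost = np.zeros((M, M))
for i, (r, c) in enumerate(cities):
    phi = np.ones((N, N))
    phi[r, c] = -1.0             # zero contour of phi marks the source city
    t = skfmm.travel_time(phi, speed)
    for j, (rj, cj) in enumerate(cities):
        cost[i, j] = t[rj, cj]

# Greedy nearest-neighbour tour over the computed metric (a simple stand-in
# for the paper's heuristic).
tour, remaining = [0], set(range(1, M))
while remaining:
    nxt = min(remaining, key=lambda j: cost[tour[-1], j])
    tour.append(nxt)
    remaining.remove(nxt)
print(tour)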
The Knowledge Base of the Evaluation Domain.
ERIC Educational Resources Information Center
Seels, Barbara
This paper is concerned with defining evaluation as a domain in instructional technology, and with specifying the sub-areas of the domain. In education, evaluation is the process of determining the adequacy of instruction. It begins with problem analysis, which refers to determining the nature of the solution and the parameters of the problem. A…
A review of image quality assessment methods with application to computational photography
NASA Astrophysics Data System (ADS)
Maître, Henri
2015-12-01
Image quality assessment has long been of major importance for several domains of the imaging industry, for instance restoration or communication and coding. New application fields are opening today with the increase of embedded computing power in the camera and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation. We pay attention to the very different underlying hypotheses and results of the existing methods to approach the problem. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for a possible use in the novel domain of computational photography. Being developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal. We consider first the methods based on retrieving the parameters of a signal, mostly in spectral analysis; then we explore the more global methods that qualify image quality in terms of noticeable defects or degradation, as is popular in the compression domain; in a third field the image acquisition process is considered as a channel between the source and the receiver, allowing the tools of information theory to be used and the system to be qualified in terms of entropy and information capacity. However, these different approaches hardly address the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, at the crossroads of philosophy, biology and psychology, we propose a brief review of the literature on qualifying beauty, present the attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.
NASA Astrophysics Data System (ADS)
Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.
2015-06-01
The majority of practical acoustics problems require solving boundary value problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both attractive from the academic viewpoint and very instrumental for the elaboration of efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution strategies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.
Maldonado, José Alberto; Marcos, Mar; Fernández-Breis, Jesualdo Tomás; Parcero, Estíbaliz; Boscá, Diego; Legaz-García, María del Carmen; Martínez-Salvador, Begoña; Robles, Montserrat
2016-01-01
The heterogeneity of clinical data is a key problem in the sharing and reuse of Electronic Health Record (EHR) data. We approach this problem through the combined use of EHR standards and semantic web technologies, concretely by means of clinical data transformation applications that convert EHR data in proprietary format, first into clinical information models based on archetypes, and then into RDF/OWL extracts which can be used for automated reasoning. In this paper we describe a proof-of-concept platform to facilitate the (re)configuration of such clinical data transformation applications. The platform is built upon a number of web services dealing with transformations at different levels (such as normalization or abstraction), and relies on a collection of reusable mappings designed to solve specific transformation steps in a particular clinical domain. The platform has been used in the development of two different data transformation applications in the area of colorectal cancer. PMID:28269882
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
Survey of the status of finite element methods for partial differential equations
NASA Technical Reports Server (NTRS)
Temam, Roger
1986-01-01
The finite element methods (FEM) have proved to be a powerful technique for the solution of boundary value problems associated with partial differential equations of either elliptic, parabolic, or hyperbolic type. They also have a good potential for utilization on parallel computers particularly in relation to the concept of domain decomposition. This report is intended as an introduction to the FEM for the nonspecialist. It contains a survey which is totally nonexhaustive, and it also contains as an illustration, a report on some new results concerning two specific applications, namely a free boundary fluid-structure interaction problem and the Euler equations for inviscid flows.
PICsar: Particle in cell pulsar magnetosphere simulator
NASA Astrophysics Data System (ADS)
Belyaev, Mikhail A.
2016-07-01
PICsar simulates the magnetosphere of an aligned axisymmetric pulsar and can be used to simulate other arbitrary electromagnetics problems in axisymmetry. Written in Fortran, this special relativistic, electromagnetic, charge conservative particle in cell code features stretchable body-fitted coordinates that follow the surface of a sphere, simplifying the application of boundary conditions in the case of the aligned pulsar; a radiation absorbing outer boundary, which allows a steady state to be set up dynamically and maintained indefinitely from transient initial conditions; and algorithms for injection of charged particles into the simulation domain. PICsar is parallelized using MPI and has been used on research problems with 1000 CPUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing
Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain, larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including the selection of an optimal multicast domain sequence and the choice of the domains in which core nodes are placed. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms that assume a single-domain network environment.
Parallel iterative methods: Applications in neutronics and fluid mechanics
NASA Astrophysics Data System (ADS)
Qaddouri, Abdessamad
In this thesis, parallel computing is applied successively to neutronics and to fluid mechanics. In each of these two applications, iterative methods are used to solve the system of algebraic equations resulting from the discretization of the equations of the physical problem. In the neutronics problem, the computation of the collision probability (CP) matrices as well as a multigroup iterative scheme using an inverse power method are parallelized. In the fluid mechanics problem, a finite element code using a preconditioned GMRES-type iterative algorithm is parallelized. This thesis is presented as six articles followed by a conclusion. The first five articles deal with the neutronics applications and represent the evolution of our work in this domain. This evolution proceeds through a parallel computation of the CP matrices and a parallel multigroup algorithm tested on a one-dimensional problem (article 1), then through two parallel algorithms, one multiregion and one multigroup, tested on two-dimensional problems (articles 2-3). These first two stages are followed by the application of two acceleration techniques, neutron rebalancing and residual minimization, to the two parallel algorithms (article 4). Finally, the multigroup algorithm and the parallel computation of the CP matrices were implemented in the production code DRAGON, where the tests are more realistic and can be three-dimensional (article 5). The sixth article (article 6), devoted to the fluid mechanics application, deals with the parallelization of a finite element code, FES, in which the graph partitioner METIS and the PSPARSLIB library are used.
A usability evaluation of a SNOMED CT based compositional interface terminology for intensive care.
Bakhshi-Raiez, F; de Keizer, N F; Cornet, R; Dorrepaal, M; Dongelmans, D; Jaspers, M W M
2012-05-01
To evaluate the usability of a large compositional interface terminology based on SNOMED CT and the terminology application for registration of the reasons for intensive care admission in a Patient Data Management System. Observational study with user-based usability evaluations before and 3 months after the system was implemented and routinely used. Usability was defined by five aspects: effectiveness, efficiency, learnability, overall user satisfaction, and experienced usability problems. Qualitative (the Think-Aloud user testing method) and quantitative (the System Usability Scale questionnaire and Time-on-Task analyses) methods were used to examine these usability aspects. The results of the evaluation study revealed that the usability of the interface terminology fell short (SUS scores of 47.2 and 37.5 out of 100 before and after implementation, respectively). The qualitative measurements revealed a high number (n=35) of distinct usability problems, leading to ineffective and inefficient registration of reasons for admission. The effectiveness and efficiency of the system did not change over time. About 14% (n=5) of the revealed usability problems were related to the terminology content based on SNOMED CT, while the remaining 86% (n=30) were related to the terminology application. The problems related to the terminology content were more severe than the problems related to the terminology application. This study provides a detailed insight into how clinicians interact with a controlled compositional terminology through a terminology application. The extensiveness, complexity of the hierarchy, and the language usage of an interface terminology are defining for its usability. Carefully crafted domain-specific subsets and a well-designed terminology application are needed to facilitate the use of a complex compositional interface terminology based on SNOMED CT. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
Exact semi-separation of variables in waveguides with non-planar boundaries
NASA Astrophysics Data System (ADS)
Athanassoulis, G. A.; Papoutsellis, Ch. E.
2017-05-01
Series expansions of unknown fields Φ =∑φn Zn in elongated waveguides are commonly used in acoustics, optics, geophysics, water waves and other applications, in the context of coupled-mode theories (CMTs). The transverse functions Zn are determined by solving local Sturm-Liouville problems (reference waveguides). In most cases, the boundary conditions assigned to Zn cannot be compatible with the physical boundary conditions of Φ, leading to slowly convergent series, and rendering CMTs mild-slope approximations. In the present paper, the heuristic approach introduced in Athanassoulis & Belibassakis (Athanassoulis & Belibassakis 1999 J. Fluid Mech. 389, 275-301) is generalized and justified. It is proved that an appropriately enhanced series expansion becomes an exact, rapidly convergent representation of the field Φ, valid for any smooth, non-planar boundaries and any smooth enough Φ. This series expansion can be differentiated termwise everywhere in the domain, including the boundaries, implementing an exact semi-separation of variables for non-separable domains. The efficiency of the method is illustrated by solving a boundary value problem for the Laplace equation, and computing the corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations for nonlinear water waves. The present method provides accurate results with only a few modes for quite general domains. Extensions to general waveguides are also discussed.
Evaluation of research in biomedical ontologies
Dumontier, Michel; Gkoutos, Georgios V.
2013-01-01
Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the desired applications in which ontologies are being applied, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility for systematically improving the utility of ontologies in biomedical research. PMID:22962340
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
Pangalila, Robert F; van den Bos, Geertrudis A M; Bartels, Bart; Bergen, Michael P; Kampelmacher, Mike J; Stam, Henk J; Roebroeck, Marij E
2015-02-01
To assess quality of life of adults with Duchenne muscular dystrophy in the Netherlands and to identify domains and major problems influencing quality of life. Cross-sectional. Seventy-nine men aged ≥ 20 years with Duchenne muscular dystrophy. The Medical Outcome Study Short Form-36 (SF-36), World Health Organization Quality of Life - BREF (WHOQOL-BREF) and an interview were used to assess quality of life and problems. Compared with Dutch general population reference values, the SF-36 domains scores were lower on all domains except mental health and role limitations due to emotional problems. On the WHOQOL-BREF the social relationships domain score was lower. Main problems were intimate relationships, work, leisure, transport and meaningfulness of life. Seventy-three percent stated overall quality of life as "(very) good". The SF-36 domains mental health (rs 0.53, p < 0.001) and vitality (rs 0.49, p < 0.001) had the strongest associations with overall quality of life. Adult men with Duchenne muscular dystrophy assess their health status as low in the physical, but not in the mental, domains. Experienced problems are mainly in the area of participation. They are generally satisfied with their overall quality of life.
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven S.
1996-01-01
This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.
An equivalent domain integral for analysis of two-dimensional mixed mode problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1989-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.
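For reference, a commonly used form of the equivalent domain integral (stated from the standard J-integral literature, not quoted from the paper) converts the contour integral into an integral over an annular domain A around the crack tip, for a crack lying along x_1:

J = \int_A \left( \sigma_{ij}\,\frac{\partial u_i}{\partial x_1} - W\,\delta_{1j} \right) \frac{\partial q}{\partial x_j}\,\mathrm{d}A ,

where W is the strain-energy density and q is a smooth weight function equal to 1 on the inner boundary of A and 0 on its outer boundary; mode separation then applies this integral to the symmetric and antisymmetric parts of the stress and displacement fields.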
Predicting protein-protein interactions from protein domains using a set cover approach.
Huang, Chengbang; Morcos, Faruck; Kanaan, Simon P; Wuchty, Stefan; Chen, Danny Z; Izaguirre, Jesús A
2007-01-01
One goal of contemporary proteome research is the elucidation of cellular protein interactions. Based on currently available protein-protein interaction and domain data, we introduce a novel method, Maximum Specificity Set Cover (MSSC), for the prediction of protein-protein interactions. In our approach, we map the relationship between interactions of proteins and their corresponding domain architectures to a generalized weighted set cover problem. The application of a greedy algorithm provides sets of domain interactions which explain the presence of protein interactions to the largest degree of specificity. Utilizing domain and protein interaction data of S. cerevisiae, MSSC enables prediction of previously unknown protein interactions, links that are well supported by a high tendency of coexpression and functional homogeneity of the corresponding proteins. Focusing on concrete examples, we show that MSSC reliably predicts protein interactions in well-studied molecular systems, such as the 26S proteasome and RNA polymerase II of S. cerevisiae. We also show that the quality of the predictions is comparable to the Maximum Likelihood Estimation while MSSC is faster. This new algorithm and all data sets used are accessible through a Web portal at http://ppi.cse.nd.edu.
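A toy version of the greedy step, with invented data and an illustrative specificity score (MSSC's exact objective is not reproduced here): each candidate domain-domain interaction covers the protein pairs it can explain, and the greedy loop repeatedly picks the most specific remaining candidate.

def greedy_cover(interactions, domain_pairs):
    """interactions: set of observed protein pairs; domain_pairs: dict mapping
    a candidate domain-domain interaction to the set of protein pairs it
    explains. Returns the chosen domain pairs (illustrative scoring)."""
    uncovered = set(interactions)
    chosen = []
    while uncovered:
        # Specificity here = fraction of a candidate's explained interactions
        # that are still uncovered (a stand-in for MSSC's score).
        best = max(domain_pairs,
                   key=lambda d: len(domain_pairs[d] & uncovered) / len(domain_pairs[d]))
        gain = domain_pairs[best] & uncovered
        if not gain:
            break                      # remaining interactions are unexplainable
        chosen.append(best)
        uncovered -= gain
    return chosen

ppi = {("P1", "P2"), ("P2", "P3"), ("P1", "P3")}
dd = {("D1", "D2"): {("P1", "P2"), ("P1", "P3")},
      ("D3", "D3"): {("P2", "P3")}}
print(greedy_cover(ppi, dd))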
The Challenges of Human-Autonomy Teaming
NASA Technical Reports Server (NTRS)
Vera, Alonso
2017-01-01
Machine intelligence is improving rapidly based on advances in big data analytics, deep learning algorithms, networked operations, and continuing exponential growth in computing power (Moore's Law). This growth in the power and applicability of increasingly intelligent systems will change the roles of humans, shifting them to tasks where adaptive problem solving, reasoning, and decision-making are required. This talk will address the challenges involved in engineering autonomous systems that function effectively with humans in aeronautics domains.
Adjoint-based optimization of PDEs in moving domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Liao, Wenyuan
2008-02-01
In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
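As a minimal fixed-domain illustration of the adjoint construction (a sketch from standard optimal-control theory; the paper's contribution concerns the moving-domain case, which this sketch deliberately omits), consider distributed control f of the 1D heat equation with a terminal tracking cost:

u_t = u_{xx} + f \text{ on } (0,L)\times(0,T), \qquad u(0,t) = u(L,t) = 0, \qquad J(f) = \tfrac{1}{2}\int_0^L \big(u(x,T) - u_d(x)\big)^2\,\mathrm{d}x .

The adjoint problem runs backwards in time,

-p_t = p_{xx}, \qquad p(x,T) = u(x,T) - u_d(x), \qquad p(0,t) = p(L,t) = 0,

and the gradient of the cost with respect to the control is simply \nabla J(f)(x,t) = p(x,t).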
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
In this paper, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain methods. In each FDTD cell that contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
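Purely as an illustration of the per-cell solve described above (the circuit constants and the incident-current term are hypothetical, not the paper's formulation), a diode-like load I = Is (exp(V/Vt) - 1) in a cell leads to a scalar Newton iteration at each time step:

import math

def solve_cell_voltage(I_inc, C=1e-12, dt=1e-12, Is=1e-14, Vt=0.026,
                       V0=0.0, tol=1e-12, max_iter=50):
    """Newton iteration for f(V) = C*(V - V0)/dt + Is*(exp(V/Vt) - 1) - I_inc = 0,
    a hypothetical lumped-cell balance between capacitive displacement current,
    diode current, and the driving current I_inc from the FDTD update."""
    V = V0
    for _ in range(max_iter):
        f = C * (V - V0) / dt + Is * (math.exp(V / Vt) - 1.0) - I_inc
        df = C / dt + (Is / Vt) * math.exp(V / Vt)
        step = f / df
        V -= step
        if abs(step) < tol:
            break
    return V

print(solve_cell_voltage(I_inc=1e-3))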
Saad, Tony; Sutherland, James C.
2016-05-04
To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.
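A bare-bones sketch of the DAG idea using Python's standard-library graphlib (task and field names are invented; the framework described in the paper is far more capable): tasks declare the fields they require, and a topological sort generates the execution order at runtime.

from graphlib import TopologicalSorter

tasks = {
    "density":     {"requires": [],                      "run": lambda s: s.update(rho=1.2)},
    "velocity":    {"requires": [],                      "run": lambda s: s.update(u=3.0)},
    "momentum":    {"requires": ["density", "velocity"], "run": lambda s: s.update(mom=s["rho"] * s["u"])},
    "diagnostics": {"requires": ["momentum"],            "run": lambda s: print("mom =", s["mom"])},
}

# Map each task to its predecessors; static_order() yields a valid schedule.
graph = {name: set(t["requires"]) for name, t in tasks.items()}
state = {}
for name in TopologicalSorter(graph).static_order():
    tasks[name]["run"](state)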
NASA Astrophysics Data System (ADS)
Adem, Abdullahi Rashid; Moawad, Salah M.
2018-05-01
In this paper, the steady-state equations of ideal magnetohydrodynamic incompressible flows in axisymmetric domains are investigated. These flows are governed by a second-order elliptic partial differential equation as a type of generalized Grad-Shafranov equation. The problem of finding exact equilibria to the full governing equations in the presence of incompressible mass flows is considered. Two different types of constraints on position variables are presented to construct exact solution classes for several nonlinear cases of the governing equations. Some of the obtained results are checked for their applications to magnetic confinement plasma. Besides, they cover many previous configurations and include new considerations about the nonlinearity of magnetic flux stream variables.
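For orientation, the static axisymmetric Grad-Shafranov equation that the paper's flow-generalized equation extends can be written (standard form, not quoted from the paper) as

\Delta^* \psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right) + \frac{\partial^2 \psi}{\partial Z^2} = -\mu_0 R^2\,\frac{\mathrm{d}p}{\mathrm{d}\psi} - F\,\frac{\mathrm{d}F}{\mathrm{d}\psi} ,

where \psi(R,Z) is the poloidal flux function, p(\psi) the pressure and F(\psi) = R B_\phi; incompressible mass flows add further flux-function terms to the right-hand side.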
Sequentially Executed Model Evaluation Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.
Algorithms for the automatic generation of 2-D structured multi-block grids
NASA Technical Reports Server (NTRS)
Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.
1995-01-01
Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
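As a sketch of the per-block meshing step (a generic Coons-patch transfinite interpolation with invented boundary curves, not the authors' implementation, and omitting the elliptic smoothing pass):

import numpy as np

def tfi(bottom, top, left, right):
    """Transfinite interpolation of a block interior from its four boundary
    curves. bottom/top: (n,2) arrays along u; left/right: (m,2) arrays along v.
    Corner points of the curves must coincide."""
    n, m = bottom.shape[0], left.shape[0]
    u = np.linspace(0, 1, n)[None, :, None]
    v = np.linspace(0, 1, m)[:, None, None]
    B, T = bottom[None, :, :], top[None, :, :]
    L, R = left[:, None, :], right[:, None, :]
    P00, P10, P01, P11 = bottom[0], bottom[-1], top[0], top[-1]
    return ((1 - v) * B + v * T + (1 - u) * L + u * R
            - ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
               + (1 - u) * v * P01 + u * v * P11))

n = m = 5
s = np.linspace(0, 1, n)
bottom = np.stack([s, np.zeros(n)], axis=1)
top    = np.stack([s, 1.0 + 0.1 * np.sin(np.pi * s)], axis=1)  # curved upper edge
left   = np.stack([np.zeros(m), np.linspace(0, 1, m)], axis=1)
right  = np.stack([np.ones(m), np.linspace(0, 1, m)], axis=1)
grid = tfi(bottom, top, left, right)   # shape (m, n, 2): interior grid points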
TeleMed: Wide-area, secure, collaborative object computing with Java and CORBA for healthcare
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; George, J.E.; Gavrilov, E.M.
1998-12-31
Distributed computing is becoming commonplace in a variety of industries, with healthcare being a particularly important one for society. The authors describe the development and deployment of TeleMed in a few healthcare domains. TeleMed is a 100% Java distributed application built on CORBA and OMG standards, enabling collaboration on the treatment of chronically ill patients in a secure manner over the Internet. These standards enable other systems to work interoperably with TeleMed and provide transparent access to high-performance distributed computing for the healthcare domain. The goal of wide-scale integration of electronic medical records is a grand-challenge-scale problem of global proportions with far-reaching social benefits.
Detecting people of interest from internet data sources
NASA Astrophysics Data System (ADS)
Cardillo, Raymond A.; Salerno, John J.
2006-04-01
In previous papers, we have documented success in determining the key people of interest from a large corpus of real-world evidence. Our recent efforts focus on exploring additional domains and data sources. Internet data sources such as email, web pages, and news feeds make it easier to gather a large corpus of documents for various domains, but detecting people of interest in these sources introduces new challenges. Analyzing these massive sources magnifies entity resolution problems, and demands a storage management strategy that supports efficient algorithmic analysis and visualization techniques. This paper discusses the techniques we used in order to analyze the ENRON email repository, which are also applicable to analyzing web pages returned from our "Buddy" meta-search engine.
Applang - A DSL for specification of mobile applications for android platform based on textX
NASA Astrophysics Data System (ADS)
Kosanović, Milan; Dejanović, Igor; Milosavljević, Gordana
2016-06-01
Mobile platforms have become a ubiquitous part of our daily lives, putting more pressure on software developers to develop more applications faster and with support for different mobile operating systems. To foster faster development of mobile services and applications and to support various mobile operating systems, new software development approaches must be adopted. Domain-Specific Languages (DSLs) are a viable approach that promises to solve the problem of target-platform diversity as well as to facilitate rapid application development and shorter time-to-market. This paper presents Applang, a DSL for the specification of mobile applications for the Android platform, based on the textX meta-language. The application is described using the Applang DSL and the source code for a target platform is automatically generated by the provided code generator. The same application defined from a single Applang source can be transformed to various targets with little or no manual modification.
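A toy grammar in the spirit of Applang, using the real textX API but with invented rule and keyword names (the actual Applang grammar is not reproduced here):

from textx import metamodel_from_str

grammar = """
App: 'app' name=ID '{' screens+=Screen '}';
Screen: 'screen' name=ID ('title' title=STRING)?;
"""

mm = metamodel_from_str(grammar)
model = mm.model_from_str("""
app Demo {
    screen Main title "Hello"
    screen Settings
}
""")

# A code generator would now walk the model and emit Android sources.
for screen in model.screens:
    print(screen.name, screen.title)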
Constraint-based integration of planning and scheduling for space-based observatory management
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven F.
1994-01-01
Progress toward the development of effective, practical solutions to space-based observatory scheduling problems within the HSTS scheduling framework is reported. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) short-term observation scheduling problem. The work was motivated by the limitations of the current solution and, more generally, by the insufficiency of classical planning and scheduling approaches in this problem context. HSTS has subsequently been used to develop improved heuristic solution techniques in related scheduling domains and is currently being applied to develop a scheduling tool for the upcoming Submillimeter Wave Astronomy Satellite (SWAS) mission. The salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research are summarized. Then, some key problem decomposition techniques underlying the integrated planning and scheduling approach to the HST problem are described; research results indicate that these techniques provide leverage in solving space-based observatory scheduling problems. Finally, more recently developed constraint-posting scheduling procedures and the current SWAS application focus are summarized.
NASA Astrophysics Data System (ADS)
Kim, Sungwon
Ferroelectric LiNbO3 and LiTaO3 crystals have developed over the last 50 years as key materials for integrated and nonlinear optics due to their large electro-optic and nonlinear optical coefficients and a broad transparency range from 0.4 µm to 4.5 µm wavelengths. Applications include high-speed optical modulation and switching in the 40 GHz range, second harmonic generation, optical parametric amplification, pulse compression and so on. Ferroelectric domain microengineering has led to electro-optic scanners, dynamic focusing lenses, total internal reflection switches, and quasi-phase-matched (QPM) frequency doublers. Most of these applications have so far been on non-stoichiometric compositions of these crystals. Recent breakthroughs in crystal growth have however opened up an entirely new window of opportunity from both the scientific and technological viewpoints. The growth of stoichiometric-composition crystals has led to the discovery of many fascinating effects arising from the presence or absence of atomic defects, such as order-of-magnitude changes in coercive fields, internal fields, domain backswitching and stabilization phenomena. On the nanoscale, unexpected features such as the presence of wide regions of optical contrast and strain have been discovered at 180° domain walls. Such strong influence of small amounts of nonstoichiometric defects on material properties has led to new device applications, particularly those involving domain patterning and shaping, such as QPM devices in thick bulk crystals and compositions with improved resistance to photorefractive damage. The central focus of this dissertation is to explore the role of nonstoichiometry and its precise influence on macroscale and nanoscale properties in lithium niobate and tantalate. Macroscale properties are studied using a combination of in-situ and high-speed electro-optic imaging microscopy and electrical switching experiments. Local static and dynamic strain properties at individual domain walls are studied using X-ray synchrotron imaging with and without in-situ electric fields. Nanoscale optical properties are studied using Near-Field Scanning Optical Microscopy (NSOM). Finite Difference Time Domain (FDTD) codes, Beam Propagation Method (BPM) codes and X-ray tracing codes have been developed to successfully simulate NSOM images and X-ray topography images to extract the local optical and strain properties, respectively. A 3-D ferroelectric domain simulation code based on Time-Dependent Ginzburg-Landau (TDGL) theory and group theory has been developed to understand the nature of these local wall strains and the preferred wall orientations. By combining these experimental and numerical tools, we have also proposed a defect-dipole model and a mechanism by which the defects interact with the domain walls. This thesis has thus built a more comprehensive picture of the influence of defects on domain walls on the nanoscale and macroscale, and raises new scientific questions about the exact nature of domain wall-defect interactions. Beyond the specific problem of ferroelectrics, the experimental and simulation tools developed in this thesis will have wider application in the area of materials science.
Evaluation of coupling approaches for thermomechanical simulations
Novascone, S. R.; Spencer, B. W.; Hales, J. D.; ...
2015-08-10
Many problems of interest, particularly in the nuclear engineering field, involve coupling between the thermal and mechanical response of an engineered system. The strength of the two-way feedback between the thermal and mechanical solution fields can vary significantly depending on the problem. Contact problems exhibit a particularly high degree of two-way feedback between those fields. This paper describes and demonstrates the application of a flexible simulation environment that permits the solution of coupled physics problems using either a tightly coupled approach or a loosely coupled approach. In the tight coupling approach, Newton iterations include the coupling effects between all physics, while in the loosely coupled approach, the individual physics models are solved independently, and fixed-point iterations are performed until the coupled system is converged. These approaches are applied to simple demonstration problems and to realistic nuclear engineering applications. The demonstration problems consist of single and multi-domain thermomechanics with and without thermal and mechanical contact. Simulations of a reactor pressure vessel under pressurized thermal shock conditions and a simulation of light water reactor fuel are also presented. Here, problems that include thermal and mechanical contact, such as the contact between the fuel and cladding in the fuel simulation, exhibit much stronger two-way feedback between the thermal and mechanical solutions, and as a result, are better solved using a tight coupling strategy.
Description of waves in inhomogeneous domains using Heun's equation
NASA Astrophysics Data System (ADS)
Bednarik, M.; Cervenka, M.
2018-04-01
There are a number of model equations describing electromagnetic, acoustic or quantum waves in inhomogeneous domains, and some of them are of the same type from the mathematical point of view. This isomorphism enables us to use a unified approach to solving the corresponding equations. In this paper, the inhomogeneity is represented by a trigonometric spatial distribution of a parameter determining the properties of an inhomogeneous domain. From the point of view of modeling, this trigonometric parameter function can be smoothly connected to neighboring constant-parameter regions. For this type of distribution, exact local solutions of the model equations are represented by the local Heun functions. Because the interval on which the solution is sought includes two regular singular points, a method is proposed that resolves this problem based only on the local Heun functions. Further, the transfer matrix for the considered inhomogeneous domain is determined by means of the proposed method. As an example of the applicability of the presented solutions, the transmission coefficient is calculated for a locally periodic structure given by an array of asymmetric barriers.
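For reference, the general Heun equation, whose Frobenius solutions about a singular point are the local Heun functions, reads (standard form)

\frac{\mathrm{d}^2 y}{\mathrm{d}z^2} + \left( \frac{\gamma}{z} + \frac{\delta}{z-1} + \frac{\epsilon}{z-a} \right)\frac{\mathrm{d}y}{\mathrm{d}z} + \frac{\alpha\beta z - q}{z(z-1)(z-a)}\,y = 0 , \qquad \alpha + \beta + 1 = \gamma + \delta + \epsilon ,

with regular singular points at z = 0, 1, a and \infty; the accessory parameter q is what separates Heun's equation from the hypergeometric case and makes purely local solutions the natural building blocks.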
Beam position monitor engineering
NASA Astrophysics Data System (ADS)
Smith, Stephen R.
1997-01-01
The design of beam position monitors often involves challenging system design choices. Position transducers must be robust, accurate, and generate adequate position signal without unduly disturbing the beam. Electronics must be reliable and affordable, usually while meeting tough requirements on precision, accuracy, and dynamic range. These requirements may be difficult to achieve simultaneously, leading the designer into interesting opportunities for optimization or compromise. Some useful techniques and tools are shown. Both finite element analysis and analytic techniques will be used to investigate quasi-static aspects of electromagnetic fields such as the impedance of and the coupling of beam to striplines or buttons. Finite-element tools will be used to understand dynamic aspects of the electromagnetic fields of beams, such as wake fields and transmission-line and cavity effects in vacuum-to-air feedthroughs. Mathematical modeling of electrical signals through a processing chain will be demonstrated, in particular to illuminate areas where neither a pure time-domain nor a pure frequency-domain analysis is obviously advantageous. Emphasis will be on calculational techniques, in particular on using both time domain and frequency domain approaches to the applicable parts of interesting problems.
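As a worked example of the signal processing that underlies most position measurements of this kind (a textbook difference-over-sum relation, not specific to this paper),

x \approx k_x\,\frac{V_A - V_B}{V_A + V_B} ,

where V_A and V_B are the signals from two opposing electrodes and k_x is a geometry-dependent sensitivity; normalizing by the sum removes the bunch-charge dependence to first order, which is one reason precision, accuracy, and dynamic range trade off against each other in the processing electronics.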
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
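A miniature spatial-domain inversion of the kind described, using scipy's active-set NNLS as a stand-in for the TNT solver (the dipole kernel, grids, and source placement below are simplified assumptions):

import numpy as np
from scipy.optimize import nnls

# Vertical-field kernel of a vertical dipole a height h below each sensor
# (a simplified point-dipole model; the real microscopy kernel includes
# sensor geometry and lift-off effects).
def bz_kernel(sx, sy, mx, my, h=1.0):
    dx, dy = sx - mx, sy - my
    r2 = dx**2 + dy**2 + h**2
    return (2 * h**2 - dx**2 - dy**2) / r2**2.5

src = np.array([(i, j) for i in range(8) for j in range(8)], float)  # moment grid
sens = src + 0.5                                                     # offset sensor grid
A = np.array([[bz_kernel(s[0], s[1], m[0], m[1]) for m in src] for s in sens])

m_true = np.zeros(len(src))
m_true[[10, 42]] = [2.0, 1.0]          # two hypothetical sources
b = A @ m_true                         # synthetic field map
m_est, resid = nnls(A, b)              # non-negative moment magnitudes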
2014-01-01
With smartphone distribution becoming common and robotic applications on the rise, social tagging services for various applications, including robotic domains, have advanced significantly. Though social tagging plays an important role when users are finding exact information through web search, the reliability of tags and the semantic relation between web contents and tags are not considered. Spammers make ill use of this aspect, deliberately putting irrelevant tags on contents to induce users toward advertised contents when they click items in search results. Therefore, this study proposes a detection method for tag-ranking manipulation to solve the problem of existing methods, which cannot guarantee the reliability of tagging. Similarity is measured to rank the grade of the tags registered on the contents, and weighted values of each tag are measured by means of synonym relevance, frequency, and semantic distances between tags. Lastly, experimental evaluation results are provided and the method's efficiency and accuracy are verified through them. PMID:25114975
Musiat, Peter; Goldstone, Philip; Tarrier, Nicholas
2014-04-11
E-mental health and m-mental health include the use of technology in the prevention, treatment and aftercare of mental health problems. With the economical pressure on mental health services increasing, e-mental health and m-mental health could bridge treatment gaps, reduce waiting times for patients and deliver interventions at lower costs. However, despite the existence of numerous effective interventions, the transition of computerised interventions into care is slow. The aim of the present study was to investigate the acceptability of e-mental health and m-mental health in the general population. An advisory group of service users identified dimensions that potentially influence an individual's decision to engage with a particular treatment for mental health problems. A large sample (N = 490) recruited through email, flyers and social media was asked to rate the acceptability of different treatment options for mental health problems on these domains. Results were analysed using repeated measures MANOVA. Participants rated the perceived helpfulness of an intervention, the ability to motivate users, intervention credibility, and immediate access without waiting time as most important dimensions with regard to engaging with a treatment for mental health problems. Participants expected face-to-face therapy to meet their needs on most of these dimensions. Computerised treatments and smartphone applications for mental health were reported to not meet participants' expectations on most domains. However, these interventions scored higher than face-to-face treatments on domains associated with the convenience of access. Overall, participants reported a very low likelihood of using computerised treatments for mental health in the future. Individuals in this study expressed negative views about computerised self-help intervention and low likelihood of use in the future. To improve the implementation and uptake, policy makers need to improve the public perception of such interventions.
NASA Astrophysics Data System (ADS)
Arora, Shitij; Fourment, Lionel
2018-05-01
In the context of simulating industrial hot forming processes, the resultant time-dependent thermo-mechanical multi-field problem (v, p, σ, ε) can be sped up by a factor of 10-50 using steady-state methods compared with conventional incremental methods. Steady-state techniques have been used in the past, but only on simple configurations with structured meshes, whereas modern problems involve complex configurations, unstructured meshes and parallel computing. These methods remove the time dependency from the equations but introduce an additional unknown into the problem: the steady-state shape. This steady-state shape x can be computed as a geometric correction t on the domain X by solving the weak form of the steady-state equation v · n(t) = 0 using a Streamline Upwind Petrov-Galerkin (SUPG) formulation. There is a strong coupling between the domain shape and the material flow; hence, a two-step fixed-point iterative resolution algorithm was proposed that involves (1) computing the flow field by solving the thermo-mechanical equations on a prescribed domain shape, and (2) computing the steady-state shape for an assumed velocity field. The contact equations are introduced in penalty form both during the flow computation and during the free-surface correction. The fact that the contact description is inhomogeneous, i.e., defined in nodal form in the former and in weighted-residual form in the latter, is believed to be critical to the convergence of certain problems. Thus, the notion of nodal collocation is invoked in the weak form of the surface-correction equation to homogenize the contact coupling. The surface-correction algorithm is tested on analytical test cases, and the contact coupling is tested on hot rolling problems.
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1989-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are given for isoparametric elements. The total and product integrals consist of the sum of an area (domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is pure mode I, pure mode II, or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed-mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. When applied to a problem of an interface crack between two different materials, the EDI method showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus behaves similarly to the virtual crack closure method for bimaterial problems.
Examining, Documenting, and Modeling the Problem Space of a Variable Domain
2002-06-14
The development of this proposed process draws on existing methods of domain engineering, including Feature-Oriented Domain Analysis (FODA), Organization Domain Modeling (ODM), and family-oriented approaches, as well as the representation of configuration knowledge using generators.
A case study in programming a quantum annealer for hard operational planning problems
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.
2015-01-01
We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.
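As a minimal illustration of the kind of QUBO mapping described, the sketch below encodes a toy scheduling constraint, a one-hot slot assignment per task plus one conflict term, and solves it by brute force; the problem sizes and penalty weights are illustrative assumptions.

```python
import itertools
import numpy as np

# Toy QUBO mapping for a scheduling-type constraint: x[t, s] = 1 if task t is
# assigned slot s. The one-hot constraint sum_s x[t, s] = 1 is encoded by the
# penalty (1 - sum_s x[t, s])^2, whose expansion (dropping the constant) gives
# -1 on the diagonal and +2 on same-task pairs of the QUBO matrix Q.
T, S = 2, 3                      # 2 tasks, 3 slots (toy sizes)
n = T * S
idx = lambda t, s: t * S + s
Q = np.zeros((n, n))

for t in range(T):
    for s in range(S):
        Q[idx(t, s), idx(t, s)] += -1.0         # linear part of the penalty
        for s2 in range(s + 1, S):
            Q[idx(t, s), idx(t, s2)] += 2.0     # quadratic part
Q[idx(0, 0), idx(1, 0)] += 2.0   # toy conflict: both tasks cannot share slot 0

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
print("best assignment (tasks x slots):")
print(np.asarray(best).reshape(T, S))
```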
ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations
NASA Astrophysics Data System (ADS)
Merkel, M.; Niyonzima, I.; Schöps, S.
2017-12-01
Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in the time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
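The splitting at the heart of ParaExp can be sketched on a small linear ODE: each subinterval contributes a zero-initial-condition particular solution (the part ParaExp runs in parallel), while initial conditions are propagated by the matrix exponential. The system and source below are toy assumptions, and the loop is run serially for clarity.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# ParaExp-style splitting for x'(t) = A x(t) + f(t): per subinterval, a
# particular solution with zero initial condition plus homogeneous
# propagation of the incoming state by the matrix exponential.
A = np.array([[0.0, 1.0], [-4.0, 0.0]])        # toy oscillator (assumed)
f = lambda t: np.array([0.0, np.sin(3 * t)])
x = np.array([1.0, 0.0])
t_split = [0.0, 0.5, 1.0]                      # two subintervals

for a, b in zip(t_split[:-1], t_split[1:]):
    part = solve_ivp(lambda t, y: A @ y + f(t), (a, b),
                     np.zeros(2), rtol=1e-10, atol=1e-12).y[:, -1]
    x = expm(A * (b - a)) @ x + part           # homogeneous + particular
print("x(1) via splitting:", x)

# reference: one serial integration over the whole interval
ref = solve_ivp(lambda t, y: A @ y + f(t), (0.0, 1.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12).y[:, -1]
print("x(1) serial:       ", ref)
```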
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here is not only applicable to this problem but can also be used to solve general unstructured finite element systems on a wide range of parallel architectures.
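A minimal sketch of the preconditioned iteration follows, using a one-level algebraic block-Jacobi (additive Schwarz) preconditioner for CG on a 1-D model matrix; unlike the solver described above it has no coarse space, so it illustrates the structure rather than the robustness. Sizes and block count are assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# One-level algebraically determined block-Jacobi (additive Schwarz)
# preconditioner for CG on a 1-D Laplacian.
n, nblocks = 400, 8
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

bounds = np.linspace(0, n, nblocks + 1, dtype=int)
lus = [spla.splu(A[lo:hi, lo:hi].tocsc())
       for lo, hi in zip(bounds[:-1], bounds[1:])]

def schwarz(r):
    z = np.zeros_like(r)
    for (lo, hi), lu in zip(zip(bounds[:-1], bounds[1:]), lus):
        z[lo:hi] += lu.solve(r[lo:hi])   # independent local solves, summed
    return z

M = spla.LinearOperator((n, n), matvec=schwarz)
x, info = spla.cg(A, b, M=M)
print("info:", info, "residual:", np.linalg.norm(A @ x - b))
```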
NASA Astrophysics Data System (ADS)
Magee, Daniel J.; Niemeyer, Kyle E.
2018-03-01
The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× over a range of problem sizes compared with simple GPU versions, and 7-300× compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.
1999-11-01
A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes able to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
ERIC Educational Resources Information Center
Kostousov, Sergei; Kudryavtsev, Dmitry
2017-01-01
Problem solving is a critical competency for the modern world and also an effective way of learning. Education should not only transfer domain-specific knowledge to students, but also prepare them to solve real-life problems--to apply knowledge from one or several domains within a specific situation. Problem solving as a teaching tool is known for a long…
It's all about gains: Risk preferences in problem gambling.
Ring, Patrick; Probst, Catharina C; Neyse, Levent; Wolff, Stephan; Kaernbach, Christian; van Eimeren, Thilo; Camerer, Colin F; Schmidt, Ulrich
2018-06-07
Problem gambling is a serious socioeconomic problem involving high individual and social costs. In this article, we study the risk preferences of problem gamblers, including their risk attitudes in the gain and loss domains, their weighting of probabilities, and their degree of loss aversion. Our findings indicate that problem gamblers are systematically more risk-taking and less sensitive toward changes in probabilities in the gain domain only. Neither their risk attitudes in the loss domain nor their degree of loss aversion differ significantly from the controls. Additional evidence for a similar degree of sensitivity toward negative outcomes comes from skin conductance data, a psychophysiological marker for emotional arousal, in a threat-of-shock task. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
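The quantities estimated in studies of this kind can be sketched with a standard prospect-theory parameterization; the parameter values below are illustrative defaults, not this study's estimates. A flatter (lower-gamma) Prelec weighting function reproduces the reduced probability sensitivity reported for the gain domain.

```python
import numpy as np

# Prospect-theory sketch: power utility, Prelec probability weighting, and
# loss aversion lambda. Parameter values are illustrative only.
def prelec(p, gamma):
    """Prelec weighting; smaller gamma flattens sensitivity to probabilities."""
    return np.exp(-(-np.log(p)) ** gamma)

def pt_value(outcome, p, alpha=0.88, lam=2.25, gamma=0.65):
    """Subjective value of a gamble paying `outcome` with probability p, else 0."""
    u = outcome**alpha if outcome >= 0 else -lam * (-outcome) ** alpha
    return prelec(p, gamma) * u

# Lower gamma compresses the felt difference between a 40% and a 60% chance
# of winning, i.e., reduced probability sensitivity in the gain domain:
for gamma in (0.9, 0.5):
    w40, w60 = prelec(0.4, gamma), prelec(0.6, gamma)
    print(f"gamma={gamma}: w(0.4)={w40:.3f}  w(0.6)={w60:.3f}  diff={w60 - w40:.3f}")
print("value of (100, p=0.5):", round(pt_value(100, 0.5), 2))
```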
Modeling Real-Time Applications with Reusable Design Patterns
NASA Astrophysics Data System (ADS)
Rekhis, Saoussen; Bouassida, Nadia; Bouaziz, Rafik
Real-Time (RT) applications, which manipulate large volumes of data, need to be managed with RT databases that deal with time-constrained data and time-constrained transactions. In spite of their numerous advantages, RT database development remains a complex task, since developers must address many design issues related to the RT domain. In this paper, we tackle this problem by proposing RT design patterns that allow the modeling of structural and behavioral aspects of RT databases. We show how RT design patterns can provide design assistance through architecture reuse for recurring design problems. In addition, we present a UML profile that represents the patterns and further facilitates their reuse. This profile proposes, on the one hand, UML extensions for modeling the variability of patterns in the RT context and, on the other hand, extensions inspired by the MARTE (Modeling and Analysis of Real-Time Embedded systems) profile.
Flutter analysis using transversality theory
NASA Technical Reports Server (NTRS)
Afolabi, D.
1993-01-01
A new method of calculating flutter boundaries of undamped aeronautical structures is presented. The method is an application of the weak transversality theorem used in catastrophe theory. In the first instance, the flutter problem is cast in matrix form using a frequency domain method, leading to an eigenvalue matrix. The characteristic polynomial resulting from this matrix usually has a smooth dependence on the system's parameters. As these parameters change with operating conditions, certain critical values are reached at which flutter sets in. Our approach is to use the transversality theorem in locating such flutter boundaries using this criterion: at a flutter boundary, the characteristic polynomial does not intersect the axis of the abscissa transversally. Formulas for computing the flutter boundaries and flutter frequencies of structures with two degrees of freedom are presented, and extension to multi-degree of freedom systems is indicated. The formulas have obvious applications in, for instance, problems of panel flutter at supersonic Mach numbers.
Non-Boolean computing with nanomagnets for computer vision applications
NASA Astrophysics Data System (ADS)
Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep
2016-02-01
The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the computationally expensive quadratic optimization problems that arise in computer vision applications. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
Birth-jump processes and application to forest fire spotting.
Hillen, T; Greese, B; Martin, J; de Vries, G
2015-01-01
Birth-jump models are designed to describe population models for which growth and spatial spread cannot be decoupled. A birth-jump model is a nonlinear integro-differential equation. We present two different derivations of this equation, one based on a random walk approach and the other based on a two-compartmental reaction-diffusion model. In the case that the redistribution kernels are highly concentrated, we show that the integro-differential equation can be approximated by a reaction-diffusion equation, in which the proliferation rate contributes to both the diffusion term and the reaction term. We completely solve the corresponding critical domain size problem and the minimal wave speed problem. Birth-jump models can be applied in many areas in mathematical biology. We highlight an application of our results in the context of forest fire spread through spotting. We show that spotting increases the invasion speed of a forest fire front.
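A minimal numerical sketch of a birth-jump time step follows, with diffusion plus logistic growth redistributed by a narrow Gaussian kernel on a hostile (zero) boundary; all parameters and the growth law are illustrative assumptions.

```python
import numpy as np

# One explicit time-stepping sketch of a birth-jump model on [0, L]:
# u_t = D u_xx + r * K[g(u)], where newborns from logistic growth g(u) are
# redistributed by a narrow Gaussian kernel K.
L, nx = 10.0, 201
x = np.linspace(0, L, nx)
dx = x[1] - x[0]
D, r, dt, sigma = 0.1, 1.0, 0.01, 0.2     # sigma small => near the RD limit

u = np.exp(-((x - L / 2) ** 2))           # initial hump
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
K /= K.sum(axis=1, keepdims=True)         # each row integrates to ~1

for _ in range(200):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    births = K @ (u * (1 - u))            # redistributed logistic growth
    u = u + dt * (D * lap + r * births)
    u[0] = u[-1] = 0.0                    # hostile boundary (Dirichlet)

print("total population:", u.sum() * dx)
```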
Bialas, Andrzej
2011-01-01
Intelligent sensors experience security problems very similar to those inherent to other kinds of IT products or systems. The assurance for these products or systems creation methodologies, like Common Criteria (ISO/IEC 15408) can be used to improve the robustness of the sensor systems in high risk environments. The paper presents the background and results of the previous research on patterns-based security specifications and introduces a new ontological approach. The elaborated ontology and knowledge base were validated on the IT security development process dealing with the sensor example. The contribution of the paper concerns the application of the knowledge engineering methodology to the previously developed Common Criteria compliant and pattern-based method for intelligent sensor security development. The issue presented in the paper has a broader significance in terms that it can solve information security problems in many application domains. PMID:22164064
An Integrated Environment for Efficient Formal Design and Verification
NASA Technical Reports Server (NTRS)
1998-01-01
The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.
Hastings, Janna; Chepelev, Leonid; Willighagen, Egon; Adams, Nico; Steinbeck, Christoph; Dumontier, Michel
2011-01-01
Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA). PMID:21991315
Multi Agent Reward Analysis for Learning in Noisy Domains
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian K.
2005-01-01
In many multi agent learning problems, it is difficult to determine, a priori, the agent reward structure that will lead to good performance. This problem is particularly pronounced in continuous, noisy domains ill-suited to the simple table backup schemes commonly used in TD(lambda)/Q-learning. In this paper, we present a new reward evaluation method that allows the tradeoff between coordination among the agents and the difficulty of the learning problem each agent faces to be visualized. This method is independent of the learning algorithm and is only a function of the problem domain and the agents' reward structure. We then use this reward efficiency visualization method to determine an effective reward without performing extensive simulations. We test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and where their actions are noisy (e.g., the agents' movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two-order-of-magnitude speedup in selecting a good reward. Most importantly, it allows one to quickly create and verify rewards tailored to the observational limitations of the domain.
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
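The core idea can be sketched without a full NSGA-II implementation: split a single objective into two components (here an arbitrary split of a random binary quadratic, not a true elementary-landscape decomposition) and keep the Pareto set, which provably contains the single-objective optimum. Sizes are toy assumptions.

```python
import itertools
import numpy as np

# Multi-objectivisation sketch: optimise (f1, f2) with f = f1 + f2 and check
# that the optimum of f is Pareto-optimal for the decomposed problem.
rng = np.random.default_rng(0)
n = 10
W1, W2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
f1 = lambda x: float(x @ W1 @ x)
f2 = lambda x: float(x @ W2 @ x)

pts = [np.array(p) for p in itertools.product([0, 1], repeat=n)]
vals = [(f1(x), f2(x), x) for x in pts]
pareto = [v for v in vals
          if not any(a[0] <= v[0] and a[1] <= v[1] and
                     (a[0] < v[0] or a[1] < v[1]) for a in vals)]
best = min(vals, key=lambda v: v[0] + v[1])
print(len(pareto), "Pareto-optimal points;",
      "optimum of f1+f2 in Pareto set:", any(p is best for p in pareto))
```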
Quasilinear parabolic variational inequalities with multi-valued lower-order terms
NASA Astrophysics Data System (ADS)
Carl, Siegfried; Le, Vy K.
2014-10-01
In this paper, we provide an analytical framework for a multi-valued parabolic variational inequality in a cylindrical domain: find a solution in some closed and convex subset, together with a selection of the multi-valued lower-order term, satisfying the variational inequality, where A is a time-dependent quasilinear elliptic operator and the multi-valued function is assumed to be upper semicontinuous only, so that Clarke's generalized gradient is included as a special case. Thus, parabolic variational-hemivariational inequalities are special cases of the problem considered here. The extension of parabolic variational-hemivariational inequalities to the general class of multi-valued problems considered in this paper is not only of disciplinary interest, but is motivated by the needs of applications. The main goals are as follows. First, we provide an existence theory for the above-stated problem under coercivity assumptions. Second, in the noncoercive case, we establish an appropriate sub-supersolution method that allows us to obtain existence, comparison, and enclosure results. Third, the order structure of the solution set enclosed by sub- and supersolutions is revealed. In particular, it is shown that the solution set within the sector of sub-supersolutions is a directed set. As an application, a multi-valued parabolic obstacle problem is treated.
Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong
2016-02-01
In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data can negatively affect the classification performance of machine learning algorithms. Solutions for handling imbalanced datasets have been proposed, but their application to ADME modeling tasks is underexplored. In this paper, various strategies, including cost-sensitive learning and resampling methods, were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that models developed with resampling strategies outperformed the cost-sensitive classification models, especially with oversampled data, where misclassification rates for the minority class were 0.11 and 0.14 on the training and test sets, respectively. A consensus model with an enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study compares numerous rebalancing strategies and demonstrates the effectiveness of oversampling methods for dealing with imbalanced permeability data.
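A minimal sketch contrasting the two families of strategies on synthetic imbalanced data (a stand-in for the Caco-2 set, which is not reproduced here), using scikit-learn's SVC with class weighting versus simple random oversampling:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Cost-sensitive learning vs. random oversampling on synthetic imbalanced data.
X, y = make_classification(n_samples=2000, weights=[0.75], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# cost-sensitive: reweight errors on the minority class
cs = SVC(class_weight="balanced").fit(Xtr, ytr)

# oversampling: duplicate random minority examples until classes balance
rng = np.random.default_rng(0)
minority = np.where(ytr == 1)[0]
extra = rng.choice(minority, size=(ytr == 0).sum() - minority.size)
ov = SVC().fit(np.vstack([Xtr, Xtr[extra]]), np.concatenate([ytr, ytr[extra]]))

for name, clf in [("cost-sensitive", cs), ("oversampled", ov)]:
    tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
    print(f"{name}: minority-class miss rate = {fn / (fn + tp):.3f}")
```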
Boundary Approximation Methods for Solving Elliptic Problems on Unbounded Domains
NASA Astrophysics Data System (ADS)
Li, Zi-Cai; Mathon, Rudolf
1990-08-01
Boundary approximation methods with partial solutions are presented for solving a complicated problem on an unbounded domain, with both a crack singularity and a corner singularity. Also an analysis of partial solutions near the singular points is provided. These methods are easy to apply, have good stability properties, and lead to highly accurate solutions. Hence, boundary approximation methods with partial solutions are recommended for the treatment of elliptic problems on unbounded domains provided that piecewise solution expansions, in particular, asymptotic solutions near the singularities and infinity, can be found.
Addressing the computational cost of large EIT solutions.
Boyle, Alistair; Borsic, Andrea; Adler, Andy
2012-05-01
Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
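The kind of scaling measurement described can be sketched by timing a direct sparse factorization against an iterative solve on a growing 2-D Laplacian, a rough stand-in for the EIT FEM systems; the grid sizes are illustrative.

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Timing sketch: direct sparse LU vs. conjugate gradients on a growing 2-D
# 5-point Laplacian of increasing node density.
for m in (40, 80, 160):                     # m x m grid, n = m^2 unknowns
    I = sp.eye(m)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
    b = np.ones(m * m)

    t0 = time.perf_counter()
    spla.splu(A).solve(b)                   # direct factorize-and-solve
    t1 = time.perf_counter()
    spla.cg(A, b)                           # unpreconditioned iterative solve
    t2 = time.perf_counter()
    print(f"n={m * m:6d}  direct {t1 - t0:7.3f}s  cg {t2 - t1:7.3f}s")
```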
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111
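The sum-of-outer-products update can be sketched compactly: each pass updates one (atom, coefficient) pair in closed form with hard thresholding. Problem sizes and the threshold below are illustrative, and the image-reconstruction extensions are omitted.

```python
import numpy as np

# Sum-of-outer-products dictionary learning sketch: approximate Y by
# sum_j d_j c_j^T, cycling over atoms with a hard-thresholded coefficient
# update and a closed-form atom update.
rng = np.random.default_rng(0)
n, N, J, thr = 16, 500, 8, 0.5
Y = rng.normal(size=(n, N))
D = rng.normal(size=(n, J))
D /= np.linalg.norm(D, axis=0)
C = np.zeros((J, N))

for _ in range(20):                               # block coordinate sweeps
    for j in range(J):
        E = Y - D @ C + np.outer(D[:, j], C[j])   # residual excluding atom j
        c = E.T @ D[:, j]
        c[np.abs(c) < thr] = 0.0                  # sparse coefficient update
        if np.any(c):
            d = E @ c
            D[:, j] = d / np.linalg.norm(d)       # closed-form atom update
        C[j] = c

print("relative fit error:", np.linalg.norm(Y - D @ C) / np.linalg.norm(Y))
```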
Learning to Search. From Weak Methods to Domain-Specific Heuristics.
1984-09-01
…move as undesirable. The remaining productions interact with MARKED-BAD, providing the labeling of states it requires for application. One of these, NOTE… to previously visited states, it did not attempt to learn from this knowledge, and simply abandoned these undesirable paths. From the two remaining… the search strategy that SAGE employs. Many problems (such as winning a chess game) are so complex that they can only be solved by breaking the task up.
Verified OS Interface Code Synthesis
2016-12-01
…in this case we are using the ARMv7 processor architecture). The application accomplishes this task by issuing the swi ("software interrupt")… manual version 4.0.0) on the ARM architecture. To alleviate this problem, we developed an XML-based domain-specific language (DSL) in which each… Untyped Retype (Table 2.1: seL4 Architecture-Independent System Calls)… of r2, r3, r4 and r5 into the message registers of the thread's IPC buffer and…
Hybrid fully nonlinear BEM-LBM numerical wave tank with applications in naval hydrodynamics
NASA Astrophysics Data System (ADS)
Mivehchi, Amin; Grilli, Stephan T.; Dahl, Jason M.; O'Reilly, Chris M.; Harris, Jeffrey C.; Kuznetsov, Konstantin; Janssen, Christian F.
2017-11-01
The complex dynamic response of ships in waves is typically modeled with nonlinear potential flow theory, usually solved with a higher-order BEM. In some cases, the viscous/turbulent effects around a structure and in its wake need to be accurately modeled to capture the salient physics of the problem. Here, we present a fully 3D model based on a hybrid perturbation method. In this method, the velocity and pressure are decomposed as the sum of an inviscid flow and a viscous perturbation. The inviscid part is solved over the whole domain using a BEM based on cubic spline elements. These inviscid results are then used to force a near-field perturbation solution on a smaller domain, which is solved with a NS model based on LBM-LES and implemented on GPUs. The BEM solution for large grids is greatly accelerated by using a parallelized FMM, which is efficiently implemented on large and small clusters, yielding almost linear scaling with the number of unknowns. A new representation of corners and edges is implemented, which improves the global accuracy of the BEM solver, particularly for moving boundaries. We present model results and the recent improvements of the BEM, alongside results of the hybrid model, for applications to problems in naval hydrodynamics. Office of Naval Research Grants N000141310687 and N000141612970.
A prototype system based on visual interactive SDM called VGC
NASA Astrophysics Data System (ADS)
Jia, Zelu; Liu, Yaolin; Liu, Yanfang
2009-10-01
In many application domains, data is collected and referenced by its geo-spatial location. Spatial data mining, the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis, and it demands efficient solutions for many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and deeper domain knowledge. In this paper, a visual interactive spatial data mining prototype system (visual geo-classify) based on VC++6.0 and MapObject2.0 is designed and developed. The basic spatial data mining algorithms used are decision trees and Bayesian networks, and data classification is realized through training, learning, and the integration of the two. The results indicate that it is a practical and extensible visual interactive spatial data mining tool.
An extension of the finite cell method using boolean operations
NASA Astrophysics Data System (ADS)
Abedian, Alireza; Düster, Alexander
2017-05-01
In the finite cell method, the fictitious domain approach is combined with high-order finite elements. The geometry of the problem is taken into account by integrating the finite cell formulation over the physical domain to obtain the corresponding stiffness matrix and load vector. In this contribution, an extension of the FCM is presented wherein both the physical and fictitious domain of an element are simultaneously evaluated during the integration. In the proposed extension of the finite cell method, the contribution of the stiffness matrix over the fictitious domain is subtracted from the cell, resulting in the desired stiffness matrix which reflects the contribution of the physical domain only. This method results in an exponential rate of convergence for porous domain problems with a smooth solution and accurate integration. In addition, it reduces the computational cost, especially when applying adaptive integration schemes based on the quadtree/octree. Based on 2D and 3D problems of linear elastostatics, numerical examples serve to demonstrate the efficiency and accuracy of the proposed method.
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
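A minimal sketch of the underlying spectral construction: expand a delta at a vertex in the graph-Laplacian eigenbasis and filter it with band-pass kernels at several scales, yielding a multi-resolution signature of local topology. The random graph, kernel, and scales are illustrative assumptions.

```python
import numpy as np

# Spectral graph wavelet sketch on an irregular (non-lattice) graph.
rng = np.random.default_rng(1)
n = 30
W = np.triu((rng.random((n, n)) < 0.15).astype(float), 1)
W = W + W.T                                   # random undirected graph
L = np.diag(W.sum(1)) - W                     # combinatorial Laplacian
lam, U = np.linalg.eigh(L)

g = lambda x, t: x * t * np.exp(-x * t)       # simple band-pass kernel

def wavelet_signature(v, scales=(0.5, 2.0, 8.0)):
    delta = np.zeros(n)
    delta[v] = 1.0
    coeffs = U.T @ delta                      # spectral coefficients of delta
    # value at v of the wavelet centred at v, one number per scale
    return [float(U @ (g(lam, t) * coeffs) @ delta) for t in scales]

print("vertex 0 multi-scale signature:", wavelet_signature(0))
```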
Third CLIPS Conference Proceedings, volume 2
NASA Technical Reports Server (NTRS)
Riley, Gary (Editor)
1994-01-01
Expert systems are computer programs which emulate human expertise in well defined problem domains. The C Language Integrated Production System (CLIPS) is an expert system building tool, developed at the Johnson Space Center, which provides a complete environment for the development and delivery of rule and/or object based expert systems. CLIPS was specifically designed to provide a low cost option for developing and deploying expert system applications across a wide range of hardware platforms. The development of CLIPS has helped to improve the ability to deliver expert system technology throughout the public and private sectors for a wide range of applications and diverse computing environments. The Third Conference on CLIPS provided a forum for CLIPS users to present and discuss papers relating to CLIPS applications, uses, and extensions.
Segmentation of medical images using explicit anatomical knowledge
NASA Astrophysics Data System (ADS)
Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee
1999-07-01
Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems: 3D thoracic CT, chest X-rays, and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating the representation of anatomical knowledge.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
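One natural baseline strategy, greedy assignment of the largest grids to the least-loaded processor, can be sketched as follows; the grid sizes are illustrative, and this is not necessarily one of the three strategies evaluated in the paper.

```python
import heapq

# Greedy largest-first load balancing: assign each grid to the currently
# least-loaded processor.
def greedy_balance(grid_sizes, nproc):
    heap = [(0, p, []) for p in range(nproc)]      # (load, proc id, grids)
    heapq.heapify(heap)
    for g, size in sorted(enumerate(grid_sizes), key=lambda t: -t[1]):
        load, p, grids = heapq.heappop(heap)
        heapq.heappush(heap, (load + size, p, grids + [g]))
    return sorted(heap, key=lambda t: t[1])

grids = [900, 550, 420, 400, 310, 120, 80]         # points per overset grid
for load, p, assigned in greedy_balance(grids, 3):
    print(f"proc {p}: load={load:4d}  grids={assigned}")
```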
GPU accelerated FDTD solver and its application in MRI.
Chi, J; Liu, F; Jin, J; Mason, D G; Crozier, S
2010-01-01
The finite difference time domain (FDTD) method is a popular technique for computational electromagnetics (CEM). The large computational power often required, however, has been a limiting factor for its applications. In this paper, we present a graphics processing unit (GPU)-based parallel FDTD solver and its successful application to the investigation of a novel B1 shimming scheme for high-field magnetic resonance imaging (MRI). The optimized shimming scheme exhibits considerably improved transmit B1 profiles. The GPU implementation dramatically shortened the runtime of FDTD simulation of the electromagnetic field compared with its CPU counterpart. This acceleration made the investigation possible and will pave the way for other studies of large-scale computational electromagnetic problems in modern MRI which were previously impractical.
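The update structure that makes FDTD so amenable to GPUs can be sketched in one dimension: two data-parallel curl updates per time step, shown here in plain NumPy on the CPU and in normalized units (all sizes are illustrative).

```python
import numpy as np

# 1-D vacuum Yee/FDTD sketch: each update line is embarrassingly
# data-parallel, which is what a GPU implementation exploits.
nx, nt = 400, 600
dx = 1.0
dt = 0.5 * dx                       # Courant-stable step for c = 1
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)

for n in range(nt):
    Hy += dt / dx * (Ez[1:] - Ez[:-1])              # update H from curl E
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])        # update E from curl H
    Ez[nx // 4] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(Ez).max())
```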
A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains
NASA Technical Reports Server (NTRS)
Kandil, Osama A.
1998-01-01
Over the past few years, the development of interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system, which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometry of the interfaced domains exactly matches even though the spatial discretization within each domain may differ. As such, the spatial modeling of each domain, the interface elements, and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.
Optimization of Ferroelectric Ceramics by Design at the Microstructure Level
NASA Astrophysics Data System (ADS)
Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.
2010-05-01
Ferroelectric materials show remarkable physical behaviors that make them essential for many devices, and they have been extensively studied for applications in nonvolatile random access memory (NvRAM) and high-speed random access memories. Although ferroelectric ceramics (polycrystals) are easy to manufacture and to modify compositionally, and represent the widest application area of these materials, computational and theoretical studies are sparse, owing in part to the large number of constituent atoms. Macroscopic properties of ferroelectric polycrystals are dominated by inhomogeneities at the crystallographic domain/grain level. The orientation of grains/domains is critical to the electromechanical response of single-crystalline and polycrystalline materials. Polycrystalline materials can exhibit better macroscopic performance through design of the domain/grain configuration at the domain-size scale. This suggests that piezoelectric properties can be optimized by a proper choice of the parameters that control the distribution of grain orientations. Nevertheless, this choice is complicated, and it is impossible to analyze all possible combinations of the distribution parameters or the angles themselves. Hence, we have implemented the stochastic optimization technique of simulated annealing, combined with homogenization, for the optimization problem. The mathematical homogenization theory of a piezoelectric medium is implemented in the finite element method (FEM) by solving the coupled equilibrium electrical and mechanical fields. This implementation enables study of the dependence of the macroscopic electromechanical properties of typical crystalline and polycrystalline ferroelectric ceramics on grain orientation.
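The optimization loop described can be sketched with a cheap stand-in objective in place of the FEM-homogenized piezoelectric coefficient, which is far too heavy to inline; the cooling schedule, perturbation scale, and objective are illustrative assumptions.

```python
import numpy as np

# Simulated annealing over grain orientation angles.
rng = np.random.default_rng(0)
ngrains = 50
angles = rng.uniform(0, np.pi, ngrains)

def objective(a):
    # placeholder property: reward strongly aligned grain polarizations
    return -abs(np.mean(np.cos(a)))

T, cool = 1.0, 0.995
cur = objective(angles)
best = cur
for _ in range(20000):
    cand = angles.copy()
    cand[rng.integers(ngrains)] += rng.normal(scale=0.3)  # perturb one grain
    val = objective(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse
    if val < cur or rng.random() < np.exp((cur - val) / T):
        angles, cur = cand, val
        best = min(best, cur)
    T *= cool
print("best objective found:", best)
```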
ERIC Educational Resources Information Center
Glenn, Margaret K.; Diaz, Sebastian R.; Hawley, Carolyn
2009-01-01
Professionals in the field of addictions view problems associated with recovery management across multiple domains. This exploratory study utilized concept mapping and pattern matching methodology to conceptualize the resulting 7 domains of concern for treatment and aftercare of problem and pathological gamblers. The information can be used by…
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
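An empirical counterpart to such derived formulas can be sketched by simulating ranked search over synthetic match scores and measuring the expected fraction of the database visited; the Gaussian score distributions are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Empirical penetration-rate sketch: fraction of a ranked hypothesis database
# searched before the true match is reached.
rng = np.random.default_rng(0)
n_db, n_trials = 1000, 500
ranks = []
for _ in range(n_trials):
    impostors = rng.normal(0.0, 1.0, n_db)   # scores of wrong hypotheses
    genuine = rng.normal(1.5, 1.0)           # score of the true hypothesis
    # position of the true hypothesis when visited best-score-first
    ranks.append(1 + np.sum(impostors > genuine))
print(f"estimated penetration rate: {np.mean(ranks) / n_db:.3f}")
```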
Apps for Hearing Science and Care.
Paglialonga, Alessia; Tognola, Gabriella; Pinciroli, Francesco
2015-09-01
Our research aims at the identification and assessment of applications (referred to as apps) in the hearing health care domain. This research forum article presents an overview of the current availability, affordability, and variety of hearing-related apps. The available apps were reviewed by searching the leading platforms (iOS, Android, Windows Phone stores) using the keywords hearing, audiology, audio, auditory, speech, language, tinnitus, hearing loss, hearing aid, hearing system, cochlear implant, implantable device, auditory training, hearing rehabilitation, and assistive technology/tool/device. On the basis of the offered services, apps were classified into 4 application domains: (a) screening and assessment, (b) intervention and rehabilitation, (c) education and information, and (d) assistive tools. A large variety of apps are available in the hearing health care domain. These cover a wide range of services for people with hearing or communication problems as well as for hearing professionals, families, and informal caregivers. This evolution can potentially bring considerable advantages and improved outcomes in the field of hearing health care. Nevertheless, potential risks and threats (e.g., safety, quality, effectiveness, privacy, and regulation) should not be overlooked. Significant research, particularly in terms of assessment and guidance, is still needed for the informed, aware, and safe adoption of hearing-related apps by patients and professionals.
Growth control of genetically modified cells using an antibody/c-Kit chimera.
Kaneko, Etsuji; Kawahara, Masahiro; Ueda, Hiroshi; Nagamune, Teruyuki
2012-05-01
Gene therapy has been regarded as an innovative potential treatment against serious congenital diseases. However, applications of gene therapy remain limited, partly because its clinical success depends on therapeutic gene-transduced cells acquiring a proliferative advantage. To address this problem, we have developed the antigen-mediated genetically modified cell amplification (AMEGA) system, which uses chimeric receptors to enable the selective proliferation of gene-transduced cells. In this report, we describe mimicry of c-Kit signaling and its application to the AMEGA system. We created an antibody/c-Kit chimera in which the extracellular domain of c-Kit is replaced with an anti-fluorescein single-chain Fv antibody fragment and the extracellular D2 domain of the erythropoietin receptor. A genetically modified mouse pro-B cell line carrying this chimera showed selective expansion in the presence of fluorescein-conjugated BSA (BSA-FL) as a growth inducer. By further engineering the transmembrane domain of the chimera to reduce interchain interaction we attained stricter ligand-dependency. Since c-Kit is an important molecule in the expansion of hematopoietic stem cells (HSCs), this antibody/c-Kit chimera could be a promising tool for gene therapy targeting HSCs. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Cross-domain latent space projection for person re-identification
NASA Astrophysics Data System (ADS)
Pu, Nan; Wu, Song; Qian, Li; Xiao, Guoqiang
2018-04-01
In this paper, we study the problem of person re-identification and propose a cross-domain latent space projection (CDLSP) method to address the absence or insufficiency of labeled data in the target domain. Under the assumption that the visual features in the source and target domains share a similar geometric structure, we transform the visual features from both domains into a common latent space by optimizing the objective function defined in the manifold alignment method. Moreover, the proposed objective function takes into account re-identification-specific knowledge with the aim of improving performance under complex situations. Extensive experiments conducted on four benchmark datasets show that the proposed CDLSP outperforms or is competitive with state-of-the-art methods for person re-identification.
Advancing MODFLOW Applying the Derived Vector Space Method
NASA Astrophysics Data System (ADS)
Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.
2015-12-01
The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by solving local problems exclusively), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, covering symmetric, non-symmetric and indefinite problems, were demonstrated previously [1, 2]. For these problems, the DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present our progress in parallelizing MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non-overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity", Geofísica Internacional, 2015 (in press).
Network representations of angular regions for electromagnetic scattering
2017-01-01
Network modeling in electromagnetics is an effective technique for treating scattering problems involving canonical and complex structures. Geometries constituted of angular regions (wedges) together with planar layers can now be approached with the Generalized Wiener-Hopf Technique supported by network representation in the spectral domain. Even though the network representations in spectral planes are of great importance by themselves, the aim of this paper is to present a theoretical basis and a general procedure for the formulation of complex scattering problems using network representation for the Generalized Wiener-Hopf Technique, starting essentially from the wave equation. In particular, while the spectral network representations are relatively well known for planar layers, the network modeling of an angular region requires a new theory that is developed in this paper. With this theory we complete the formulation of a network methodology whose effectiveness is demonstrated by application to a complex scattering problem, with practical solutions given in terms of GTD/UTD diffraction coefficients and total far fields for engineering applications. The methodology can be applied to other fields of physics. PMID:28817573
Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods
NASA Astrophysics Data System (ADS)
Ma, Chengbin; Hori, Yoichi
Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Though FOC's theoretical advantages are easy to appreciate, realization remains somewhat problematic. Since fractional order systems are infinite-dimensional, a proper approximation by a finite difference equation is needed to realize the designed fractional order controllers. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with the baseline case. The proposed sampling-time scaling property is used to calculate the baseline case with full memory length. This novel discretization method is based on the classical trapezoidal rule but with scaled sampling time. Comparative studies show that its good performance and simple algorithm make the Short Memory Principle method the most practical choice. FOC research is still at an early stage, but its applications in modeling and its robustness against non-linearities reveal promising aspects. Parallel to the development of FOC theories, applying FOC to various control problems is also crucially important and a top-priority issue.
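As a concrete illustration of the realization issue, here is a minimal sketch of the standard Grünwald-Letnikov discretization of a fractional derivative with a truncated history, i.e. the Short Memory Principle; it is a generic textbook scheme, not the paper's sampling-time-scaling baseline, and the parameter names are illustrative.

```python
import numpy as np

def gl_derivative(x, alpha, dt, memory=None):
    """Grunwald-Letnikov approximation of the alpha-order derivative of
    samples x with step dt; `memory` (in seconds) truncates the history,
    which is the Short Memory Principle."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    L = n if memory is None else min(n, max(1, int(memory / dt)))
    w = np.empty(L)                       # w_k = (-1)^k * C(alpha, k)
    w[0] = 1.0
    for k in range(1, L):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    y = np.zeros(n)
    for i in range(n):
        m = min(i + 1, L)                 # only the last m samples contribute
        y[i] = np.dot(w[:m], x[i::-1][:m]) / dt ** alpha
    return y
```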
Automation in the graphic arts
NASA Astrophysics Data System (ADS)
Truszkowski, Walt
1995-04-01
The CHIMES (Computer-Human Interaction Models) tool was designed to help solve a simply-stated but important problem: generating a user interface to a system that complies with established human factors standards and guidelines. Though designed for use in a fairly restricted user domain, i.e., spacecraft mission operations, the CHIMES system is essentially domain independent and applicable wherever graphical user interfaces are encountered. The CHIMES philosophy and operating strategy are quite simple. Instead of requiring a human designer to actively maintain in his or her head the now encyclopedic knowledge that human factors and user interface specialists have evolved, CHIMES incorporates this information in its knowledge bases. When directed to evaluate a design, CHIMES determines and accesses the appropriate knowledge, evaluates the design against that information, determines whether the design complies with the selected guidelines, and suggests corrective actions if deviations from the guidelines are discovered. This paper provides an overview of the capabilities of the current CHIMES tool and discusses the potential integration of CHIMES-like technology in automated graphic arts systems.
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and can be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at the various SDSs were mutually referenced to complete one round of iteration, and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency-domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties over various layer thicknesses and optical property settings. It is expected that this algorithm can work with photon transport models in the frequency and time domains for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
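The mutual-referencing iteration can be sketched generically. In the sketch below, `fit_at_sds` is a hypothetical callback wrapping the layered diffusion model (it returns updated property estimates for one separation); the control flow, tolerances, and names are assumptions for illustration only.

```python
def iterate_layered_properties(fit_at_sds, sds_list, init_props,
                               tol=1e-6, max_rounds=50):
    """Refit the layered optical properties at each source-detector
    separation in turn, feeding each fit the latest estimates, and repeat
    until the estimates stabilize (one stable set obtained)."""
    props = dict(init_props)
    for _ in range(max_rounds):
        previous = dict(props)
        for sds in sds_list:
            props.update(fit_at_sds(sds, props))   # mutual referencing
        if all(abs(props[k] - previous[k]) < tol for k in props):
            break
    return props
```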
Distributed Evaluation Functions for Fault Tolerant Multi-Rover Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Turner, Kagan
2005-01-01
The ability to evolve fault tolerant control strategies for large collections of agents is critical to the successful application of evolutionary strategies to domains where failures are common. Furthermore, while evolutionary algorithms have been highly successful in discovering single-agent control strategies, extending such algorithms to multiagent domains has proven to be difficult. In this paper we present a method for shaping evaluation functions for agents that provides control strategies that are both tolerant to different types of failures and conducive to coordinated behavior in a multi-agent setting. This method relies on neither a centralized strategy (susceptible to single points of failure) nor a distributed strategy in which each agent uses a system-wide evaluation function (a severe credit assignment problem). In a multi-rover problem, we show that agents using our agent-specific evaluation perform up to 500% better than agents using the system evaluation. In addition, we show that agents are still able to maintain a high level of performance when up to 60% of the agents fail due to actuator, communication or controller faults.
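The middle ground between a global evaluation and fully local ones is commonly formalized as a difference evaluation. The sketch below shows the generic form (global utility minus the utility with the agent's action replaced by a counterfactual); this is one standard way to shape agent-specific evaluations, and the paper's exact shaping may differ.

```python
def difference_evaluation(global_eval, joint_actions, i, counterfactual=None):
    """D_i = G(z) - G(z_{-i}): the agent's marginal contribution to the
    system evaluation, with its own action swapped for a counterfactual
    (e.g. a no-op). Aligns local incentives with the global objective G
    while easing the credit assignment problem."""
    z = list(joint_actions)
    g_full = global_eval(z)
    z[i] = counterfactual
    return g_full - global_eval(z)
```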
Discovering objects in a blood recipient information system.
Qiu, D; Junghans, G; Marquardt, K; Kroll, H; Mueller-Eckhardt, C; Dudeck, J
1995-01-01
Application of object-oriented (OO) methodologies has generally been considered a solution to the problem of improving the software development process and managing the so-called software crisis. Among them, object-oriented analysis (OOA) is the most essential and is a vital prerequisite for the successful use of other OO methodologies. Though a good many OOA methods have been published, the aspect most important and common to all of them, discovering the object classes truly relevant to the given problem domain, remains a subject of intensive research. In this paper, using the successful development of a blood recipient information system as an example, we present our approach, which is based on the conceptual framework of responsibility-driven OOA. In the discussion, we also suggest that it may be inadequate to simply attribute the software crisis to the waterfall model of the software development life-cycle. We are convinced that the real causes of the failure of some software and information systems should be sought in the methodologies used in crucial phases of the software development process. Furthermore, a software system can also fail if object classes essential to the problem domain are not discovered, implemented and visualized, so that the real-world situation cannot be faithfully traced by the system.
Wireless Sensor Networks - Node Localization for Various Industry Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derr, Kurt; Manic, Milos
2015-06-01
Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem-dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS), that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require a higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrarily shaped domains. AFECETS simulation results show that the algorithm 1) provides a significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
NASA Astrophysics Data System (ADS)
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm proves flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
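For orientation, a minimal NumPy sketch of a generic linesearch-based proximal-gradient iteration follows: take a proximal step, then backtrack along the resulting direction until an Armijo-type sufficient-decrease condition holds. The step rule and tolerances are illustrative, and the paper's variable-metric scaling is omitted here.

```python
import numpy as np

def prox_grad_linesearch(x0, f, grad_f, g, prox_g,
                         step=1.0, delta=0.5, sigma=1e-4, iters=200):
    """Minimize f(x) + g(x), with f smooth (possibly nonconvex) and g convex.
    Direction d = prox_g(x - step*grad_f(x)) - x; backtrack on lambda."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = prox_g(x - step * grad_f(x), step)
        d = y - x
        h = grad_f(x) @ d + g(y) - g(x)   # h <= 0 by the prox definition
        lam, F_x = 1.0, f(x) + g(x)
        while f(x + lam * d) + g(x + lam * d) > F_x + sigma * lam * h:
            lam *= delta                  # shrink step until decrease holds
            if lam < 1e-12:
                break
        x = x + lam * d
    return x

# usage sketch (lasso): f = 0.5*||A@x - b||^2, g = mu*||x||_1,
# prox_g = lambda v, a: np.sign(v) * np.maximum(np.abs(v) - a * mu, 0.0)
```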
Multi-Objective Scheduling for the Cluster II Constellation
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Giuliano, Mark
2011-01-01
This paper describes the application of the MUSE multi-objective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.
Big data processing in the cloud - Challenges and platforms
NASA Astrophysics Data System (ADS)
Zhelev, Svetoslav; Rozeva, Anna
2017-12-01
Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge in both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed, the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, the most important and the most difficult to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
Research perspectives in the field of ground penetrating radars in Armenia
NASA Astrophysics Data System (ADS)
Baghdasaryan, Hovik; Knyazyan, Tamara; Hovhannisyan, Tamara
2014-05-01
Armenia is a country located in a very complicated region from a geophysical point of view. It sits at the junction of several tectonic plates and hosts many dormant volcanoes. The main danger is earthquakes; the last big disaster struck the northwest of contemporary Armenia in 1988. As a consequence, the main direction of geophysical research is monitoring and data analysis of seismic activity. The National Academy of Sciences of Armenia conducts these activities in the Institute of Geological Sciences and the Institute of Geophysics and Engineering Seismology. Research in the field of ground penetrating radars is considered in Armenia an advanced and promising complement to the research tools already in use. Armenia's previous achievements in radiophysics, antenna measurements and laser physics, together with existing relevant research, would permit initiating a promising new line of research into the theory and experimental study of ground penetrating radars. One of the key problems in the operation of ground penetrating radars is the correct analysis of the peculiarities of electromagnetic wave interaction with different layers of the earth. For this, the well-known methods for solving electromagnetic boundary problems are applied. In addition to the existing methods, our research group at the Fiber Optics Communication Laboratory of the State Engineering University of Armenia declares its interest in exploring the possibilities of a new, non-traditional method for solving boundary problems of electromagnetic wave interaction with the ground. This new method for solving boundary problems of electrodynamics is called the method of single expression (MSE) [1-3]. The distinctive feature of this method is that it does not represent the wave equation's solution as counter-propagating waves, i.e. it does not invoke the superposition principle. This permits solving linear and nonlinear (field-intensity-dependent) problems with the same exactness, without any approximations. The MSE is also favourable because it requires no absorbing boundary conditions at the edges of a truncated computational domain. In the MSE the computation starts from the rear side of the multilayer structure, which ensures the uniqueness of the solution without any artificial absorbing boundary conditions. The MSE's previous success in the optical domain gives us confidence that its use can be successfully extended to various problems of electromagnetic wave interaction with the layers of the earth and with objects buried in the ground. This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." 1. H.V. Baghdasaryan, T.M. Knyazyan, 'Problem of Plane EM Wave Self-action in Multilayer Structure: an Exact Solution', Optical and Quantum Electronics, vol. 31, 1999, pp. 1059-1072. 2. H.V. Baghdasaryan, T.M. Knyazyan, 'Modelling of strongly nonlinear sinusoidal Bragg gratings by the Method of Single Expression', Optical and Quantum Electronics, vol. 32, 2000, pp. 869-883. 3. H.V. Baghdasaryan, 'Basics of the Method of Single Expression: New Approach for Solving Boundary Problems in Classical Electrodynamics', Yerevan, Chartaraget, 2013.
A Testbed to Evaluate the FIWARE-Based IoT Platform in the Domain of Precision Agriculture.
Martínez, Ramón; Pastor, Juan Ángel; Álvarez, Bárbara; Iborra, Andrés
2016-11-23
Wireless sensor networks (WSNs) represent one of the most promising technologies for precision farming. Over the next few years, a significant increase in the use of such systems on commercial farms is expected. WSNs present a number of problems regarding scalability, interoperability, communications, connectivity with databases and data processing. Different Internet of Things middleware platforms are appearing to overcome these challenges. This paper checks whether one of these platforms, FIWARE, is suitable for the development of agricultural applications. To the authors' knowledge, there are no works that show how to use FIWARE in precision agriculture and study its appropriateness, its scalability and its efficiency for this kind of application. To do this, a testbed has been designed and implemented to simulate different deployments and load conditions. The testbed is a typical FIWARE application, complete, yet simple and comprehensible enough to show the main features and components of FIWARE, as well as the complexity of using this technology. Although the testbed has been deployed in a laboratory environment, its design is based on the analysis of an Internet of Things use case scenario in the domain of precision agriculture.
An Approach to Quad Meshing Based On Cross Valued Maps and the Ginzburg-Landau Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viertel, Ryan; Osting, Braxton
2017-08-01
A generalization of vector fields, referred to as N-direction fields or cross fields when N=4, has been recently introduced and studied for geometry processing, with applications in quadrilateral (quad) meshing, texture mapping, and parameterization. We make the observation that cross field design for two-dimensional quad meshing is related to the well-known Ginzburg-Landau problem from mathematical physics. This identification yields a variety of theoretical tools for efficiently computing boundary-aligned quad meshes, with provable guarantees on the resulting mesh, for example, the number of mesh defects and bounds on the defect locations. The procedure for generating the quad mesh is to (i) find a complex-valued "representation" field that minimizes the Dirichlet energy subject to a boundary constraint, (ii) convert the representation field into a boundary-aligned, smooth cross field, (iii) use separatrices of the cross field to partition the domain into four-sided regions, and (iv) mesh each of these four-sided regions using standard techniques. Under certain assumptions on the geometry of the domain, we prove that this procedure can be used to produce a cross field whose separatrices partition the domain into four-sided regions. To solve the energy minimization problem for the representation field, we use an extension of the Merriman-Bence-Osher (MBO) threshold dynamics method, originally conceived as an algorithm to simulate motion by mean curvature, to minimize the Ginzburg-Landau energy for the optimal representation field. Lastly, we demonstrate the method on a variety of test domains.
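To make the representation-field step concrete: on a flat, axis-aligned square one can encode a cross of angle theta as u = exp(4i*theta) and run MBO-style threshold dynamics (diffuse briefly, then renormalize to |u| = 1). The grid-based sketch below is a simplification for intuition only; the method summarized above operates on general domains with finite elements and boundary-aligned constraints.

```python
import numpy as np

def mbo_cross_field(n=64, iters=200, dt=0.1, seed=0):
    """Threshold dynamics for a boundary-aligned cross field on a square:
    diffuse the representation u = exp(4i*theta), re-impose the boundary,
    then threshold back to the unit circle."""
    rng = np.random.default_rng(seed)
    u = np.exp(4j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))
    def clamp(v):                     # axis-aligned boundary crosses: u = 1
        v[[0, -1], :] = 1.0
        v[:, [0, -1]] = 1.0
        return v
    u = clamp(u)
    for _ in range(iters):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = clamp(u + dt * lap)       # short diffusion (heat) step
        u /= np.abs(u) + 1e-12        # threshold: renormalize to |u| = 1
    return np.angle(u) / 4.0          # one representative angle per cross
```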
Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Auriault, Laurent
1996-01-01
It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both a single-frequency and a model broadband time-domain impedance boundary condition are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.
NASA Technical Reports Server (NTRS)
Hayes-Roth, Frederick; Erman, Lee D.; Terry, Allan; Hayes-Roth, Barbara
1992-01-01
We have recently begun a 4-year effort to develop a new technology foundation and associated methodology for the rapid development of high-performance intelligent controllers. Our objective in this work is to enable system developers to create effective real-time systems for control of multiple, coordinated entities in much less time than is currently required. Our technical strategy for achieving this objective is like that in other domain-specific software efforts: analyze the domain and task underlying effective performance, construct parametric or model-based generic components and overall solutions to the task, and provide excellent means for specifying, selecting, tailoring or automatically generating the solution elements particularly appropriate for the problem at hand. In this paper, we first present our specific domain focus, briefly describe the methodology and environment we are developing to provide a more regular approach to software development, and then later describe the issues this raises for the research community and this specific workshop.
Primal-mixed formulations for reaction-diffusion systems on deforming domains
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo
2015-10-01
We propose a finite element formulation for a coupled elasticity-reaction-diffusion system written in a fully Lagrangian form and governing the spatio-temporal interaction of species inside an elastic, or hyper-elastic, body. A primal weak formulation is the baseline model for the reaction-diffusion system written in the deformed domain, and a finite element method with piecewise linear approximations is employed for its spatial discretization. On the other hand, the strain is introduced as a mixed variable in the equations of elastodynamics, where it in turn acts as the coupling field needed to update the diffusion tensor of the modified reaction-diffusion system written in the deformed domain. The discrete mechanical problem yields a mixed finite element scheme based on row-wise Raviart-Thomas elements for stresses, Brezzi-Douglas-Marini elements for displacements, and piecewise constant pressure approximations. The application of the present framework to the study of several coupled biological systems on deforming geometries in two and three spatial dimensions is discussed, and some illustrative examples are provided and extensively analyzed.
Crowdsourcing in biomedicine: challenges and opportunities.
Khare, Ritu; Good, Benjamin M; Leaman, Robert; Su, Andrew I; Lu, Zhiyong
2016-01-01
The use of crowdsourcing to solve important but complex problems in biomedical and clinical sciences is growing and encompasses a wide variety of approaches. The crowd is diverse and includes online marketplace workers, health information seekers, science enthusiasts and domain experts. In this article, we review and highlight recent studies that use crowdsourcing to advance biomedicine. We classify these studies into two broad categories: (i) mining big data generated from a crowd (e.g. search logs) and (ii) active crowdsourcing via specific technical platforms, e.g. labor markets, wikis, scientific games and community challenges. Through describing each study in detail, we demonstrate the applicability of different methods in a variety of domains in biomedical research, including genomics, biocuration and clinical research. Furthermore, we discuss and highlight the strengths and limitations of different crowdsourcing platforms. Finally, we identify important emerging trends, opportunities and remaining challenges for future crowdsourcing research in biomedicine. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.
Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab
2009-02-01
An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.
NASA Astrophysics Data System (ADS)
Yuan, Qinghong; Song, Guangyao; Sun, Deyan; Ding, Feng
2014-10-01
Grain boundaries (GBs) in graphene prepared by chemical vapor deposition (CVD) greatly degrade the electrical and mechanical properties of graphene and thus hinder its applications in electronic devices. The seamless stitching of graphene flakes can avoid GBs, but it requires identical orientations of the graphene domains. In this letter, the graphene orientation on one of the most widely used catalyst surfaces, the Cu(100) surface, is explored by density functional theory (DFT) calculations. Our calculations demonstrate that a zigzag-edged hexagonal graphene domain on a Cu(100) surface has two equivalent, energetically preferred orientations, which are 30 degrees apart. Therefore, the fusion of graphene domains on a Cu(100) surface during CVD growth will inevitably lead to densely distributed GBs in the synthesized graphene. To solve this problem, a simple route that applies external strain to break the symmetry of the Cu(100) surface was proposed and shown to be efficient.
Blanchard-Fields, Fredda; Mienaltowski, Andrew; Seay, Renee Baldi
2007-01-01
Using the Everyday Problem Solving Inventory of Cornelius and Caspi, we examined differences in problem-solving strategy endorsement and effectiveness in domains of everyday functioning (instrumental, interpersonal, and a mixture of the two) and for four strategies (avoidance-denial, passive dependence, planful problem solving, and cognitive analysis). Consistent with past research, our research showed that older adults were more problem-focused than young adults in their approach to solving instrumental problems, whereas older adults selected more avoidance-denial strategies than young adults when solving interpersonal problems. Overall, older adults were also more effective than young adults when solving everyday problems, in particular interpersonal problems.
Orchestrating Distributed Resource Ensembles for Petascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldin, Ilya; Mandal, Anirban; Ruth, Paul
2014-04-24
Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable to design mechanisms that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through the development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.
An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Shivakumar, K. N.
1990-01-01
An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.
A New Domain Decomposition Approach for the Gust Response Problem
NASA Technical Reports Server (NTRS)
Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.
2002-01-01
A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach succeeds in providing robust and grid-independent solutions.
Adaptive Peircean decision aid project summary assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senglaub, Michael E.
2007-01-01
This effort's objective was to identify and hybridize a suite of technologies enabling the development of predictive decision aids for use principally in combat environments, but also in any complex information terrain. The technologies required included formal concept analysis for knowledge representation and information operations, Peircean reasoning to support hypothesis generation, Mill's canons to begin defining information operators that support the first two technologies, and co-evolutionary game theory to provide the environment/domain for assessing predictions from the reasoning engines. The intended application domain is the IED problem because of its inherently evolutionary nature. While a fully functioning integrated algorithm was not achieved, the hybridization and demonstration of the technologies was accomplished, and utility was demonstrated for a number of ancillary queries.
Extremal equilibria for reaction-diffusion equations in bounded domains and applications
NASA Astrophysics Data System (ADS)
Rodríguez-Bernal, Aníbal; Vidal-López, Alejandro
We show the existence of two special equilibria, the extremal ones, for a wide class of reaction-diffusion equations in bounded domains with several boundary conditions, including non-linear ones. They give bounds for the asymptotic dynamics and so for the attractor. Some results on the existence and/or uniqueness of positive solutions are also obtained. As a consequence, several well-known results on the existence and/or uniqueness of solutions for elliptic equations are revisited in a unified way, obtaining, in addition, information on the dynamics of the associated parabolic problem. Finally, we illustrate the use of the general results by applying them to the case of logistic equations. In fact, we obtain a detailed picture of the positive dynamics depending on the parameters appearing in the equation.
Patrick, Christopher J; Hajcak, Greg
2016-03-01
The National Institute of Mental Health's (NIMH) Research Domain Criteria (RDoC) initiative seeks to establish new dimensional conceptions of mental health problems, through the investigation of clinically relevant "process" constructs that have neurobiological as well as psychological referents. This special issue provides a detailed overview of the RDoC framework by NIMH officials Michael Kozak and Bruce Cuthbert, and spotlights RDoC-oriented investigative efforts by leading psychophysiological research groups as examples of how clinical science might be reshaped through application of RDoC principles. Accompanying commentaries highlight key aspects of the work by each group, and discuss reported methods/findings in relation to promises and challenges of the RDoC initiative more broadly. © 2016 Society for Psychophysiological Research.
Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems
NASA Astrophysics Data System (ADS)
Cianchi, Andrea; Maz'ya, Vladimir G.
2018-05-01
Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L² coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
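For orientation, the model problem behind these estimates can be written out. The display below is a standard statement of the p-Laplacian Dirichlet problem and of the type of second-order quantity involved, in common notation rather than the authors' exact formulation.

```latex
% p-Laplacian Dirichlet problem with square-integrable datum:
\begin{cases}
-\operatorname{div}\bigl(|\nabla u|^{p-2}\nabla u\bigr) = f & \text{in } \Omega,\\
u = 0 & \text{on } \partial\Omega,
\end{cases}
\qquad f \in L^2(\Omega),\quad p \in (1,\infty).
% Second-order regularity concerns the nonlinear flux |\nabla u|^{p-2}\nabla u,
% whose membership in W^{1,2}(\Omega) is the natural analogue of L^2-coercivity.
```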
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
NASA Astrophysics Data System (ADS)
Pskhu, A. V.
2017-12-01
We solve the first boundary-value problem in a non-cylindrical domain for a diffusion-wave equation with the Dzhrbashyan-Nersesyan operator of fractional differentiation with respect to the time variable. We prove an existence and uniqueness theorem for this problem, and construct a representation of the solution. We show that a sufficient condition for unique solubility is the condition of Hölder smoothness for the lateral boundary of the domain. The corresponding results for equations with Riemann-Liouville and Caputo derivatives are particular cases of the results obtained here.
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvijotham, Krishnamurthy; Low, Steven; Chertkov, Michael
2015-01-12
Power systems are undergoing unprecedented transformations with increased adoption of renewables and distributed generation, as well as the adoption of demand response programs. All of these changes, while making the grid more responsive and potentially more efficient, pose significant challenges for power systems operators. Conventional operational paradigms are no longer sufficient as the power system may no longer have big dispatchable generators with sufficient positive and negative reserves. This increases the need for tools and algorithms that can efficiently predict safe regions of operation of the power system. In this paper, we study energy functions as a tool to design algorithms for various operational problems in power systems. These have a long history in power systems and have been primarily applied to transient stability problems. In this paper, we take a new look at power systems, focusing on an aspect that has previously received little attention: convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex in these variables. We show that this corresponds naturally to standard operational constraints imposed in power systems. We show that the power flow equations can be solved using this approach, as long as the solution lies within the convexity domain. We outline various desirable properties of solutions in the convexity domain and present simple numerical illustrations supporting our results.
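A small numerical companion to the convexity claim: for a twice-differentiable energy-like function, a region lies inside the convexity domain if the Hessian is positive semidefinite throughout it. The sketch below checks this on sampled points; the function names are illustrative, and sampling only gives evidence, not a certificate.

```python
import numpy as np

def inside_convexity_domain(hessian, sample_points, tol=1e-9):
    """Check convexity of an energy-like function over a sampled region:
    the smallest Hessian eigenvalue must be (numerically) nonnegative at
    every sampled point of voltage magnitudes and phases."""
    for x in sample_points:
        if np.linalg.eigvalsh(hessian(x)).min() < -tol:
            return False
    return True
```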
Portugal, Flávia Batista; Campos, Mônica Rodrigues; Gonçalves, Daniel Almeida; Mari, Jair de Jesus; Fortes, Sandra Lúcia Correia Lima
2016-02-01
Quality of life (QoL) is a subjective construct, which can be negatively associated with factors such as mental disorders and stressful life events (SLEs). This article seeks to identify the association of socioeconomic and demographic variables, common mental disorders, symptoms suggestive of depression and anxiety, and SLEs with QoL in patients seen in Primary Care (PC). This is a cross-sectional study conducted with 1,466 patients seen in PC centers in the cities of São Paulo and Rio de Janeiro in 2009 and 2010. Bivariate analysis was performed using the t-test, and four multiple linear regressions were fitted, one for each QoL domain. The scores for the physical, psychological, social relations and environment domains were, respectively, 64.7, 64.2, 68.5 and 49.1. In the multivariate analysis, the physical domain was associated with health problems and discrimination; the psychological domain with discrimination; the social relations domain with financial/structural problems, external causes and health problems; and the environment domain with financial/structural problems, external causes and discrimination. Mental health variables, health problems and financial/structural problems were the factors negatively associated with QoL.
NASA Astrophysics Data System (ADS)
Fallahi, Arya; Oswald, Benedikt; Leidenberger, Patrick
2012-04-01
We study a 3-dimensional, dual-field, fully explicit method for the solution of Maxwell's equations in the time domain on unstructured, tetrahedral grids. The algorithm uses the element level time domain (ELTD) discretization of the electric and magnetic vector wave equations. In particular, the suitability of the method for the numerical analysis of nanometer-structured systems in the optical region of the electromagnetic spectrum is investigated. The details of the theory and its implementation as a computer code are introduced, and its convergence behavior as well as the conditions for stable time-domain integration are examined. Here, we restrict ourselves to non-dispersive dielectric material properties, since dielectric dispersion will be treated in a subsequent paper. Analytically solvable problems are analyzed in order to benchmark the method. Finally, a dielectric microlens is considered to demonstrate the potential of the method. A flexible method of 2nd-order accuracy is obtained that is applicable to a wide range of nano-optical configurations and can be a serious competitor to more conventional finite difference time domain schemes, which operate only on hexahedral grids. The ELTD scheme can resolve geometries with a wide span of characteristic length scales and with the appropriate level of detail, using small tetrahedra where delicate, physically relevant details must be modeled.
Toward More Accurate Iris Recognition Using Cross-Spectral Matching.
Nalla, Pattabhi Ramaiah; Kumar, Ajay
2017-01-01
Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions. However, with the availability of a variety of iris sensors deployed for iris imaging under different illuminations/environments, significant performance degradation is expected when matching iris images acquired under two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random fields model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation that is capable of learning domain knowledge. Our approach of estimating corresponding visible iris patterns from the synthesis of iris patches in the near-infrared iris images achieves superior results for cross-spectral iris recognition. In this paper, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondences is proposed and evaluated. This paper presents experimental results from three publicly available databases, the PolyU cross-spectral iris image database, IIITD CLI and the UND database, and achieves superior results for cross-sensor and cross-spectral iris matching.
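The classification backbone mentioned above, naive Bayes nearest neighbor (NBNN), is simple to state: sum each query descriptor's squared distance to its nearest reference descriptor per class, and pick the class with the smallest total. The sketch below shows plain NBNN, without the paper's domain-adaptation and Markov-random-field stages.

```python
import numpy as np

def nbnn_classify(query_desc, class_desc):
    """query_desc: (m, d) array of local descriptors from the probe image.
    class_desc: {label: (r, d) array of reference descriptors per class}.
    Returns the label with the minimum image-to-class distance."""
    scores = {}
    for label, refs in class_desc.items():
        d2 = ((query_desc[:, None, :] - refs[None, :, :]) ** 2).sum(axis=-1)
        scores[label] = d2.min(axis=1).sum()  # nearest neighbor per descriptor
    return min(scores, key=scores.get)
```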
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
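The two interpolations named above are easy to make explicit. A commonly used convex form for the Brinkman inverse permeability, and an analogous interpolation of the effective conductivity, are sketched below; the constants and the convention (design variable rho = 1 for fluid) are illustrative, not the paper's tuned values.

```python
def brinkman_alpha(rho, alpha_max=1e4, alpha_min=0.0, q=0.01):
    """Convex (Borrvall-Petersson-type) interpolation of the inverse
    permeability: rho = 0 (solid) gets alpha_max, penalizing velocity;
    rho = 1 (fluid) gets alpha_min."""
    return alpha_max + (alpha_min - alpha_max) * rho * (1.0 + q) / (rho + q)

def effective_conductivity(rho, k_fluid=1.0, k_solid=10.0, q=0.1):
    """Analogous interpolation of the effective thermal conductivity
    between the solid (rho = 0) and fluid (rho = 1) phases."""
    return k_solid + (k_fluid - k_solid) * rho * (1.0 + q) / (rho + q)
```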
A review of estimation of distribution algorithms in bioinformatics
Armañanzas, Rubén; Inza, Iñaki; Santana, Roberto; Saeys, Yvan; Flores, Jose Luis; Lozano, Jose Antonio; Peer, Yves Van de; Blanco, Rosa; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro
2008-01-01
Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics problems. Genetic algorithms, the most well-known and representative evolutionary search technique, have accounted for the majority of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain. PMID:18822112
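To ground the EDA idea, here is a minimal univariate EDA (UMDA) on bitstrings: estimate independent per-bit probabilities from the top individuals, then resample from that model. Real bioinformatics applications use richer probabilistic models (trees, Bayesian networks), which this sketch deliberately omits; the population sizes are arbitrary.

```python
import numpy as np

def umda(fitness, n_bits, pop=100, elite=50, gens=60, seed=0):
    """Univariate marginal distribution algorithm: the simplest EDA."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                          # initial uniform model
    best, best_fit = None, -np.inf
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int)   # sample the model
        f = np.array([fitness(x) for x in X])
        winners = X[np.argsort(f)[-elite:]]               # select promising
        p = winners.mean(axis=0).clip(0.05, 0.95)         # re-estimate model
        if f.max() > best_fit:
            best_fit, best = float(f.max()), X[f.argmax()].copy()
    return best, best_fit

# e.g. OneMax: umda(lambda x: x.sum(), n_bits=40) converges to all ones
```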
Caplan, M; Weissberg, R P; Grober, J S; Sivo, P J; Grady, K; Jacoby, C
1992-02-01
This study assessed the impact of school-based social competence training on skills, social adjustment, and self-reported substance use of 282 sixth and seventh graders. Training emphasized broad-based competence promotion in conjunction with domain-specific application to substance abuse prevention. The 20-session program comprised six units: stress management, self-esteem, problem solving, substances and health information, assertiveness, and social networks. Findings indicated positive training effects on Ss' skills in handling interpersonal problems and coping with anxiety. Teacher ratings revealed improvements in Ss' constructive conflict resolution with peers, impulse control, and popularity. Self-report ratings indicated gains in problem-solving efficacy. Results suggest some preventive impact on self-reported substance use intentions and excessive alcohol use. In general, the program was found to be beneficial for both inner-city and suburban students.
Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.
Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf
2018-05-12
We investigated a novel sparsity-based regularization method in the wavelet domain for the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry which consisted of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of the quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated locations of pacing, and compared with the performance of Tikhonov zeroth-order regularization. Results in the wavelet domain exhibited higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract: The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed, and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods and improves the reconstruction of recovery isochrones.
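The core computational idea, sparsity in a wavelet basis with an additional quadratic (elastic-net) term, can be sketched with plain ISTA and a one-level orthonormal Haar transform standing in for the paper's wavelet bases; the geometry matrix A and all constants are placeholders, and the multitask spatial coupling is omitted.

```python
import numpy as np

def haar(x):                    # one-level orthonormal Haar (even length)
    return np.concatenate([x[0::2] + x[1::2], x[0::2] - x[1::2]]) / np.sqrt(2)

def ihaar(c):                   # inverse transform (= adjoint transpose)
    n = len(c) // 2
    x = np.empty(2 * n)
    x[0::2] = (c[:n] + c[n:]) / np.sqrt(2)
    x[1::2] = (c[:n] - c[n:]) / np.sqrt(2)
    return x

def wavelet_elastic_net(A, b, lam1=1e-2, lam2=1e-3, iters=500):
    """ISTA for min_c 0.5||A ihaar(c) - b||^2 + lam1||c||_1 + 0.5 lam2||c||^2,
    heart-surface potentials x = ihaar(c), sparsity on coefficients c.
    A (body-surface rows x heart nodes, node count assumed even) maps
    heart-surface potentials to body-surface measurements b."""
    c = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2 + lam2          # Lipschitz constant bound
    for _ in range(iters):
        grad = haar(A.T @ (A @ ihaar(c) - b)) + lam2 * c  # haar = adjoint of ihaar
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft threshold
    return ihaar(c)
```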
The shifting zoom: new possibilities for inverse scattering on electrically large domains
NASA Astrophysics Data System (ADS)
Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien
2017-04-01
Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications, such as the investigation of cultural heritage, the characterization of foundations or subservices, and the identification of unexploded ordnance [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and require neither the inversion of a matrix nor the calculation of a matrix's elements. In fact, they are essentially based on the adjoint of the linearised scattering operator, which allows the inversion formula to be written, in the end, as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This allows a microwave tomography algorithm to be applied even to large investigation domains. However, the joining side by side of sequential investigation domains introduces a problem of limited (and asymmetric) maximum view angle for targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than adjacent investigation domains do, but the extra time actually required is minimal, because the matrix to be inverted, as well as its singular value decomposition, is calculated once and for all: what is repeated each time is only a fast matrix-vector multiplication. References [1] M. Pieraccini, L. Noferini, D. Mecatti, C. Atzeni, R. Persico, F. Soldovieri, Advanced Processing Techniques for Step-frequency Continuous-Wave Penetrating Radar: the Case Study of "Palazzo Vecchio" Walls (Firenze, Italy), Research on Nondestructive Evaluation, vol. 17, pp. 71-83, 2006. [2] N. Masini, R. Persico, E. Rizzo, A. Calia, M. T. Giannotta, G. Quarta, A. Pagliuca, "Integrated Techniques for Analysis and Monitoring of Historical Monuments: the case of S. Giovanni al Sepolcro in Brindisi (Southern Italy)", Near Surface Geophysics, vol. 8 (5), pp. 423-432, 2010. [3] E. Pettinelli, A. Di Matteo, E. Mattei, L. Crocco, F. Soldovieri, J. D. Redman, and A. P. Annan, "GPR response from buried pipes: Measurement on field site and tomographic reconstructions", IEEE Transactions on Geoscience and Remote Sensing, vol. 47, n. 8, pp. 2639-2645, Aug. 2009. [4] O. Lopera, E. C. Slob, N. Milisavljevic and S. Lambot, "Filtering soil surface and antenna effects from GPR data to enhance landmine detection", IEEE Transactions on Geoscience and Remote Sensing, vol. 45, n. 3, pp. 707-717, 2007. [5] R. Persico, "Introduction to Ground Penetrating Radar: Inverse Scattering and Data Processing", Wiley, 2014. [6] R. Persico, J. Sala, "The problem of the investigation domain subdivision in 2D linear inversions for large scale GPR data", IEEE Geoscience and Remote Sensing Letters, vol. 11, n. 7, pp. 1215-1219, doi 10.1109/LGRS.2013.2290008, July 2014. [7] R. Persico, F. Soldovieri, S.
Lambot, Shifting zoom in 2D linear inversions performed on GPR data gathered along an electrically large investigation domain, Proc. 16th International Conference on Ground Penetrating Radar GPR2016, Honk-Kong, June 13-16, 2016
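The bookkeeping behind the shifting zoom can be made concrete with a short numerical sketch. The Python fragment below is illustrative only: the matrix A, the truncation threshold, the window step, and the mapping from window to model coordinates are hypothetical stand-ins, not the operators of refs. [6-7]; it shows only why the per-window cost reduces to matrix-vector products once the SVD is computed once and for all.

```python
import numpy as np

# Sketch of the shifting-zoom bookkeeping: the local scattering matrix A
# (identical for every window position) is factorized once via SVD; each
# overlapped window then costs only matrix-vector products.
rng = np.random.default_rng(0)
n_data, n_model = 120, 200            # local data/unknown sizes (assumed)
A = rng.standard_normal((n_data, n_model))   # stand-in for the real operator

# One-off factorization, shared by all windows.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 0.05 * s[0]                # truncated SVD as regularization
s_inv = np.where(keep, 1.0 / s, 0.0)

def invert_window(d):
    """Regularized inversion of one window: only mat-vec products."""
    return Vt.T @ (s_inv * (U.T @ d))

# Slide overlapped windows along an electrically large profile.
trace = rng.standard_normal(1000)     # stand-in for the measured GPR data
model = np.zeros(2000)
counts = np.zeros(2000)
for start in range(0, 1000 - n_data + 1, 50):
    x = invert_window(trace[start:start + n_data])
    lo = 2 * start                    # toy mapping of window to model axis
    model[lo:lo + n_model] += x
    counts[lo:lo + n_model] += 1
model /= np.maximum(counts, 1)        # average the overlapped estimates
```

In this sketch the overlap between successive windows is what restores a symmetric view angle for targets near window edges; only the cheap loop body is repeated, while the SVD is amortized over all positions.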
Friedel, Michael J.
2001-01-01
This report describes a model for simulating transient, variably saturated, coupled water-heat-solute transport in heterogeneous, anisotropic, 2-dimensional ground-water systems with variable fluid density (VST2D). VST2D was developed to help understand the effects of natural and anthropogenic factors on the quantity and quality of variably saturated ground-water systems. The model solves simultaneously for one or more dependent variables (pressure, temperature, and concentration) at nodes in a horizontal or vertical mesh using a quasi-linearized generalized minimum residual (GMRES) method. This simultaneous approach is computationally faster than a sequential one. Heterogeneous and anisotropic conditions are implemented locally using individual element property descriptions. This implementation allows local principal directions to differ among elements and from the global solution domain coordinates. Boundary conditions can include time-varying pressure head (or moisture content), heat, and/or concentration; fluxes distributed along domain boundaries and/or at internal node points; convective moisture, heat, and solute fluxes along the domain boundaries; and/or unit hydraulic gradient along domain boundaries. Other model features include temperature- and concentration-dependent density (liquid and vapor) and viscosity, sorption and/or decay of a solute, and the capability to determine moisture content beyond residual, down to zero. These features are described in the documentation together with the development of the governing equations, application of the finite-element formulation (using the Galerkin approach), solution procedure, mass and energy balance considerations, input requirements, and output options. The VST2D model was verified; results included solutions for problems of water transport under isohaline and isothermal conditions, heat transport under isobaric and isohaline conditions, solute transport under isobaric and isothermal conditions, and coupled water-heat-solute transport. The first three problems considered in model verification were compared to either analytical or numerical solutions, whereas the coupled problem was compared to measured laboratory results for which no known analytic solutions or numerical models are available. The test results indicate the model is accurate and applicable for a wide range of conditions, including when water (liquid and vapor), heat (sensible and latent), and solute are coupled in ground-water systems. The cumulative residual errors for the coupled problem tested were less than 10⁻⁸ cubic centimeter per cubic centimeter, 10⁻⁵ mole per kilogram, and 10² calories per cubic meter for liquid water content, solute concentration, and heat content, respectively. This model should be useful to hydrologists, engineers, and researchers interested in studying coupled processes associated with variably saturated transport in ground-water systems.
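The shape of a quasi-linearized GMRES solve of the kind described above can be sketched in a few lines of Python. This is a toy stand-in, not the VST2D finite-element assembly: the tridiagonal Jacobian J, the residual, the dimensions, and the tolerances are all assumptions, chosen only to show the structure of linearize-solve-update per outer iteration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Toy quasi-linearized solve: at each outer iteration the coupled residual
# is linearized about the current iterate and the update is obtained with
# GMRES. J and the right-hand side are hypothetical, not VST2D matrices.
n = 300                                # e.g. nodes x unknowns (p, T, c), flattened
J = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)                         # stand-in forcing term

u = np.zeros(n)                        # current estimate of the coupled unknowns
for it in range(20):                   # quasi-linearization (Picard) loop
    r = J @ u - b                      # residual of the linearized coupled system
    if np.linalg.norm(r) < 1e-10:      # converged: residual is negligible
        break
    du, info = gmres(J, -r)            # inner GMRES solve for the update
    assert info == 0, "GMRES did not converge"
    u += du
```

Solving for all dependent variables in one flattened vector, as here, is what makes the simultaneous approach faster than cycling through pressure, temperature, and concentration sequentially.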
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines the image-domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem that arises when there are more materials than energy bins. This heuristic has three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the resulting solutions are sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
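The three-part heuristic can be illustrated with a short Python sketch. This is not the MARS-MD code: the attenuation matrix M, the voxel composition, the two-material subset size, and the rejection bound are all hypothetical; the sketch only shows how splitting, solving, and rejecting yields a solution that is sparse in the material domain.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

# Toy split/solve/reject decomposition: with more materials (4) than
# energy bins (3), enumerate sub-problems of fewer materials, solve each
# with non-negative least squares, reject infeasible solutions, and keep
# the best survivor. All numbers here are made up for illustration.
M = np.array([[0.9, 0.3, 0.5, 0.2],
              [0.4, 0.8, 0.6, 0.3],
              [0.2, 0.5, 0.9, 0.7]])      # bins x materials attenuation
y = M @ np.array([0.0, 1.2, 0.0, 0.4])    # voxel truly contains 2 materials

best, best_res = None, np.inf
for subset in itertools.combinations(range(M.shape[1]), 2):
    x_sub, res = nnls(M[:, list(subset)], y)   # (2) solve the sub-problem
    if np.any(x_sub > 5.0):                    # (3) rejection criterion (toy bound)
        continue
    if res < best_res:                         # keep the best surviving solution
        x = np.zeros(M.shape[1])
        x[list(subset)] = x_sub
        best, best_res = x, res

print(best)   # sparse in the material domain: zeros for rejected materials
```

Because every sub-problem is solved independently, a different constraint set (here just non-negativity and an upper bound) could be attached to each one, which is the flexibility the abstract highlights.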
Human Factors Research for Space Exploration: Measurement, Modeling, and Mitigation
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Allen, Christopher S.; Barshi, Immanuel; Billman, Dorrit; Holden, Kritina L.
2010-01-01
As part of NASA's Human Research Program, the Space Human Factors Engineering Project serves as the bridge between Human Factors research and Human Spaceflight applications. Our goal is to be responsive to the operational community while addressing issues at a sufficient level of abstraction to ensure that our tools and solutions generalize beyond the point design. In this panel, representatives from four of our research domains will discuss the challenges they face in solving current problems while also enabling future capabilities.