Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions within which the value function is constant. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
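To make the mechanism concrete, here is a minimal sketch of one dynamic-programming backup over a piecewise-constant value function; the 1-D partition, rewards, discount factor, and transition map are illustrative assumptions, not the paper's model.

```python
GAMMA = 0.95
regions = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]  # partition of a 1-D state space
value = [0.0, 0.0, 1.0]                          # one constant value per region

def region_of(x):
    for i, (lo, hi) in enumerate(regions):
        if lo <= x <= hi:
            return i
    raise ValueError(f"state {x} lies outside the partition")

def backup(reward, transition):
    # Each region is updated with a single number: its reward plus the
    # discounted value of the region its representative point maps to,
    # so the cost scales with the number of regions, not grid cells.
    return [reward[i] + GAMMA * value[region_of(transition((lo + hi) / 2))]
            for i, (lo, hi) in enumerate(regions)]

value = backup(reward=[0.0, 0.1, 1.0], transition=lambda x: min(x + 0.2, 1.0))
print(value)
```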
ERIC Educational Resources Information Center
Eyring, LeRoy
1980-01-01
Describes methods for using the high-resolution electron microscope in conjunction with other tools to reveal the identity and environment of atoms. Problems discussed include the ultimate structure of real crystalline solids, including defect structure, and the mechanisms of chemical reactions. (CS)
Ossa-Estrada, Diego Alejandro; Muñoz-Echeverri, Iván Felipe
2017-01-01
The commercial sexual exploitation of children is a public health problem and a serious violation of the rights of children and adolescents. The response to this problem has been affected by the meanings and practices of the actors involved. In order to contribute to a better understanding of the problem, a qualitative social study using a grounded theory approach was carried out between 2014 and 2015. The aim was to understand the meanings and practices regarding this issue of people who spend time in an area of the city center with a high presence of commercial sexual exploitation of children and adolescents. The techniques used were participant observation and semi-structured interviews. We found that the predominant conceptions lead to practices that aggravate and perpetuate rights violations. Although practices of protection towards victims were identified, these were limited to critical aspects of the context. Actions to eradicate commercial sexual exploitation should engage the community and its meanings regarding sexual exploitation, so as to strengthen the victim-protection practices already carried out and to reduce barriers to such practices.
Acuña, Daniel E; Parada, Víctor
2010-07-29
Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current best solutions in such a way that edges belonging to the optimal solution ("good" edges) were significantly more likely to stay than other edges ("bad" edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants "ran out of ideas." In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics.
Parallel-vector solution of large-scale structural analysis problems on supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1989-01-01
A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
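For reference, the serial version of the underlying kernel is a few lines with SciPy; the matrix below is a random symmetric positive definite stand-in rather than an actual structural stiffness matrix, and the paper's contribution lies in parallelizing and vectorizing the factorization itself.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

n = 200
A = np.random.rand(n, n)
K = A @ A.T + n * np.eye(n)      # symmetric positive definite by construction
f = np.random.rand(n)            # load vector

c, low = cho_factor(K)           # Choleski factorization: the expensive step
u = cho_solve((c, low), f)       # cheap triangular solves reuse the factor
print(np.allclose(K @ u, f))
```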
Constraint Logic Programming approach to protein structure prediction.
Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico
2004-11-30
The protein structure prediction problem is one of the most challenging problems in the biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cubic lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cubic lattice model allowed us to obtain acceptable results for a few small proteins. As a test, the implementation uses their (known) secondary structure and the presence of disulfide bridges as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique such as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in rapid software prototyping, in the easy encoding of heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.
2017-04-12
In this paper, we generalize the well-known index coding problem to exploit structure in the source data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie in (or can be well approximated by) a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. We also propose an efficient algorithm based on the alternating minimization approach to obtain near-optimal index codes for both the subspace-aware and -unaware cases. Our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
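The alternating-minimization idea can be sketched on the simplest related problem, low-rank factorization of data lying near a subspace; sizes and data below are synthetic, and the paper's actual index-coding objective is different.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 200))  # rank-3 data

r = 3
U = rng.standard_normal((50, r))
for _ in range(20):
    # Alternate two least-squares subproblems, each convex given the other.
    V = np.linalg.lstsq(U, X, rcond=None)[0]        # fix U, solve for V
    U = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T  # fix V, solve for U

print(np.linalg.norm(X - U @ V) / np.linalg.norm(X))  # ~0: subspace recovered
```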
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in the statistical sense), depending on problem scale, so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
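A hedged sketch of the kind of sparsity exploitation described: a block-angular Jacobian (independent per-output blocks plus shared columns) stored sparsely, so a Gauss-Newton step never forms a dense matrix. Sizes and values are illustrative, and the trust-region machinery is omitted.

```python
import numpy as np
from scipy.sparse import block_diag, csr_matrix, hstack
from scipy.sparse.linalg import lsqr

k, m, n_local, n_shared = 4, 30, 5, 3
rng = np.random.default_rng(1)
blocks = [csr_matrix(rng.standard_normal((m, n_local))) for _ in range(k)]
B = csr_matrix(rng.standard_normal((k * m, n_shared)))
J = hstack([block_diag(blocks), B])   # block-angular residual Jacobian
r = rng.standard_normal(k * m)        # residual vector

step = lsqr(J, -r)[0]                 # Gauss-Newton step: min ||J p + r||
print(step.shape)                     # (k * n_local + n_shared,)
```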
Exploiting the wavelet structure in compressed sensing MRI.
Chen, Chen; Huang, Junzhou
2014-12-01
Sparsity has been widely utilized in magnetic resonance imaging (MRI) to reduce k-space sampling. According to structured sparsity theories, fewer measurements are required for tree-sparse data than for data with standard sparsity alone. Intuitively, more accurate image reconstruction can therefore be achieved with the same number of measurements by exploiting the wavelet tree structure in MRI. A novel algorithm is proposed in this article to reconstruct MR images from undersampled k-space data. In contrast to conventional compressed sensing MRI (CS-MRI), which relies only on the sparsity of MR images in the wavelet or gradient domain, we exploit the wavelet tree structure to improve CS-MRI. This tree-based CS-MRI problem is decomposed into three simpler subproblems, each of which can be efficiently solved by an iterative scheme. Simulations and in vivo experiments demonstrate the significant improvement of the proposed method over conventional CS-MRI algorithms, and its feasibility on MR data compared to existing tree-based imaging algorithms.
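A minimal sketch of the conventional (non-tree) baseline the paper improves on, assuming PyWavelets is available: iterative shrinkage with a plain element-wise wavelet soft-threshold on a toy phantom. The tree-structured prior would replace that threshold with one acting on parent-child coefficient groups.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0        # toy phantom
mask = rng.random((64, 64)) < 0.4                        # random k-space samples
y = mask * np.fft.fft2(img)                              # undersampled data

x, lam = np.zeros_like(img), 0.05
for _ in range(50):
    x = x + np.real(np.fft.ifft2(y - mask * np.fft.fft2(x)))  # gradient step
    coeffs = pywt.wavedec2(x, "db2", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)   # soft threshold
    x = pywt.waverec2(pywt.array_to_coeffs(arr, slices,
                                           output_format="wavedec2"), "db2")
print(np.linalg.norm(x - img) / np.linalg.norm(img))
```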
Quantum computation with coherent spin states and the close Hadamard problem
NASA Astrophysics Data System (ADS)
Adcock, Mark R. A.; Høyer, Peter; Sanders, Barry C.
2016-04-01
We study a model of quantum computation based on the continuously parameterized yet finite-dimensional Hilbert space of a spin system. We explore the computational powers of this model by analyzing a pilot problem we refer to as the close Hadamard problem. We prove that the close Hadamard problem can be solved in the spin system model with arbitrarily small error probability in a constant number of oracle queries. We conclude that this model of quantum computation is suitable for solving certain types of problems. The model is effective for problems where symmetries between the structure of the information associated with the problem and the structure of the unitary operators employed in the quantum algorithm can be exploited.
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
Network exploitation using WAMI tracks
NASA Astrophysics Data System (ADS)
Rimey, Ray; Record, Jim; Keefe, Dan; Kennedy, Levi; Cramer, Chris
2011-06-01
Creating and exploiting network models from wide area motion imagery (WAMI) is an important task for intelligence analysis. Tracks of entities observed moving in the WAMI sensor data are extracted, then large numbers of tracks are studied over long time intervals to determine specific locations that are visited (e.g., buildings in an urban environment), what locations are related to other locations, and the function of each location. This paper describes several parts of the network detection/exploitation problem, and summarizes a solution technique for each: (a) Detecting nodes; (b) Detecting links between known nodes; (c) Node attributes to characterize a node; (d) Link attributes to characterize each link; (e) Link structure inferred from node attributes and vice versa; and (f) Decomposing a detected network into smaller networks. Experimental results are presented for each solution technique, and those are used to discuss issues for each problem part and its solution technique.
Linear decentralized systems with special structure. [for twin lift helicopters
NASA Technical Reports Server (NTRS)
Martin, C. F.
1982-01-01
Certain fundamental structures associated with linear systems having internal symmetries are outlined. It is shown that the theory of finite-dimensional algebras and their representations are closely related to such systems. It is also demonstrated that certain problems in the decentralized control of symmetric systems are equivalent to long-standing problems of linear systems theory. Even though the structure imposed arose in considering the problems of twin-lift helicopters, any large system composed of several identical intercoupled control systems can be modeled by a linear system that satisfies the constraints imposed. Internal symmetry can be exploited to yield new system-theoretic invariants and a better understanding of the way in which the underlying structure affects overall system performance.
Computational structural mechanics methods research using an evolving framework
NASA Technical Reports Server (NTRS)
Knight, N. F., Jr.; Lotts, C. G.; Gillian, R. E.
1990-01-01
Advanced structural analysis and computational methods that exploit high-performance computers are being developed in a computational structural mechanics research activity sponsored by the NASA Langley Research Center. These new methods are developed in an evolving framework and applied to representative complex structural analysis problems from the aerospace industry. An overview of the methods development environment is presented, and methods research areas are described. Selected application studies are also summarized.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
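For scale, the naive baseline solves each observation vector independently; the paper's combinatorial algorithm reorganizes these many small solves around shared active sets. A hedged sketch of the baseline with SciPy (random stand-in data):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = rng.random((40, 10))        # common design matrix
B = rng.random((40, 100))       # 100 observation vectors sharing A

# Independent nonnegative least squares per column -- the redundancy across
# columns (repeated active/passive sets) is what the fast algorithm exploits.
X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
print(X.shape)                  # (10, 100), all entries >= 0
```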
Finite element solution of optimal control problems with state-control inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1992-01-01
It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.
Dynamic programming and graph algorithms in computer vision.
Felzenszwalb, Pedro F; Zabih, Ramin
2011-04-01
Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
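As one concrete instance, the classic use of dynamic programming for stereo optimizes each scanline exactly under a disparity-smoothness model; a minimal sketch (toy cost, no occlusion handling) follows.

```python
import numpy as np

def scanline_stereo(left, right, max_disp, smooth=0.1):
    # DP over one scanline: state = disparity at pixel x; the transition
    # penalty discourages disparity jumps between neighboring pixels.
    n, D = len(left), max_disp + 1
    unary = np.array([[abs(left[x] - right[x - d]) if x >= d else 1e9
                       for d in range(D)] for x in range(n)])
    cost, back = unary[0].copy(), np.zeros((n, D), dtype=int)
    for x in range(1, n):
        new_cost = np.empty(D)
        for d in range(D):
            trans = cost + smooth * np.abs(np.arange(D) - d)
            back[x, d] = int(np.argmin(trans))
            new_cost[d] = unary[x, d] + trans[back[x, d]]
        cost = new_cost
    disp = [int(np.argmin(cost))]            # backtrack the optimal path
    for x in range(n - 1, 0, -1):
        disp.append(int(back[x, disp[-1]]))
    return disp[::-1]

print(scanline_stereo([0, 0, 5, 5, 5], [0, 5, 5, 5, 0], max_disp=2))
```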
The pseudo-Boolean optimization approach to form the N-version software structure
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.
2015-10-01
The problem of developing an optimal structure for an N-version software system is a very complex optimization problem. This makes deterministic optimization methods inappropriate for solving it, so exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design. These algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of using these algorithm modifications, owing to the reduced search space.
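A hedged sketch in the spirit of probability-vector search for pseudo-Boolean maximization (not the authors' exact MVP algorithm): sample bit strings from a probability vector and shift the probabilities toward the incumbent best solution.

```python
import numpy as np

def probability_vector_search(f, n, iters=200, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                       # sampling probability per bit
    best_x, best_f = None, -np.inf
    for _ in range(iters):
        x = rng.random(n) < p                 # sample a candidate bit string
        fx = f(x)
        if fx > best_f:
            best_x, best_f = x.copy(), fx
        # Nudge probabilities toward the best-so-far solution, keeping them
        # away from 0/1 to preserve exploration.
        p = np.clip(p + step * (best_x.astype(float) - p), 0.05, 0.95)
    return best_x, best_f

w = np.arange(1, 21)                          # toy objective: weighted OneMax
print(probability_vector_search(lambda x: float(w @ x), n=20)[1])
```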
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
R&D 100, 2016: Pyomo 4.0 - Python Optimization Modeling Objects
Hart, William; Laird, Carl; Siirola, John
2018-06-13
Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
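Pyomo models are ordinary Python; a minimal example (toy data, with GLPK assumed installed as the LP solver) looks like this:

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize, SolverFactory)

model = ConcreteModel()
model.x = Var(within=NonNegativeReals)
model.y = Var(within=NonNegativeReals)
model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)
model.demand = Constraint(expr=model.x + 2 * model.y >= 4)

# The algebraic form above is what lets a solver see and exploit structure.
SolverFactory("glpk").solve(model)
print(model.x.value, model.y.value)
```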
Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.
1989-01-01
The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.
Augmented neural networks and problem structure-based heuristics for the bin-packing problem
NASA Astrophysics Data System (ADS)
Kasap, Nihat; Agarwal, Anurag
2012-08-01
In this article, we report on a research project where we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems from the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality; the average gap between the obtained solution and the upper bound for all problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
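For context, a classic priority-rule heuristic for the BPP is first-fit decreasing, sketched below; the specific priority rules used in the paper may differ, and AugNN layers an iterative neural search on top of such rules.

```python
def first_fit_decreasing(items, capacity):
    # Sort items largest-first, then place each into the first bin that fits.
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])   # open a new bin
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1, 7, 3], capacity=10))
```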
An efficient dynamic load balancing algorithm
NASA Astrophysics Data System (ADS)
Lagaros, Nikos D.
2014-01-01
In engineering problems, randomness and uncertainties are inherent. Robust design procedures, formulated in the framework of multi-objective optimization, have been proposed in order to take these sources of randomness and uncertainty into account. These design procedures require orders of magnitude more computational effort than conventional analysis or optimum design processes, since a very large number of finite element analyses must be performed. It is therefore imperative to exploit the capabilities of computing resources in order to deal with this kind of problem. In particular, parallel computing can be implemented at the level of metaheuristic optimization, by exploiting the physical parallelization feature of the nondominated sorting evolution strategies method, as well as at the level of the repeated structural analyses required for assessing the behavioural constraints and for calculating the objective functions. In this study an efficient dynamic load balancing algorithm for optimum exploitation of available computing resources is proposed and, without loss of generality, is applied to computing the desired Pareto front. In such problems, computing the complete Pareto front with feasible designs only constitutes a very challenging task. The proposed algorithm achieves linear speedup factors and almost 100% speedup factor values with reference to the sequential procedure.
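The core of such a scheme, assigning each pending structural analysis to the next idle worker instead of pre-partitioning the work, can be sketched with Python's multiprocessing; the analysis function is a toy placeholder for a finite element run.

```python
from multiprocessing import Pool

def fe_analysis(design):
    # Stand-in for a finite element analysis with design-dependent runtime.
    return sum(x * x for x in design)

if __name__ == "__main__":
    designs = [[i, i + 1, i + 2] for i in range(100)]
    with Pool() as pool:
        # imap_unordered hands tasks to idle workers as they finish,
        # so uneven analysis times do not leave processors idle.
        results = list(pool.imap_unordered(fe_analysis, designs))
    print(len(results))
```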
Collaborative learning in networks.
Mason, Winter; Watts, Duncan J
2012-01-17
Complex problems in science, business, and engineering typically require some tradeoff between exploitation of known solutions and exploration for novel ones, where, in many cases, information about known solutions can also disseminate among individual problem solvers through formal or informal networks. Prior research on complex problem solving by collectives has found the counterintuitive result that inefficient networks, meaning networks that disseminate information relatively slowly, can perform better than efficient networks for problems that require extended exploration. In this paper, we report on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks. As expected, we found that collective exploration improved average success over independent exploration because good solutions could diffuse through the network. In contrast to prior work, however, we found that efficient networks outperformed inefficient networks, even in a problem space with qualitative properties thought to favor inefficient networks. We explain this result in terms of individual-level explore-exploit decisions, which we find were influenced by the network structure as well as by strategic considerations and the relative payoff between maxima. We conclude by discussing implications for real-world problem solving and possible extensions.
A microwave tomography strategy for structural monitoring
NASA Astrophysics Data System (ADS)
Catapano, I.; Crocco, L.; Isernia, T.
2009-04-01
The capability of electromagnetic waves to penetrate optically dense regions can be conveniently exploited to provide highly informative images of the internal status of man-made structures in a non-destructive and minimally invasive way. In this framework, as an alternative to the widely adopted radar techniques, Microwave Tomography approaches are worth considering. As a matter of fact, they may accurately reconstruct the permittivity and conductivity distributions of a given region from the knowledge of a set of incident fields and measurements of the corresponding scattered fields. As far as cultural heritage conservation is concerned, this allows one not only to detect the anomalies which can possibly damage the integrity and the stability of the structure, but also to characterize their morphology and electrical features, which are useful information to properly address repair actions. However, since a nonlinear and ill-posed inverse scattering problem has to be solved, proper regularization strategies and sophisticated data-processing tools have to be adopted to assure the reliability of the results. To pursue this aim, in the last years huge attention has been focused on the advantages introduced by diversity in data acquisition (multi-frequency/static/view data) [1,2] as well as on the analysis of the factors affecting the solution of an inverse scattering problem [3]. Moreover, it has been shown in [4] how the degree of nonlinearity of the relationship between the scattered field and the electromagnetic parameters of the targets can be changed by properly choosing the mathematical model adopted to formulate the scattering problem. Exploiting the above results, in this work we propose an imaging procedure in which the inverse scattering problem is formulated as an optimization problem, where the mathematical relationship between data and unknowns is expressed by means of a convenient integral-equation model and the sought solution is defined as the global minimum of a cost functional. In particular, a local minimization scheme is used, together with a pre-processing step devoted to preliminarily assessing the location and shape of the anomalies. The effectiveness of the proposed strategy has been preliminarily assessed by means of numerical examples concerning the diagnostics of masonry structures, which will be shown at the Conference. [1] O. M. Bucci, L. Crocco, T. Isernia, and V. Pascazio, "Subsurface inverse scattering problems: Quantifying, qualifying and achieving the available information," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 5, pp. 2527-2538, 2001. [2] R. Persico, R. Bernini, and F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the distorted Born approximation," IEEE Trans. Antennas Propag., vol. 53, no. 6, pp. 1875-1887, Jun. 2005. [3] I. Catapano, L. Crocco, M. D'Urso, and T. Isernia, "On the effect of support estimation and of a new model in 2-D inverse scattering problems," IEEE Trans. Antennas Propag., vol. 55, no. 6, pp. 1895-1899, 2007. [4] M. D'Urso, I. Catapano, L. Crocco, and T. Isernia, "Effective solution of 3-D scattering problems via series expansions: Applicability and a new hybrid scheme," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 3, pp. 639-648, 2007.
Visualizing Phylogenetic Treespace Using Cartographic Projections
NASA Astrophysics Data System (ADS)
Sundberg, Kenneth; Clement, Mark; Snell, Quinn
Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger datasets.
Some Problems of Exploitation of Jet Turbine Aircraft Engines of Lot Polish Air Lines,
1977-04-26
Foreign Technology Division, Wright-Patterson AFB, Ohio. By: Andrzej Slodownik, M. Eng. Source: Technika Lotnicza i Astronautyczna. English translation: report FTD-ID(RS)I-01475-77.
Forecasting Electricity Prices in an Optimization Hydrothermal Problem
NASA Astrophysics Data System (ADS)
Matías, J. M.; Bayón, L.; Suárez, P.; Argüelles, A.; Taboada, J.
2007-12-01
This paper presents an economic dispatch algorithm for a hydrothermal system within the framework of a competitive and deregulated electricity market. The optimization problem of one firm is described, whose objective function can be defined as its profit maximization. Since next-day price forecasting is a crucial aspect, this paper proposes a new, efficient and highly accurate next-day price forecasting method using a functional time series approach, trying to exploit the daily seasonal structure of the series of prices. For the optimization problem, an optimal control technique is applied and Pontryagin's theorem is employed.
On the use of cartographic projections in visualizing phylogenetic tree space
2010-01-01
Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger data sets. PMID:20529355
Exploiting Recurring Structure in a Semantic Network
NASA Technical Reports Server (NTRS)
Wolfe, Shawn R.; Keller, Richard M.
2004-01-01
With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.
CSM Testbed Development and Large-Scale Structural Applications
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.
1989-01-01
A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.
Accounting for Proof Test Data in a Reliability Based Design Optimization Framework
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Scotti, Stephen J.
2012-01-01
This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
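The underlying effect is easy to see in a Monte Carlo sketch: proof testing truncates the strength distribution of the fielded population, so in-service failure probability drops. Distributions and loads below are illustrative assumptions, not the paper's example problems.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
strength = rng.normal(100.0, 10.0, N)   # component strength
load = rng.normal(75.0, 12.0, N)        # random in-service load
proof_load = 95.0

passed = strength > proof_load          # only these components enter service
p_fail_all = np.mean(strength < load)
p_fail_proofed = np.mean(strength[passed] < load[passed])
print(p_fail_all, p_fail_proofed)       # proof-tested population is safer
```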
FaCSI: A block parallel preconditioner for fluid-structure interaction in hemodynamics
NASA Astrophysics Data System (ADS)
Deparis, Simone; Forti, Davide; Grandperrin, Gwenol; Quarteroni, Alfio
2016-12-01
Modeling Fluid-Structure Interaction (FSI) in the vascular system is mandatory to reliably compute mechanical indicators in vessels undergoing large deformations. In order to cope with the computational complexity of the coupled 3D FSI problem after discretizations in space and time, a parallel solution is often mandatory. In this paper we propose a new block parallel preconditioner for the coupled linearized FSI system obtained after space and time discretization. We name it FaCSI to indicate that it exploits the Factorized form of the linearized FSI matrix, the use of static Condensation to formally eliminate the interface degrees of freedom of the fluid equations, and the use of a SIMPLE preconditioner for saddle-point problems. FaCSI is built upon a block Gauss-Seidel factorization of the FSI Jacobian matrix and it uses ad-hoc preconditioners for each physical component of the coupled problem, namely the fluid, the structure and the geometry. In the fluid subproblem, after operating static condensation of the interface fluid variables, we use a SIMPLE preconditioner on the reduced fluid matrix. Moreover, to efficiently deal with a large number of processes, FaCSI exploits efficient single field preconditioners, e.g., based on domain decomposition or the multigrid method. We measure the parallel performances of FaCSI on a benchmark cylindrical geometry and on a problem of physiological interest, namely the blood flow through a patient-specific femoropopliteal bypass. We analyze the dependence of the number of linear solver iterations on the cores count (scalability of the preconditioner) and on the mesh size (optimality).
The Epistemological Challenges of Social Work Intervention Research
ERIC Educational Resources Information Center
Garrow, Eve E.; Hasenfeld, Yeheskel
2017-01-01
We argue that the dominance of an empiricist epistemology in social work research steers much of the research away from studying and explaining the structural forces that cause the conditions of oppression, exploitation, and social exclusion that are at the roots of the social problems addressed by the profession. It does so because it assumes…
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Camacho-Gómez, C.; Magdaleno, A.; Pereira, E.; Lorenzana, A.
2017-04-01
In this paper we tackle the problem of optimal design and location of Tuned Mass Dampers (TMDs) for structures subjected to earthquake ground motions, using a novel meta-heuristic algorithm. Specifically, the Coral Reefs Optimization (CRO) with Substrate Layer (CRO-SL) is proposed as a competitive co-evolution algorithm with different exploration procedures within a single population of solutions. The proposed approach is able to solve the TMD design and location problem by exploiting the combination of different types of searching mechanisms. This promotes a powerful evolutionary-like algorithm for optimization problems, which is shown to be very effective in this particular problem of TMD tuning. The proposed algorithm's performance has been evaluated and compared with several reference algorithms in two building models with two and four floors, respectively.
Transductive multi-view zero-shot learning.
Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Gong, Shaogang
2015-11-01
Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset/domain are biased when applied directly to the target dataset/domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and extract hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments in both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
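With NetworkX, the maximum weighted clique subproblem itself is one call; the toy graph below stands in for the paper's cluster-correspondence graph, where node weights would count tentative inlier matches.

```python
import networkx as nx

G = nx.Graph()
G.add_nodes_from([(0, {"weight": 3}), (1, {"weight": 2}),
                  (2, {"weight": 4}), (3, {"weight": 1})])
G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3)])

# Exact maximum weighted clique (exponential worst case, fine at this scale;
# node weights must be integers for this solver).
clique, weight = nx.max_weight_clique(G, weight="weight")
print(clique, weight)   # nodes {0, 1, 2} with total weight 9
```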
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
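A gradient-descent sketch of bounded-variation (total variation) denoising with a smoothed seminorm follows; the paper's primal-dual algorithms handle the nondifferentiable seminorm exactly, so this is only the simplest relative of that approach.

```python
import numpy as np

def tv_denoise(noisy, lam=0.15, tau=0.05, iters=300, eps=1e-6):
    # Minimize 0.5||u - f||^2 + lam * TV_eps(u) by gradient descent,
    # with TV smoothed via eps so the gradient exists everywhere.
    u = noisy.copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        dy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag
        # Divergence of the normalized gradient field (backward differences).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - noisy) - lam * div)
    return u

f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0          # blocky image
f += 0.5 * np.random.default_rng(0).standard_normal(f.shape)
u = tv_denoise(f)
print(f.std(), u.std())                              # denoised image is smoother
```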
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on the Judiciary.
The proceedings of this hearing on the exploitation of children deal with the problems of children and adolescents who run away from home. Family problems and abuse that cause these children to leave home are described by former runaway witnesses. Other testimony is included from several people who work with runaway youths and describe programs to…
Decomposition Algorithm for Global Reachability on a Time-Varying Graph
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2010-01-01
A decomposition algorithm has been developed for global reachability analysis on a space-time grid. By exploiting the upper block-triangular structure, the planning problem is decomposed into smaller subproblems, which is much more scalable than the original approach. Recent studies have proposed the use of a hot-air (Montgolfier) balloon for possible exploration of Titan and Venus because these bodies have thick haze or cloud layers that limit the science return from an orbiter, and the atmospheres would provide enough buoyancy for balloons. One of the important questions that needs to be addressed is what surface locations the balloon can reach from an initial location, and how long it would take. This is referred to as the global reachability problem, where the paths from starting locations to all possible target locations must be computed. The balloon could be driven with its own actuation, but its actuation capability is fairly limited. It would be more efficient to take advantage of the wind field and ride the wind that is much stronger than what the actuator could produce. It is possible to pose the path planning problem as a graph search problem on a directed graph by discretizing the spacetime world and the vehicle actuation. The decomposition algorithm provides reachability analysis of a time-varying graph. Because the balloon only moves in the positive direction in time, the adjacency matrix of the graph can be represented with an upper block-triangular matrix, and this upper block-triangular structure can be exploited to decompose a large graph search problem. The new approach consumes a much smaller amount of memory, which also helps speed up the overall computation when the computing resource has a limited physical memory compared to the problem size.
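The layer-by-layer sweep enabled by the upper block-triangular structure can be sketched in a few lines; the random wind-driven adjacency blocks are toy stand-ins for the planetary wind field.

```python
import numpy as np

T, n = 5, 4                        # time steps, surface locations
rng = np.random.default_rng(0)
# A[t][i, j] = True if the wind field allows moving i -> j from time t to t+1.
A = [rng.random((n, n)) < 0.4 for _ in range(T - 1)]

reach = np.zeros((T, n), dtype=bool)
reach[0, 0] = True                 # balloon starts at location 0
for t in range(T - 1):
    # One diagonal block of the upper block-triangular system at a time:
    # no global fixed-point iteration is ever needed.
    reach[t + 1] = (reach[t].astype(int) @ A[t].astype(int)) > 0
print(reach[-1])                   # locations reachable at the final time
```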
Poverty-Exploitation-Alienation.
ERIC Educational Resources Information Center
Bronfenbrenner, Martin
1980-01-01
Illustrates how knowledge derived from the discipline of economics can be used to help shed light on social problems such as poverty, exploitation, and alienation, and can help decision makers form policy to minimize these and similar problems. (DB)
NASA Astrophysics Data System (ADS)
Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng
2018-04-01
One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection in complex networks.
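The principle can be sketched with identical Kuramoto oscillators on a toy two-community graph: run the dynamics briefly, and phases lock within each dense block long before the weak bridge aligns the blocks with each other. Parameters are illustrative, not the paper's benchmark settings.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.zeros((6, 6))
A[:3, :3] = A[3:, 3:] = 1.0        # two dense 3-node communities
np.fill_diagonal(A, 0.0)
A[2, 3] = A[3, 2] = 1.0            # one weak bridge between them

theta = rng.uniform(0.0, 2 * np.pi, 6)
K, dt = 0.5, 0.05
for _ in range(40):                # short run: within-block locking only
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * K * coupling
print(np.round(np.mod(theta, 2 * np.pi), 2))  # phases cluster by community
```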
Automated global structure extraction for effective local building block processing in XCS.
Butz, Martin V; Pelikan, Martin; Llorà, Xavier; Goldberg, David E
2006-01-01
Learning Classifier Systems (LCSs), such as the accuracy-based XCS, evolve distributed problem solutions represented by a population of rules. During evolution, features are specialized, propagated, and recombined to provide increasingly accurate subsolutions. Recently, it was shown that, as in conventional genetic algorithms (GAs), some problems require efficient processing of subsets of features to find problem solutions efficiently. In such problems, standard variation operators of genetic and evolutionary algorithms used in LCSs suffer from potential disruption of groups of interacting features, resulting in poor performance. This paper introduces efficient crossover operators to XCS by incorporating techniques derived from competent GAs: the extended compact GA (ECGA) and the Bayesian optimization algorithm (BOA). Instead of simple crossover operators such as uniform crossover or one-point crossover, ECGA or BOA-derived mechanisms are used to build a probabilistic model of the global population and to generate offspring classifiers locally using the model. Several offspring generation variations are introduced and evaluated. The results show that it is possible to achieve performance similar to runs with an informed crossover operator that is specifically designed to yield ideal problem-dependent exploration, exploiting provided problem structure information. Thus, we create the first competent LCSs, XCS/ECGA and XCS/BOA, that detect dependency structures online and propagate corresponding lower-level dependency structures effectively without any information about these structures given in advance.
Wood, Stacey; Lichtenberg, Peter A.
2017-01-01
Financial exploitation (FE) of older adults is a social issue that is beginning to receive the attention that it deserves in the media thanks to some high profile cases, but empirical research and clinical guidelines on the topic are just emerging. Our review describes the significance of the problem, proposes a theoretical model for conceptualizing FE, and summarizes related areas of research that may be useful to consider in the understanding of FE. We discuss structural issues that have limited interventions in the past and make specific public policy recommendations in light of the largest intergenerational transfer of wealth in history. Finally, we discuss implications for clinical practice. PMID:28452630
NASA Astrophysics Data System (ADS)
Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.
2017-10-01
A quasi-real-time inversion strategy is presented for groove characterization in a conductive, non-ferromagnetic tube structure by exploiting eddy current testing (ECT) signals. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the LBE framework, an efficient training strategy combines feature extraction with a customized version of output space filling (OSF) adaptive sampling in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, yielding robust and accurate real-time inversion during the online phase.
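A minimal sketch of the offline/online split described above, using scikit-learn's PLSRegression and SVR; the synthetic signals, the number of latent components, and the SVR hyperparameters are assumptions standing in for the paper's ECT data and OSF-selected training set.

```python
# Hedged sketch: PLS feature extraction followed by SVR prediction.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))             # stand-in for ECT signal samples
y = X[:, :3] @ np.array([0.5, -1.0, 2.0])  # stand-in groove parameter

# Offline phase: learn a low-dimensional feature space, then a regressor.
pls = PLSRegression(n_components=5).fit(X, y)
svr = SVR(kernel="rbf", C=10.0).fit(pls.transform(X), y)

# Online phase: inverting a new signal is two cheap projections.
x_new = rng.normal(size=(1, 64))
print(svr.predict(pls.transform(x_new)))
```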
Aggarwal, Ankush; Sacks, Michael S
2016-08-01
Determining the biomechanical behavior of heart valve leaflet tissues in a noninvasive manner remains an important clinical goal. While advances in 3D imaging modalities have made in vivo valve geometric data available, optimal methods to exploit such information in order to obtain functional information remain to be established. Herein we present and evaluate a novel leaflet shape-based framework to estimate the biomechanical behavior of heart valves from surface deformations by exploiting tissue structure. We determined accuracy levels using an "ideal" in vitro dataset, in which the leaflet geometry, strains, mechanical behavior, and fibrous structure were known to a high level of precision. By utilizing a simplified structural model for the leaflet mechanical behavior, we were able to limit the number of parameters to be determined per leaflet to only two. This approach allowed us to dramatically reduce the computational time and easily visualize the cost function to guide the minimization process. We determined that the image resolution and the number of available imaging frames were important components in the accuracy of our framework. Furthermore, our results suggest that it is possible to detect differences in fiber structure using our framework, thus allowing an opportunity to diagnose asymptomatic valve diseases and begin treatment at their early stages. Lastly, we observed good agreement of the final resulting stress-strain response when an averaged fiber architecture was used. This suggests that population-averaged fiber structural data may be sufficient for the application of the present framework to in vivo studies, although clearly much work remains to extend the present approach to in vivo problems.
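Because the simplified structural model leaves only two parameters per leaflet, the inverse problem reduces to minimizing a misfit over a 2-D space whose cost landscape can even be scanned on a grid before optimization. A hedged sketch with a toy stand-in cost:

```python
# Hedged sketch: two-parameter inverse fit; the quadratic toy model stands
# in for the leaflet structural model and imaged surface deformations.
import numpy as np
from scipy.optimize import minimize

observed = np.array([1.8, 0.6])            # stand-in deformation measures

def predicted(params):
    c0, c1 = params                        # assumed two-parameter model
    return np.array([c0 + c1, c0 * c1])

cost = lambda p: np.sum((predicted(p) - observed) ** 2)
# A coarse grid scan of cost over (c0, c1) can visualize the landscape
# first, as the paper does, before local minimization.
res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)
```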
NASA Astrophysics Data System (ADS)
Han, D.; Cao, G.; Currell, M. J.
2016-12-01
Understanding the mechanism of salt water transport in response to the exploitation of deep freshwater has long been one of the major regional environmental hydrogeological problems and scientific challenges in the North China Plain. It is also the key to a correct understanding of the sources of deep groundwater pumpage. This study focuses on the Hengshui-Cangzhou region, an area with a typical vertical salt water distribution and high levels of groundwater exploitation, and integrates a variety of techniques from geology, hydrogeology, geophysics, hydrodynamics, hydrochemistry, and stable isotope analysis. Information about the problem will be determined using multiple lines of evidence, including field surveys of drilling and water sampling, as well as laboratory experiments and physical and numerical simulations. The project will characterize the migration of salt water bodies and their relationship with the geological structure and deep groundwater resources. The work will reveal the shape of the freshwater-saltwater interface; determine the mode and mechanism of hydrodynamic and salt transport; estimate the vertical migration time of salt water through a thick aquitard; and develop accurate hydrogeological conceptual models. The work will use variable-density groundwater flow and solute transport numerical models to simulate water and salt transport processes in vertical one-dimensional (typical bore) and two-dimensional (typical cross-section) space. Both inversion of the historical downward movement of saltwater caused by groundwater exploitation and examination of future saltwater migration trends under exploitation scenarios will be conducted, to quantitatively evaluate the impact of salt water migration on the deep groundwater body in the North China Plain. The research results will provide a scientific basis for the sustainable utilization of deep groundwater resources in this area.
Guided wave localization of damage via sparse reconstruction
NASA Astrophysics Data System (ADS)
Levine, Ross M.; Michaels, Jennifer E.; Lee, Sang Jun
2012-05-01
Ultrasonic guided waves are frequently applied for structural health monitoring and nondestructive evaluation of plate-like metallic and composite structures. Spatially distributed arrays of fixed piezoelectric transducers can be used to detect damage by recording and analyzing all pairwise signal combinations. By subtracting pre-recorded baseline signals, the effects due to scatterer interactions can be isolated. Given these residual signals, techniques such as delay-and-sum imaging are capable of detecting flaws, but do not exploit the expected sparse nature of damage. It is desired to determine the location of a possible flaw by leveraging the anticipated sparsity of damage; i.e., most of the structure is assumed to be damage-free. Unlike least-squares methods, L1-norm minimization techniques favor sparse solutions to inverse problems such as the one considered here of locating damage. Using this type of method, it is possible to exploit sparsity of damage by formulating the imaging process as an optimization problem. A model-based damage localization method is presented that simultaneously decomposes all scattered signals into location-based signal components. The method is first applied to simulated data to investigate sensitivity to both model mismatch and additive noise, and then to experimental data recorded from an aluminum plate with artificial damage. Compared to delay-and-sum imaging, results exhibit a significant reduction in both spot size and imaging artifacts when the model is reasonably well-matched to the data.
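A minimal sketch of the sparsity-promoting formulation: damage localization as L1-regularized regression against a dictionary of modeled scatterer signatures. The random dictionary below is a stand-in for the guided-wave propagation model; `Lasso` solves the l1 problem.

```python
# Hedged sketch: sparse damage localization by L1 minimization.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_pixels = 400, 900             # residual samples, image grid
A = rng.normal(size=(n_samples, n_pixels)) # stand-in scatterer signatures
x_true = np.zeros(n_pixels)
x_true[123] = 1.0                          # one damage site -> sparse truth
b = A @ x_true + 0.01 * rng.normal(size=n_samples)

# L1 regularization favors a sparse image, unlike least squares.
x_hat = Lasso(alpha=0.05).fit(A, b).coef_
print(np.argmax(np.abs(x_hat)))            # expected: 123
```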
Tri-track: free software for large-scale particle tracking.
Vallotton, Pascal; Olivier, Sandra
2013-04-01
The ability to correctly track objects in time-lapse sequences is important in many applications of microscopy. Individual object motions typically display a level of dynamic regularity reflecting the existence of an underlying physics or biology. Best results are obtained when this local information is exploited. Additionally, if the particle number is known to be approximately constant, a large number of tracking scenarios may be rejected on the basis that they are not compatible with a known maximum particle velocity. This represents information of a global nature, which should ideally be exploited too. Some time ago, we devised an efficient algorithm that exploited both types of information. The tracking task was reduced to a max-flow min-cost problem instance through a novel graph structure that comprised vertices representing objects from three consecutive image frames. The algorithm is explained here for the first time. A user-friendly implementation is provided, and the specific relaxation mechanism responsible for the method's effectiveness is uncovered. The software is particularly competitive for complex dynamics such as dense antiparallel flows, or in situations where object displacements are considerable. As an application, we characterize a remarkable vortex structure formed by bacteria engaged in interstitial motility.
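The max-flow min-cost reduction can be illustrated on a two-frame toy instance (the actual Tri-track graph spans three consecutive frames and a richer vertex structure). A hedged sketch with networkx:

```python
# Hedged sketch: frame-to-frame particle assignment as max-flow/min-cost.
import networkx as nx

prev = {"a": (0.0, 0.0), "b": (5.0, 0.0)}      # positions in frame t
curr = {"c": (0.4, 0.1), "d": (5.2, -0.2)}     # positions in frame t+1

G = nx.DiGraph()
for p in prev:
    G.add_edge("s", p, capacity=1, weight=0)
for q in curr:
    G.add_edge(q, "t", capacity=1, weight=0)
for p, (px, py) in prev.items():
    for q, (qx, qy) in curr.items():
        # Integer squared-distance costs; unit capacity enforces a
        # one-to-one matching, i.e., a constant particle count.
        cost = int(100 * ((px - qx) ** 2 + (py - qy) ** 2))
        G.add_edge(p, q, capacity=1, weight=cost)

flow = nx.max_flow_min_cost(G, "s", "t")
links = [(p, q) for p in prev for q in curr if flow[p][q] == 1]
print(links)   # expected: [('a', 'c'), ('b', 'd')]
```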
Application of a distributed network in computational fluid dynamic simulations
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish
1994-01-01
A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using the parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
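For a flavor of the nearest-neighbor exchange being described, here is a hedged sketch in mpi4py rather than PVM; each rank trades boundary rows of its slab with its two neighbors. The rank layout and array sizes are illustrative.

```python
# Hedged sketch: nearest-neighbor halo exchange, run under mpiexec.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

local = np.full((8, 16), float(rank))   # this rank's slab of the grid
halo_l = np.zeros(16)                   # ghost row from the left neighbor
halo_r = np.zeros(16)                   # ghost row from the right neighbor

# Each rank sends its edge rows and receives the neighbors' edge rows;
# PROC_NULL makes the boundary ranks' sends/receives no-ops.
comm.Sendrecv(local[0], dest=left, recvbuf=halo_r, source=right)
comm.Sendrecv(local[-1], dest=right, recvbuf=halo_l, source=left)
print(rank, halo_l[0], halo_r[0])
```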
[Problems in development of agriculture-animal husbandry ecotone and its countermeasures].
Baoyin, Taogetao; Bai, Yongfei
2004-02-01
Problems in the development of Duolun, a typical agriculture-animal husbandry ecotone, and corresponding countermeasures are discussed in this paper. The economic structure of Duolun is not rational: the county should develop industry and commerce, limit the scope of agriculture and animal husbandry, and actively increase their efficiency. The structure of land use is also not rational, and the main countermeasures are to increase the area of forestland and grassland and to decrease the cultivated area. Regarding resource use, the main countermeasures are to exploit water resources rationally and to bring into play the mutually beneficial resource advantages of agriculture and animal husbandry. Ecological environment construction is the foundation of sustainable development of the national economy in the agriculture-animal husbandry ecotone.
Research directions in large scale systems and decentralized control
NASA Technical Reports Server (NTRS)
Tenney, R. R.
1980-01-01
Control theory provides a well-established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control-theoretic and knowledge-based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.
Solution of a large hydrodynamic problem using the STAR-100 computer
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Howser, L. M.
1976-01-01
A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speedup of the STAR-100 over the CDC 6600 program is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR-100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR-100.
Exploration versus exploitation in space, mind, and society
Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.
2015-01-01
Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706
Cole, Jennifer; Sprang, Ginny; Lee, Robert; Cohen, Judith
2016-01-01
This study examined the demographic features, trauma profiles, clinical severity indicators, problem behaviors, and service utilization characteristics of youth victims of commercial sexual exploitation (CSE) compared with a matched sample of sexually abused/assaulted youth who were not exploited in commercial sex. Secondary data analysis and propensity score matching were used to select a sample of 215 help-seeking youth who were exploited in prostitution (n = 43) or who were sexually abused/assaulted but not exploited in prostitution (n = 172) from the National Child Traumatic Stress Network Core Data Set (NCTSN CDS). Propensity Score Matching was used to select a comparison sample based on age, race, ethnicity, and primary residence. Statistically significant differences were noted between the groups on standardized (e.g., UCLA Posttraumatic Stress Disorder Reaction Index [PTSD-RI], Child Behavior Checklist [CBCL]) and other measures of emotional and behavioral problems (e.g., avoidance and hyperarousal symptoms, dissociation, truancy, running away, conduct disorder, sexualized behaviors, and substance abuse). This study provides useful insight into the symptom and service utilization profiles of youth exploited in commercial sex as compared with youth with other types of sexually exploitive experiences. Targeted screening and event-sensitive measures are recommended to more accurately identify youth exploited in commercial sex. More research is needed to determine if and what modifications to trauma therapies may be required to address the more severe symptomatology and behavior problems associated with youth exploited in commercial sex.
Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization into some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
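A hedged sketch of the core construction: build the "ideal" kernel from labels (entries equal 1 exactly when two points share a class) and shift a base kernel toward it. The convex combination below is an illustrative stand-in for the paper's regularized learning problems, not its exact formulation.

```python
# Hedged sketch: label-informed kernel via an ideal-kernel shift.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

K = rbf_kernel(X)                               # base similarity
K_ideal = (y[:, None] == y[None, :]).astype(float)
gamma = 0.3                                     # assumed regularization strength
K_reg = (1 - gamma) * K + gamma * K_ideal       # label-informed kernel

# Alignment with the ideal kernel should increase after regularization.
align = lambda A, B: (A * B).sum() / (np.linalg.norm(A) * np.linalg.norm(B))
print(align(K, K_ideal), align(K_reg, K_ideal))
```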
Rational Exploitation and Utilization of Groundwater in the Jiangsu Coastal Area
NASA Astrophysics Data System (ADS)
Kang, B.; Lin, X.
2017-12-01
The Jiangsu coastal area is located on the southeast coast of China and is a new industrial base and an important coastal Land Resources Development Zone of China. In areas with strong human exploitation activities, regional groundwater evolution is markedly affected by human activities. In order to fundamentally solve the environmental geological problems caused by groundwater exploitation, we must establish the forming conditions of the regional groundwater hydrodynamic field and the impact of human activities on the evolution of the hydrodynamic field and on hydrogeochemical evolution. On this basis, scientific management and rational exploitation of the regional groundwater resources can be achieved. Taking the coastal area of Jiangsu as the research area, we investigate and analyze the regional hydrogeological conditions. A numerical simulation model of groundwater flow was established using hydrodynamic, chemical, and isotopic methods, the conditions of water flow, and the influence of the hydrodynamic field on the hydrochemical field. We predict the evolution of regional groundwater dynamics under the influence of human activities and climate change, and we evaluate the influence of this evolution on the environmental geological problems caused by groundwater exploitation under various conditions. We reach the following conclusions. Three optimal groundwater exploitation schemes were established, with groundwater salinization taken as the primary control condition. A surrogate model, built with the BP neural network method, was proposed to relate groundwater exploitation to water level changes; a genetic algorithm was then used to compute the optimal solutions. The three optimal exploitation schemes were submitted to the local water resource management authority: the first scheme addresses the groundwater salinization problem, the second focuses on dual water supply, and the third concerns emergency water supply. This is the first time that an environmental problem has been taken as a water management objective in this coastal area.
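A hedged sketch of the surrogate-plus-GA loop described above: an MLP stands in for the BP-network surrogate of the groundwater model, and a toy genetic algorithm searches pumping rates under an assumed salinization (drawdown) limit. All rates, coefficients, and limits are invented for illustration.

```python
# Hedged sketch: surrogate model + genetic algorithm for pumping schemes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pump = rng.uniform(0.0, 1.0, size=(300, 3))       # pumping rates, 3 wells
drawdown = pump @ np.array([0.8, 1.2, 0.5]) + 0.05 * rng.normal(size=300)
surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0).fit(pump, drawdown)

def fitness(q):       # maximize total pumping, penalize predicted drawdown
    d = surrogate.predict(q.reshape(1, -1))[0]
    return q.sum() - 10.0 * max(0.0, d - 1.0)     # 1.0 = assumed safe limit

popn = rng.uniform(0.0, 1.0, size=(40, 3))
for gen in range(50):                             # toy GA: select + mutate
    scores = np.array([fitness(q) for q in popn])
    parents = popn[np.argsort(scores)[-20:]]
    children = parents + 0.05 * rng.normal(size=parents.shape)
    popn = np.clip(np.vstack([parents, children]), 0.0, 1.0)
print(popn[np.argmax([fitness(q) for q in popn])])
```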
What Is Going on Inside the Arrows? Discovering the Hidden Springs in Causal Models
Murray-Watters, Alexander; Glymour, Clark
2016-01-01
Using Gebharter's (2014) representation, we consider aspects of the problem of discovering the structure of unmeasured sub-mechanisms when the variables in those sub-mechanisms have not been measured. Exploiting an early insight of Sober's (1998), we provide a correct algorithm for identifying latent, endogenous structure—sub-mechanisms—for a restricted class of structures. The algorithm can be merged with other methods for discovering causal relations among unmeasured variables, and feedback relations between measured variables and unobserved causes can sometimes be learned. PMID:27313331
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
Ecological and evolutionary consequences of explicit spatial structure in exploiter-victim systems
NASA Astrophysics Data System (ADS)
Klopfer, Eric David
One class of spatial model which has been widely used in ecology has been termed "pseudo-spatial models" and classically employs various types of aggregation in studying the coexistence of competing parasitoids. Yet little is known about the relative effects of each of these aggregation behaviors. Thus, in Chapter 1 I examine three types of aggregation and explore their relative strengths in promoting coexistence of two competing parasitoids. A striking shortcoming of spatial models in ecology to date is their relative lack of use in investigating problems on the evolutionary, as opposed to ecological, time scale. Consequently, in Chapter 2 I start with a classic problem of the evolutionary time scale: the evolution of virulence and predation rates. Debate about this problem has continued through several decades, yet many instances are not adequately explained by current models. In this study I explored the effect of explicit spatial structure on exploitation rates by comparing a cellular automata (CA) exploiter-victim model which incorporates local dynamics to a metapopulation model which does not include such dynamics. One advantage of CA models is that they are defined by simple rules rather than the often complex equations of other types of spatial models. This is an extremely useful attribute when one wants to convey the results of models to an audience with an applied bent that is often uncomfortable with hard-to-understand equations. Thus, in Chapter 3, through the use of CA models I show that there are spatial phenomena which alter the impact of introduced predators and that these phenomena are potentially important in the implementation of biocontrol programs. The relatively recent incorporation of spatial models into the ecological literature has left most ecologists and evolutionary biologists without the ability to understand, let alone employ, spatial models in evolutionary problems. In order to give the next generation of ecologists a better understanding of these models, in Chapter 4 I present an interactive tutorial in which students are able to explore the most well studied of these models (the evolution of cooperation in a spatial environment).
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.
Symmetric Trajectories for the 2N-Body Problem with Equal Masses
NASA Astrophysics Data System (ADS)
Terracini, Susanna; Venturelli, Andrea
2007-06-01
We consider the problem of $2N$ bodies of equal masses in $\mathbb{R}^3$ for the Newtonian-like weak-force potential $r^{-\sigma}$, and we prove the existence of a family of collision-free nonplanar and nonhomographic symmetric solutions that are periodic modulo rotations. In addition, the rotation number with respect to the vertical axis ranges in a suitable interval. These solutions have the hip-hop symmetry, a generalization of that introduced in [19], for the case of many bodies and taking account of a topological constraint. The argument exploits the variational structure of the problem, and is based on the minimization of the Lagrangian action on a given class of paths.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
Interactive visualization tools for the structural biologist.
Porebski, Benjamin T; Ho, Bosco K; Buckle, Ashley M
2013-10-01
In structural biology, management of a large number of Protein Data Bank (PDB) files and raw X-ray diffraction images often presents a major organizational problem. Existing software packages that manipulate these file types were not designed for these kinds of file-management tasks. This is typically encountered when browsing through a folder of hundreds of X-ray images, with the aim of rapidly inspecting the diffraction quality of a data set. To solve this problem, a useful functionality of the Macintosh operating system (OSX) has been exploited that allows custom visualization plugins to be attached to certain file types. Software plugins have been developed for diffraction images and PDB files, which in many scenarios can save considerable time and effort. The direct visualization of diffraction images and PDB structures in the file browser can be used to identify key files of interest simply by scrolling through a list of files.
A mathematical statement of the problem of N-version software system design
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.
2015-10-01
N-version programming, a methodology for the design of fault-tolerant software systems, allows such design tasks to be solved successfully. The N-version programming approach is effective because the system is constructed out of several concurrently executed versions of a software module, written to meet the same specification but by different programmers. The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate; exploiting heuristic strategies is more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality.
An exploratory model of girls' vulnerability to commercial sexual exploitation in prostitution.
Reid, Joan A
2011-05-01
Due to inaccessibility of child victims of commercial sexual exploitation, the majority of emergent research on the problem lacks theoretical framing or sufficient data for quantitative analysis. Drawing from Agnew's general strain theory, this study utilized structural equation modeling to explore: whether caregiver strain is linked to child maltreatment, if experiencing maltreatment is associated with risk-inflating behaviors or sexual denigration of self/others, and if these behavioral and psychosocial dysfunctions are related to vulnerability to commercial sexual exploitation. The proposed model was tested with data from 174 predominately African American women, 12% of whom indicated involvement in prostitution while a minor. Findings revealed child maltreatment worsened with increased caregiver strain. Experiencing child maltreatment was linked to running away, initiating substance use at earlier ages, and higher levels of sexual denigration of self/others. Sexual denigration of self/others was significantly related to the likelihood of prostitution as a minor. The network of variables in the model accounted for 34% of the variance in prostitution as a minor.
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
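The key trick, deriving each pixel's band order from an already-decoded neighbor so that no ordering overhead needs to be transmitted, can be sketched as follows; the smooth synthetic cube and the simple differencing predictor are assumptions simplifying the paper's scheme.

```python
# Hedged sketch: neighbor-derived spectral re-ordering before prediction.
import numpy as np

def residuals(cube):
    """cube: (rows, cols, bands). Re-order each pixel's bands by the left
    neighbor's sorted band values (known to the decoder), then difference
    along the near-monotone re-ordered spectrum."""
    rows, cols, bands = cube.shape
    out = np.zeros_like(cube)
    for r in range(rows):
        for c in range(1, cols):
            order = np.argsort(cube[r, c - 1])   # reproducible by decoder
            spec = cube[r, c][order]
            out[r, c][0] = spec[0]
            out[r, c][1:] = np.diff(spec)        # small residuals
    return out

rng = np.random.default_rng(0)
base = rng.random(32)                            # shared spectral pattern
cube = base[None, None, :] * (1 + 0.1 * rng.random((8, 8, 1)))
# Mean residual magnitude vs. plain inter-band differencing:
print(np.abs(residuals(cube))[:, 1:].mean(), np.abs(np.diff(cube)).mean())
```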
Exploiting Quantum Resonance to Solve Combinatorial Problems
NASA Technical Reports Server (NTRS)
Zak, Michail; Fijany, Amir
2006-01-01
Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.
Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas
2017-01-01
As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimum for the system is a complex problem, and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely coupled sub-problems, each of which may be modularly formulated by differing departments and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.
Genetic Network Programming with Reconstructed Individuals
NASA Astrophysics Data System (ADS)
Ye, Fengming; Mabu, Shingo; Wang, Lutao; Eto, Shinji; Hirasawa, Kotaro
A lot of research on evolutionary computation has been done, and significant classical methods such as Genetic Algorithms (GA), Genetic Programming (GP), Evolutionary Programming (EP), and Evolution Strategies (ES) have been studied. Recently, a new approach named Genetic Network Programming (GNP) has been proposed. GNP can evolve itself and find the optimal solution. It is based on the idea of Genetic Algorithms and uses the data structure of directed graphs. Many papers have demonstrated that GNP can deal with complex problems in dynamic environments very efficiently and effectively. As a result, GNP has recently received more and more attention and is used in many different areas, such as data mining, extracting trading rules of stock markets, and elevator supervised control systems, and it has obtained some outstanding results. Focusing on GNP's distinctive expressive ability based on its graph structure, this paper proposes a method named Genetic Network Programming with Reconstructed Individuals (GNP-RI). The aim of GNP-RI is to balance the exploitation and exploration of GNP, that is, to strengthen the exploitation ability by using the exploited information extensively during the evolution process of GNP and finally obtain better performance than GNP. In the proposed method, the worse individuals are reconstructed and enhanced by elite information before undergoing genetic operations (mutation and crossover). The enhancement of worse individuals mimics the maturing phenomenon in nature, where bad individuals can become smarter after receiving a good education. In this paper, GNP-RI is applied to the tile-world problem, which is an excellent benchmark for evaluating the proposed architecture. The performance of GNP-RI is compared with that of conventional GNP. The simulation results show some advantages of GNP-RI, demonstrating its superiority over conventional GNP.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, James; Carrington, Tucker
In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier's symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.
Modeling the work of the dispatching service of a high-rise building as a queuing system
NASA Astrophysics Data System (ADS)
Dement'eva, Marina; Dement'eva, Anastasiya
2018-03-01
The article presents the results of calculating the performance indicators of the dispatching service of a high-rise building, modeled as a queuing system with an unlimited queue. The calculation was carried out for three models: a single control room with a general service brigade, a single control room with specialized services, and several dispatch centers with specialized services. The aim of the work was to investigate the influence of the structural scheme of a high-rise building's dispatching service on operating costs and on the time to process and fulfill service requests. The problems of high-rise construction and their impact on the complexity of building exploitation are analyzed, along with the composition of exploitation activities for high-rise buildings. The relevance of the study is justified by the need to reconsider the role of dispatching services in the structure of building quality management. The dispatching service is evolving from the lower level of management of individual engineering systems into the main link in the centralized automated management of the exploitation of high-rise buildings. With the transition to market relations, profitability becomes one of the main criteria of the effectiveness of the dispatching service's organization. A mathematical model for assessing the efficiency of the dispatching service against a set of quality-of-service indicators is proposed. The structure of operating costs is presented. A decision-making algorithm is given for choosing the optimal structural scheme of the dispatching service of a high-rise building.
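The entry's queuing analysis can be reproduced in miniature with the standard Erlang C formulas for an M/M/c queue with unlimited queue length. The arrival and service rates below are illustrative, and comparing a pooled multi-dispatcher desk against specialized single-dispatcher desks mirrors the three organizational schemes.

```python
# Hedged sketch: M/M/c performance indicators via Erlang C.
import math

def mmc_metrics(lam, mu, c):
    """Return (probability of waiting, mean wait, mean queue length)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # server utilization
    assert rho < 1.0, "queue is unstable"
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    p_wait = a**c / (math.factorial(c) * (1 - rho)) * p0   # Erlang C
    wq = p_wait / (c * mu - lam)      # mean waiting time in queue
    lq = lam * wq                     # mean queue length (Little's law)
    return p_wait, wq, lq

# One pooled desk with 3 dispatchers vs. one specialized desk taking a
# third of the request stream (illustrative rates, requests per hour):
print(mmc_metrics(lam=12.0, mu=5.0, c=3))
print(mmc_metrics(lam=4.0, mu=5.0, c=1))
```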
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and poses challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits an interesting connection between the optimization problem underlying USIV and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148
NASA Astrophysics Data System (ADS)
Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.
2016-12-01
Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem, i.e., the posterior probability density, is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem; therefore, we also aim to identify a low-dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using "snapshots" from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that, using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
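A hedged sketch of the POD step described above: collect state snapshots column-wise, extract an orthonormal basis by SVD, and truncate by an energy criterion. The synthetic low-rank snapshot matrix is an assumption standing in for forward-model solves.

```python
# Hedged sketch: POD basis from snapshots via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_snaps = 5000, 60
modes = rng.normal(size=(n_state, 4))              # hidden low-rank structure
snapshots = (modes @ rng.normal(size=(4, n_snaps))
             + 1e-3 * rng.normal(size=(n_state, n_snaps)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1        # keep 99.9% of energy
basis = U[:, :r]                                   # POD basis, n_state x r
print(r, basis.shape)                              # expect r close to 4

# A full state x is then approximated by basis @ (basis.T @ x); DEIM would
# additionally select interpolation rows to evaluate nonlinear terms cheaply.
```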
Decision-Theoretic Control of Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Washington, Richard; Bernstein, Daniel S.; Mouaddib, Abdel-Illah; Morris, Robert (Technical Monitor)
2003-01-01
Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We describe two decision-theoretic approaches to maximize the productivity of planetary rovers: one based on adaptive planning and the other on hierarchical reinforcement learning. Both approaches map the problem into a Markov decision problem and attempt to solve a large part of the problem off-line, exploiting the structure of the plan and independence between plan components. We examine the advantages and limitations of these techniques and their scalability.
A Primer on Foraging and the Explore/Exploit Trade-Off for Psychiatry Research.
Addicott, M A; Pearson, J M; Sweitzer, M M; Barack, D L; Platt, M L
2017-09-01
Foraging is a fundamental behavior, and many types of animals appear to have solved foraging problems using a shared set of mechanisms. Perhaps the most common foraging problem is the choice between exploiting a familiar option for a known reward and exploring unfamiliar options for unknown rewards: the so-called explore/exploit trade-off. This trade-off has been studied extensively in behavioral ecology and computational neuroscience, but is relatively new to the field of psychiatry. Explore/exploit paradigms can offer psychiatry research a new approach to studying motivation, outcome valuation, and effort-related processes, which are disrupted in many mental and emotional disorders. In addition, the explore/exploit trade-off encompasses elements of risk-taking and impulsivity (common behaviors in psychiatric disorders) and provides a novel framework for understanding these behaviors within an ecological context. Here we explain relevant concepts and some common paradigms used to measure explore/exploit decisions in the laboratory, review clinically relevant research on the neurobiology and neuroanatomy of explore/exploit decision making, and discuss how computational psychiatry can benefit from foraging theory.
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
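A hedged sketch of the "continuous analogue" idea: treat the optimum as an equilibrium of a gradient-flow ODE and hand the problem to an adaptive integrator. The toy exponential-decay fit below omits the steady-state constraint and retraction machinery of the paper.

```python
# Hedged sketch: optimization by gradient flow with an adaptive ODE solver.
import numpy as np
from scipy.integrate import solve_ivp

data_t = np.linspace(0.0, 1.0, 20)
data_y = 2.0 * np.exp(-1.5 * data_t)               # synthetic observations

def grad_flow(_, p):
    """Right-hand side -grad F(p) for F(p) = sum (a*exp(-k*t) - y)^2."""
    a, k = p
    e = np.exp(-k * data_t)
    resid = a * e - data_y
    return [-2 * np.sum(resid * e),                # -dF/da
            -2 * np.sum(resid * (-a * data_t * e))]  # -dF/dk

# Equilibria of this ODE are critical points of F; integrate to find one.
sol = solve_ivp(grad_flow, (0.0, 50.0), [1.0, 1.0], method="LSODA")
print(sol.y[:, -1])                                # expect close to (2, 1.5)
```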
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
A Matrix-Free Algorithm for Multidisciplinary Design Optimization
NASA Astrophysics Data System (ADS)
Lambe, Andrew Borean
Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.
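A hedged sketch of the matrix-free ingredient: supply the optimizer only with operator actions (here a Hessian-vector product) through SciPy's LinearOperator and solve for a Newton-type step with conjugate gradients. The diagonal operator is a stand-in for the aerostructural systems involved.

```python
# Hedged sketch: a matrix-free step computation; the matrix is never formed.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 10_000
diag = np.linspace(1.0, 10.0, n)        # stand-in for a Hessian's action

def hess_vec(v):                        # only mat-vecs are ever available
    return diag * v

H = LinearOperator((n, n), matvec=hess_vec)
g = np.ones(n)                          # gradient at the current iterate
step, info = cg(H, -g)                  # Newton-type step, matrix-free
print(info, float(np.abs(H.matvec(step) + g).max()))  # info == 0: converged
```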
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
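As an illustration of a super-linearly convergent ARE iteration of the kind discussed, the following sketch implements the classical Newton-Kleinman scheme (a quadratically convergent sequence of Lyapunov solves); it is not Bierman and Sidhu's square-root algorithm, and the stabilizing initial gain `K0` is an assumption of the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, Q, R, K0, iters=10):
    """Solve A'X + XA - XBR^{-1}B'X + Q = 0; K0 must stabilize A - B K0."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # Lyapunov step: Acl' X + X Acl = -(Q + K' R K)
        X = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ X)        # updated feedback gain
    return X

A = np.array([[0.0, 1.0], [0.0, 0.0]])         # double integrator
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
X = newton_kleinman(A, B, Q, R, K0=np.array([[1.0, 1.0]]))
resid = A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q
print(np.max(np.abs(resid)))                   # ~1e-12: ARE residual vanishes
```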
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the ℓ1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
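The core ℓ1 step is easy to demonstrate in a flat, toy setting: with an orthonormal transform standing in for the spherical wavelets (a DCT here, which is an assumption of this sketch), the synthesis denoising problem min_a ||y - Ψa||² + λ||a||₁ is solved exactly by soft-thresholding the transform coefficients.

```python
import numpy as np
from scipy.fft import dct, idct

def denoise_l1(y, lam):
    a = dct(y, norm='ortho')                          # analysis coefficients
    a = np.sign(a) * np.maximum(np.abs(a) - lam, 0)   # soft threshold (prox of l1)
    return idct(a, norm='ortho')                      # synthesis

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.cos(2 * np.pi * 4 * t)                     # sparse in the DCT domain
noisy = clean + 0.3 * rng.standard_normal(t.size)
# The denoised signal is closer to the truth than the noisy observation:
print(np.linalg.norm(denoise_l1(noisy, 0.5) - clean) < np.linalg.norm(noisy - clean))
```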
Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis
NASA Astrophysics Data System (ADS)
Lozovanu, D.; Pickl, S. W.; Weber, G.-W.
2004-08-01
This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.
NASA Astrophysics Data System (ADS)
Bouter, Anton; Alderliesten, Tanja; Bosman, Peter A. N.
2017-02-01
Taking a multi-objective optimization approach to deformable image registration has recently gained attention, because such an approach removes the requirement of manually tuning the weights of all the involved objectives. Especially for problems that require large complex deformations, this is a non-trivial task. From the resulting Pareto set of solutions one can then much more insightfully select a registration outcome that is most suitable for the problem at hand. To serve as an internal optimization engine, currently used multi-objective algorithms are competent, but rather inefficient. In this paper we largely improve upon this by introducing a multi-objective real-valued adaptation of the recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) for discrete optimization. In this work, GOMEA is tailored specifically to the problem of deformable image registration to obtain substantially improved efficiency. This improvement is achieved by exploiting a key strength of GOMEA: iteratively improving small parts of solutions, allowing the impact of such updates on the objectives at hand to be exploited faster through partial evaluations. We performed experiments on three registration problems. In particular, an artificial problem containing a disappearing structure, a pair of pre- and post-operative breast CT scans, and a pair of breast MRI scans acquired in prone and supine position were considered. Results show that compared to the previously used evolutionary algorithm, GOMEA obtains a speed-up of up to a factor of 1600 on the tested registration problems while achieving registration outcomes of similar quality.
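The partial-evaluation mechanism is simple to illustrate: when the objective decomposes into local terms, changing a few variables requires re-evaluating only the affected terms. The sketch below is a generic toy (the chain-structured `local_term` is an assumption), not the registration objective itself.

```python
import numpy as np

def local_term(x, i):                 # cost coupling neighbours i and i+1
    return (x[i] - x[i + 1]) ** 2

def full_eval(x):
    return sum(local_term(x, i) for i in range(len(x) - 1))

x = np.random.default_rng(1).standard_normal(1000)
f = full_eval(x)                      # one full O(n) evaluation up front

i = 500                               # propose changing a single variable
affected = [i - 1, i]                 # only these two terms involve x[i]
old = sum(local_term(x, j) for j in affected)
x[i] = 0.0
f += sum(local_term(x, j) for j in affected) - old   # O(1) partial update
assert np.isclose(f, full_eval(x))    # matches a full re-evaluation
```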
ERIC Educational Resources Information Center
Montgomery-Devlin, Jacqui
2008-01-01
The present paper provides an overview of child sexual exploitation in Northern Ireland and related issues. It focuses on Barnardo's response to the problem of sexual exploitation and sets it in both a historical and a contemporary context. The paper considers the importance of recognising exploitation as child abuse and addresses specific myths…
A novel numerical framework for self-similarity in plasticity: Wedge indentation in single crystals
NASA Astrophysics Data System (ADS)
Juul, K. J.; Niordson, C. F.; Nielsen, K. L.; Kysar, J. W.
2018-03-01
A novel numerical framework for analyzing self-similar problems in plasticity is developed and demonstrated. Self-similar problems of this kind include processes such as stationary cracks, void growth, indentation etc. The proposed technique offers a simple and efficient method for handling this class of complex problems by avoiding issues related to traditional Lagrangian procedures. Moreover, the proposed technique allows for focusing the mesh in the region of interest. In the present paper, the technique is exploited to analyze the well-known wedge indentation problem of an elastic-viscoplastic single crystal. However, the framework may be readily adapted to any constitutive law of interest. The main focus herein is the development of the self-similar framework, while the indentation study serves primarily as verification of the technique by comparing to existing numerical and analytical studies. In this study, the three most common metal crystal structures will be investigated, namely the face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close packed (HCP) crystal structures, where the stress and slip rate fields around the moving contact point singularity are presented.
Development problem analysis of correlation leak detector’s software
NASA Astrophysics Data System (ADS)
Faerman, V. A.; Avramchuk, V. S.; Marukyan, V. M.
2018-05-01
In this article, the practical application and structure of correlation leak detector software are studied and the task of designing it is analyzed. The first part of the paper shows why developing correlation leak detectors is worthwhile for improving the operating efficiency of public utilities. The functional structure of correlation leak detectors is analyzed and the tasks of their software are defined. The second part examines several steps in the development of the software package – requirements gathering, definition of the program structure and creation of the software concept – in the context of experience gained with a hardware-software prototype of a correlation leak detector.
Beyond union of subspaces: Subspace pursuit on Grassmann manifold for data representation
Shen, Xinyue; Krim, Hamid; Gu, Yuantao
2016-03-01
Discovering the underlying structure of a high-dimensional signal or big data has always been a challenging topic, and has become harder to tackle especially when the observations are exposed to arbitrary sparse perturbations. In this paper, built on the model of a union of subspaces (UoS) with sparse outliers and inspired by a basis pursuit strategy, we exploit the fundamental structure of a Grassmann manifold, and propose a new technique of pursuing the subspaces systematically by solving a non-convex optimization problem using the alternating direction method of multipliers. This problem as noted is further complicated by non-convex constraints on the Grassmann manifold, as well as by the bilinearity in the penalty caused by the subspace bases and coefficients. Nevertheless, numerical experiments verify that the proposed algorithm, which provides elegant solutions to the sub-problems in each step, is able to de-couple the subspaces and pursue each of them under time-efficient parallel computation.
Algorithms and software for solving finite element equations on serial and parallel architectures
NASA Technical Reports Server (NTRS)
George, Alan
1989-01-01
Over the past 15 years numerous new techniques have been developed for solving systems of equations and eigenvalue problems arising in finite element computations. A package called SPARSPAK has been developed by the author and his co-workers which exploits these new methods. The broad objective of this research project is to incorporate some of this software in the Computational Structural Mechanics (CSM) testbed, and to extend the techniques for use on multiprocessor architectures.
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
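For a sense of what the factoring algorithm avoids, the brute-force baseline below computes the exact shortest-path-length distribution of a small stochastic network by enumerating every realization of the arc lengths; the four-arc network is an invented toy example, and the conditional-factoring approach replaces exactly this exponential enumeration.

```python
import itertools, heapq
from collections import defaultdict

# arcs: (tail, head) -> list of (length, probability)
arcs = {('s', 'a'): [(1, 0.5), (3, 0.5)],
        ('s', 'b'): [(2, 1.0)],
        ('a', 't'): [(1, 0.7), (4, 0.3)],
        ('b', 't'): [(2, 0.6), (5, 0.4)]}

def shortest(lengths):                       # Dijkstra on one arc-length realization
    dist, pq = {'s': 0}, [(0, 's')]
    while pq:
        d, u = heapq.heappop(pq)
        for (a, b), L in lengths.items():
            if a == u and d + L < dist.get(b, float('inf')):
                dist[b] = d + L
                heapq.heappush(pq, (dist[b], b))
    return dist['t']

pmf = defaultdict(float)
keys = list(arcs)
for combo in itertools.product(*(arcs[k] for k in keys)):
    p, lengths = 1.0, {}
    for k, (L, pk) in zip(keys, combo):
        lengths[k], p = L, p * pk
    pmf[shortest(lengths)] += p              # accumulate probability of this length
print(dict(pmf))                             # exact pmf of the shortest s-t path length
```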
Friend or foe: exploiting sensor failures for transparent object localization and classification
NASA Astrophysics Data System (ADS)
Seib, Viktor; Barthen, Andreas; Marohn, Philipp; Paulus, Dietrich
2017-02-01
In this work we address the problem of detecting and recognizing transparent objects using depth images from an RGB-D camera. Using this type of sensor usually prohibits the localization of transparent objects since the structured light pattern of these cameras is not reflected by transparent surfaces. Instead, transparent surfaces often appear as undefined values in the resulting images. However, these erroneous sensor readings form characteristic patterns that we exploit in the presented approach. The sensor data is fed into a deep convolutional neural network that is trained to classify and localize drinking glasses. We evaluate our approach with four different types of transparent objects. To the best of our knowledge, no datasets offering depth images of transparent objects exist so far. With this work we aim at closing this gap by providing our data to the public.
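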
Utilization Elementary Siphons of Petri Net to Solved Deadlocks in Flexible Manufacturing Systems
NASA Astrophysics Data System (ADS)
Abdul-Hussin, Mowafak Hassan
2015-07-01
This article presents an approach to the structural analysis of a class of Petri nets in which elementary siphons are used to develop a deadlock control policy for flexible manufacturing systems (FMSs); such siphons have been exploited successfully in the design of supervisors for several supervisory control problems. Deadlock-free operation of FMSs is a significant objective of siphon-based methods in Petri nets. Structural analysis of Petri net models is efficient for the control of FMSs, although different policies can be implemented for deadlock prevention. Petri-net-based deadlock prevention for FMSs has gained considerable interest in the development of control theory and of methods for design, control, operation, and performance evaluation based on the special class of Petri nets called S3PR. Both structural analysis and reachability tree analysis are used for the analysis, simulation, and control of Petri nets. Our experimental siphon-based approach is able to resolve the deadlocks occurring in Petri nets, as illustrated with an FMS.
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Macready, William G.
2005-01-01
Recent work on the mathematical foundations of optimization has begun to uncover its rich structure. In particular, the "No Free Lunch" (NFL) theorems state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need for exploiting problem-specific knowledge to achieve better than random performance. In this paper we present a general framework covering more search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and evolution of multiple co-evolving players. As a particular instance of the latter, it covers "self-play" problems. In these problems the set of players work together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. We consider the implications of these results for biology, where there is no champion.
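The optimization-scenario NFL statement can be checked empirically on a tiny domain; the sketch below averages best-so-far performance over all 16 Boolean functions on four points and finds it identical for two different deterministic search orders.

```python
import itertools

fs = list(itertools.product((0, 1), repeat=4))   # all 16 functions f: {0,1,2,3} -> {0,1}
orders = [(0, 1, 2, 3), (3, 1, 0, 2)]            # two different search algorithms
for k in (1, 2, 3):
    for order in orders:
        avg = sum(max(f[x] for x in order[:k]) for f in fs) / len(fs)
        print(k, order, avg)                     # same average for both orders: no free lunch
```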
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
Oram, Siân; Ostrovschi, Nicolae V; Gorceag, Viorel I; Hotineanu, Mihai A; Gorceag, Lilia; Trigub, Carolina; Abas, Melanie
2012-07-26
Many trafficked people suffer high levels of physical, sexual and psychological abuse. Yet, there has been limited research on the physical health problems associated with human trafficking or how the health needs of women in post-trafficking support settings vary according to socio-demographic or trafficking characteristics. We analysed the prevalence and severity of 15 health symptoms reported by 120 trafficked women who had returned to Moldova between December 2007 and December 2008 and were registered with the International Organisation for Migration Assistance and Protection Programme. Women had returned to Moldova an average of 5.9 months prior to interview (range 2-12 months). Headaches (61.7%), stomach pain (60.9%), memory problems (44.2%), back pain (42.5%), loss of appetite (35%), and tooth pain (35%) were amongst the most commonly reported symptoms amongst both women trafficked for sexual exploitation and women trafficked for labour exploitation. The prevalence of headache and memory problems was strongly associated with duration of exploitation. Trafficked women who register for post-trafficking support services after returning to their country of origin are likely to have long-term physical and dental health needs and should be provided with access to comprehensive medical services. Health problems among women who register for post-trafficking support services after returning to their country of origin are not limited to women trafficked for sexual exploitation but are also experienced by victims of labour exploitation.
Robust Feature Matching in Terrestrial Image Sequences
NASA Astrophysics Data System (ADS)
Abbas, A.; Ghuffar, S.
2018-04-01
Over the last decade, feature detection, description and matching techniques have been widely exploited in various photogrammetric and computer vision applications, including 3D reconstruction of scenes, image stitching for panorama creation, image classification, and object recognition. However, terrestrial imagery of urban scenes contains various issues, including duplicate and identical structures (i.e. repeated windows and doors) that cause problems in the feature matching phase and ultimately lead to failures, especially in camera pose and scene structure estimation. In this paper, we address the issue of ambiguous feature matching in urban environments due to repeating patterns.
Fishing and temperature effects on the size structure of exploited fish stocks.
Tu, Chen-Yi; Chen, Kuan-Ting; Hsieh, Chih-Hao
2018-05-08
The size structure of a fish stock plays an important role in maintaining the sustainability of the population. The size distribution of an exploited stock is predicted to shift toward small individuals under size-selective fishing and/or warming; however, their relative contributions remain relatively unexplored. In addition, existing analyses of size structure have focused on univariate size-based indicators (SBIs), such as mean length, evenness of size classes, or the upper 95-percentile of the length frequency distribution; these approaches may not capture the full information of size structure. To bridge the gap, we used the variation partitioning approach to examine how the size structure (composition of size classes) responded to fishing, warming and their interaction. We analyzed 28 exploited stocks in the West US, Alaska and North Sea. Our results show that fishing has the most prominent effect on the size structure of the exploited stocks. In addition, stocks that experienced higher variability in fishing are more responsive to the temperature effect in their size structure, suggesting that fishing may elevate the sensitivity of exploited stocks in responding to environmental effects. The variation partitioning approach provides complementary information to univariate SBIs in analyzing size structure.
Geometry Helps to Compare Persistence Diagrams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerber, Michael; Morozov, Dmitriy; Nigmetov, Arnur
2015-11-16
Exploiting geometric structure to improve the asymptotic complexity of discrete assignment problems is a well-studied subject. In contrast, the practical advantages of using geometry for such problems have not been explored. We implement geometric variants of the Hopcroft--Karp algorithm for bottleneck matching (based on previous work by Efrat et al.), and of the auction algorithm by Bertsekas for Wasserstein distance computation. Both implementations use k-d trees to replace a linear scan with a geometric proximity query. Our interest in this problem stems from the desire to compute distances between persistence diagrams, a problem that comes up frequently in topological data analysis. We show that our geometric matching algorithms lead to a substantial performance gain, both in running time and in memory consumption, over their purely combinatorial counterparts. Moreover, our implementation significantly outperforms the only other implementation available for comparing persistence diagrams.
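The geometric primitive is easy to sketch: a k-d tree generates only the nearby candidate edges, a bipartite matching routine checks feasibility at a threshold r, and a binary search over candidate distances yields the bottleneck cost. This is a simplified stand-in built from stock SciPy routines, not the paper's tuned implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def feasible(A, B, r):
    """Can every point of A be matched to a distinct point of B within distance r?"""
    tree = cKDTree(B)
    rows, cols = [], []
    for i, a in enumerate(A):
        for j in tree.query_ball_point(a, r):   # geometric proximity query
            rows.append(i); cols.append(j)
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(A), len(B)))
    match = maximum_bipartite_matching(graph, perm_type='column')
    return (match >= 0).all()                   # perfect matching exists within r

rng = np.random.default_rng(0)
A = rng.random((50, 2)); B = A + 0.01 * rng.standard_normal((50, 2))
cands = np.unique([np.linalg.norm(a - b) for a in A for b in B])
lo, hi = 0, len(cands) - 1                      # binary search over candidate radii
while lo < hi:
    mid = (lo + hi) // 2
    lo, hi = (lo, mid) if feasible(A, B, cands[mid]) else (mid + 1, hi)
print("bottleneck distance:", cands[lo])
```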
Concurrency-based approaches to parallel programming
NASA Technical Reports Server (NTRS)
Kale, L.V.; Chrisochoides, N.; Kohl, J.; Yelick, K.
1995-01-01
The inevitable transition to parallel programming can be facilitated by appropriate tools, including languages and libraries. After describing the needs of applications developers, this paper presents three specific approaches aimed at development of efficient and reusable parallel software for irregular and dynamic-structured problems. A salient feature of all three approaches is their exploitation of concurrency within a processor. Benefits of individual approaches such as these can be leveraged by an interoperability environment which permits modules written using different approaches to co-exist in single applications.
Risk-aware multi-armed bandit problem with application to portfolio selection
Huo, Xiaoguang
2017-01-01
Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return. PMID:29291122
Risk-aware multi-armed bandit problem with application to portfolio selection.
Huo, Xiaoguang; Fu, Feng
2017-11-01
Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return.
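One simple way to combine the ingredients, sketched below, is a UCB index penalized by an empirical standard-deviation term; the Gaussian arms and the risk weight `beta` are assumptions of this illustration, not the paper's coherent-risk formulation.

```python
import numpy as np

def risk_aware_ucb(means, sds, horizon=5000, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    rewards = [[rng.normal(m, s)] for m, s in zip(means, sds)]   # pull each arm once
    for t in range(len(means), horizon):
        idx = [np.mean(r) - beta * np.std(r)          # risk-adjusted value estimate
               + np.sqrt(2 * np.log(t) / len(r))      # UCB exploration bonus
               for r in rewards]
        a = int(np.argmax(idx))
        rewards[a].append(rng.normal(means[a], sds[a]))
    return [len(r) for r in rewards]                  # pull counts per arm

# Arm 1 has a slightly higher mean but is far riskier; with beta = 1 the policy
# concentrates its pulls on the safer arm 0.
print(risk_aware_ucb(means=[1.0, 1.1], sds=[0.1, 2.0]))
```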
Leadership, followership, and evolution: some lessons from the past.
Van Vugt, Mark; Hogan, Robert; Kaiser, Robert B
2008-04-01
This article analyzes the topic of leadership from an evolutionary perspective and proposes three conclusions that are not part of mainstream theory. First, leading and following are strategies that evolved for solving social coordination problems in ancestral environments, including in particular the problems of group movement, intragroup peacekeeping, and intergroup competition. Second, the relationship between leaders and followers is inherently ambivalent because of the potential for exploitation of followers by leaders. Third, modern organizational structures are sometimes inconsistent with aspects of our evolved leadership psychology, which might explain the alienation and frustration of many citizens and employees. The authors draw several implications of this evolutionary analysis for leadership theory, research, and practice.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design; a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely popular finite-element production code, SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of relevant matrices, and the associated program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be most significantly economical in comparison to similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems, involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
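Both ingredients can be sketched with modern library calls: an LDLᵀ factorization of A − σB gives a Sturm-type count of the eigenvalues below a shift σ (via matrix inertia), and inverse iteration with the same shift extracts the root and its vector. This is a generic illustration, not the banded FORTRAN algorithm described above, and it assumes the factorization produces only 1×1 pivots.

```python
import numpy as np
from scipy.linalg import ldl, lu_factor, lu_solve

def count_below(A, B, sigma):
    """Number of eigenvalues of Aq = lambda*Bq below sigma (inertia of A - sigma*B)."""
    _, D, _ = ldl(A - sigma * B)
    return int(np.sum(np.diag(D) < 0))        # assumes no 2x2 pivot blocks in D

def inverse_iteration(A, B, sigma, iters=50):
    lu = lu_factor(A - sigma * B)             # factor once, reuse every iteration
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = lu_solve(lu, B @ v)
        v /= np.linalg.norm(v)
    lam = (v @ A @ v) / (v @ B @ v)           # Rayleigh quotient for the root
    return lam, v

A = np.diag([2.0, 5.0, 9.0]); B = np.eye(3)
print(count_below(A, B, 6.0))                 # 2 eigenvalues lie below sigma = 6
print(inverse_iteration(A, B, 4.9)[0])        # converges to the isolated root 5.0
```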
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.
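In its simplest serial form, the iteration reviewed here reduces each nonlinear step to one linear solve with the tangent stiffness; the two-DOF internal-force model below is an invented toy.

```python
import numpy as np

def f_int(u):                                  # toy nonlinear internal force vector
    return np.array([u[0] + u[0]**3 + 0.5 * u[1],
                     0.5 * u[0] + 2.0 * u[1] + u[1]**3])

def K_T(u):                                    # tangent stiffness = dF_int/du
    return np.array([[1 + 3 * u[0]**2, 0.5],
                     [0.5, 2 + 3 * u[1]**2]])

f_ext = np.array([1.0, 1.0])
u = np.zeros(2)
for _ in range(10):                            # Newton-Raphson equilibrium iteration
    r = f_int(u) - f_ext                       # out-of-balance force (residual)
    if np.linalg.norm(r) < 1e-12:
        break
    u -= np.linalg.solve(K_T(u), r)            # one linear solve per iteration
print(u, f_int(u) - f_ext)                     # converged state, residual ~ 0
```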
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
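For reference, here is the activity-selection example in its classic greedy form, where the dominance argument is that the earliest-finishing compatible activity can never be a worse choice:

```python
def select_activities(intervals):
    """Maximum set of pairwise non-overlapping (start, finish) intervals."""
    chosen, last_finish = [], float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):  # earliest finish first
        if start >= last_finish:               # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```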
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
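The SK iteration is compact enough to sketch for a scalar transfer function: each pass solves a linear least-squares problem reweighted by 1/|D(jω)| from the previous pass. This toy (fitting one pole from noiseless samples) illustrates the initialization idea only, not the MIMO matrix-fraction machinery or the sparse QR solver of the paper.

```python
import numpy as np

def sk_fit(w, G, m, n, passes=10):
    """Fit G(jw) ~ N/D with numerator degree m and monic denominator degree n."""
    s = 1j * w
    weights = np.ones_like(w)
    for _ in range(passes):
        # Unknowns x = [b_0..b_m, a_0..a_{n-1}], with D(s) = s^n + sum_k a_k s^k.
        cols = [s**k for k in range(m + 1)] + [-G * s**k for k in range(n)]
        A = np.stack(cols, axis=1) * weights[:, None]
        rhs = weights * G * s**n
        x, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                                np.concatenate([rhs.real, rhs.imag]), rcond=None)
        b, a = x[:m + 1], x[m + 1:]
        D = s**n + sum(a[k] * s**k for k in range(n))
        weights = 1.0 / np.abs(D)                      # SK reweighting for the next pass
    return b, a

# Recover G(s) = 2/(s + 3) from noiseless frequency-response samples.
w = np.logspace(-1, 2, 60)
b, a = sk_fit(w, 2.0 / (1j * w + 3.0), m=0, n=1)
print(b, a)                                            # ~[2.] ~[3.]
```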
The NASA controls-structures interaction technology program
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Layman, W. E.; Waites, H. B.; Hayduk, R. J.
1990-01-01
The interaction between a flexible spacecraft structure and its control system is commonly referred to as controls-structures interaction (CSI). The CSI technology program is developing the capability and confidence to integrate the structure and control system, so as to avoid interactions that cause problems and to exploit interactions to increase spacecraft capability. A NASA program has been initiated to advance CSI technology to a point where it can be used in spacecraft design for future missions. The CSI technology program is a multicenter program utilizing the resources of the NASA Langley Research Center (LaRC), the NASA Marshall Space Flight Center (MSFC), and the NASA Jet Propulsion Laboratory (JPL). The purpose is to describe the current activities, results to date, and future activities of the NASA CSI technology program.
Combining conceptual graphs and argumentation for aiding in the teleexpertise.
Doumbouya, Mamadou Bilo; Kamsu-Foguem, Bernard; Kenfack, Hugues; Foguem, Clovis
2015-08-01
Current medical information systems are too complex to be meaningfully exploited. Hence there is a need to develop new strategies for maximising the exploitation of medical data to the benefit of medical professionals. It is against this backdrop that we want to propose a tangible contribution by providing a tool which combines conceptual graphs and Dung's argumentation system in order to assist medical professionals in their decision making process. The proposed tool allows medical professionals to easily manipulate and visualise queries and answers for making decisions during the practice of teleexpertise. The knowledge modelling is made using an open application programming interface (API) called CoGui, which offers the means for building structured knowledge bases with the dedicated functionalities of graph-based reasoning via retrieved data from different institutions (hospitals, national security centre, and nursing homes). The tool that we have described in this study supports a formal traceable structure of the reasoning with acceptable arguments to elucidate some ethical problems that occur very often in the telemedicine domain.
Bridging quantum mechanics and structure-based drug design.
De Vivo, Marco
2011-01-01
The last decade has seen great advances in the use of quantum mechanics (QM) to solve biological problems of pharmaceutical relevance. For instance, enzymatic catalysis is often investigated by means of the so-called QM/MM approach, which uses QM and molecular mechanics (MM) methods to determine the (free) energy landscape of the enzymatic reaction mechanism. Here, I will discuss a few representative examples of QM and QM/MM studies of important metalloenzymes of pharmaceutical interest (i.e. metallophosphatases and metallo-beta-lactamases). This review article aims to show how QM-based methods can be used to elucidate ligand-receptor interactions. The challenge is then to exploit this knowledge for the structure-based design of new and potent inhibitors, such as transition state (TS) analogues that resemble the structure and physicochemical properties of the enzymatic TS. Given the results and potential expressed to date by QM-based methods in studying biological problems, the application of QM in structure-based drug design will likely increase, making these once-prohibitive computations a routinely used tool for drug design.
Explore or Exploit? A Generic Model and an Exactly Solvable Case
NASA Astrophysics Data System (ADS)
Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe
2014-02-01
Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.
Explore or exploit? A generic model and an exactly solvable case.
Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe
2014-02-07
Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.
Security barriers with automated reconnaissance
McLaughlin, James O; Baird, Adam D; Tullis, Barclay J; Nolte, Roger Allen
2015-04-07
An intrusion delaying barrier includes primary and secondary physical structures and can be instrumented with multiple sensors incorporated into an electronic monitoring and alarm system. Such an instrumented intrusion delaying barrier may be used as a perimeter intrusion defense and assessment system (PIDAS). The problem of failing to delay breaches by intentional intruders and/or terrorists who would otherwise evade detection is solved by attaching the secondary structures to the primary structure, and attaching at least some of the sensors to the secondary structures. Physically interconnecting multiple sensors of various types enables sensors on different parts of the overall structure to respond to common disturbances and thereby provide effective corroboration that a disturbance is not merely a nuisance or false alarm. Use of a machine learning network such as a neural network exploits such corroboration.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Barker, Jessica L.; Bronstein, Judith L.
2016-01-01
Exploitation in cooperative interactions both within and between species is widespread. Although it is assumed to be costly to be exploited, mechanisms to control exploitation are surprisingly rare, making the persistence of cooperation a fundamental paradox in evolutionary biology and ecology. Focusing on between-species cooperation (mutualism), we hypothesize that the temporal sequence in which exploitation occurs relative to cooperation affects its net costs and argue that this can help explain when and where control mechanisms are observed in nature. Our principal prediction is that when exploitation occurs late relative to cooperation, there should be little selection to limit its effects (analogous to “tolerated theft” in human cooperative groups). Although we focus on cases in which mutualists and exploiters are different individuals (of the same or different species), our inferences can readily be extended to cases in which individuals exhibit mixed cooperative-exploitative strategies. We demonstrate that temporal structure should be considered alongside spatial structure as an important process affecting the evolution of cooperation. We also provide testable predictions to guide future empirical research on interspecific as well as intraspecific cooperation. PMID:26841169
Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel
2012-01-01
In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764
Hemmelmayr, Vera C; Cordeau, Jean-François; Crainic, Teodor Gabriel
2012-12-01
In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP.
Considering context: reliable entity networks through contextual relationship extraction
NASA Astrophysics Data System (ADS)
David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.
2016-05-01
Existing information extraction techniques can only partially address the problem of exploiting unmanageably large amounts of text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions of events, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But the simple subset of available information that existing tools can extract from text is only useful to a small set of users and problems. Automated systems need to find and separate information based on what is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated and nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text. This is made possible through 1) More precise event and relationship interpretation, 2) More detailed information about extracted events and relationships, and 3) More reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
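The low-rank step is easy to sketch in isolation: a randomized range finder approximates the dominant eigenpairs of a symmetric operator from Hessian-vector products alone, needing roughly 2×(rank + oversampling) products. The test matrix below is an invented toy standing in for the Gauss-Newton Hessian.

```python
import numpy as np

def lowrank_sym(hvp, n, rank, oversample=5, seed=0):
    """Dominant eigenpairs of a symmetric operator given only v -> Hv products."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, rank + oversample))
    Y = np.column_stack([hvp(Omega[:, j]) for j in range(Omega.shape[1])])
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the range of H
    M = Q.T @ np.column_stack([hvp(Q[:, j]) for j in range(Q.shape[1])])
    evals, V = np.linalg.eigh(M)                 # small projected eigenproblem
    return evals[-rank:], Q @ V[:, -rank:]       # dominant eigenpairs

# Toy check on a symmetric matrix with a fast-decaying spectrum.
n = 200
U, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))
H = U @ np.diag(1.0 / (1.0 + np.arange(n))**2) @ U.T
evals, _ = lowrank_sym(lambda v: H @ v, n, rank=5)
print(evals)                                     # ~[1/25, 1/16, 1/9, 1/4, 1]
```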
Modeling of information on the impact of mining exploitation on bridge objects in BIM
NASA Astrophysics Data System (ADS)
Bętkowski, Piotr
2018-04-01
The article discusses the advantages of BIM (Building Information Modeling) technology in the management of bridge infrastructure in mining areas. The article shows the problems with information flow for bridge objects located in mining areas and the advantages of proper information management, e.g. the possibility of automatic monitoring of structures, improvement of safety, optimization of maintenance activities, reduction of the costs of damage removal and preventive actions, improvement of the climate for mining exploitation, and improvement of the relationship between the bridge manager and the mine. The traditional model of managing bridge objects in mining areas has many disadvantages, which are discussed in this article. These disadvantages include, among others: duplication of information about the object, lack of correlation in investments due to lack of information flow between the bridge manager and the mine, and limited possibilities for assessing the effect of damage propagation on technical condition and structural resistance to mining influences.
Hom, Kristin A; Woods, Stephanie J
2013-02-01
Commercial sexual exploitation of women and girls through forced prostitution and sex-trafficking is a human rights and public health issue, with survivors facing complex mental health problems from trauma and violence. An international and domestic problem, the average age of recruitment into sex-trafficking is between 11 and 14 years old. Given its secrecy and brutality, such exploitation remains difficult to study, which results in a lack of knowledge related to trauma and how best to develop specific services that effectively engage and meet the unique needs of survivors. This qualitative research, using thematic analysis, explored the stories of trauma and its aftermath for commercially sexually exploited women as told by front-line service providers. Three themes emerged regarding the experience of sex-trafficking and its outcomes-Pimp Enculturation, Aftermath, and Healing the Wound-along with seven subthemes. These have important implications for all service and healthcare providers.
The Intersection of Financial Exploitation and Financial Capacity
Lichtenberg, P.A.
2016-01-01
Research in the past decade has documented that financial exploitation of older adults has become a major problem, and Psychology has only recently increased its presence in efforts to reduce exploitation. During the same time period, Psychology has been a leader in setting best practices for the assessment of diminished capacity in older adults, culminating in the 2008 ABA/APA joint publication of a handbook for psychologists. Assessment of financial decision making capacity is often the cornerstone assessment needed in cases of financial exploitation. This paper will examine the intersection of financial exploitation and decision making capacity, and introduce a new conceptual model and new tools for both the investigation and prevention of financial exploitation. PMID:27159438
The exclusion problem in seasonally forced epidemiological systems.
Greenman, J V; Adams, B
2015-02-21
The pathogen exclusion problem is the problem of finding control measures that will exclude a pathogen from an ecological system or, if the system is already disease-free, maintain it in that state. To solve this problem we work within a holistic control theory framework which is consistent with conventional theory for simple systems (where there is no external forcing and constant controls) and seamlessly generalises to complex systems that are subject to multiple component seasonal forcing and targeted variable controls. We develop, customise and integrate a range of numerical and algebraic procedures that provide a coherent methodology powerful enough to solve the exclusion problem in the general case. An important aspect of our solution procedure is its two-stage structure which reveals the epidemiological consequences of the controls used for exclusion. This information augments technical and economic considerations in the design of an acceptable exclusion strategy. Our methodology is used in two examples to show how time-varying controls can exploit the interference and reinforcement created by the external and internal lag structure and encourage the system to 'take over' some of the exclusion effort. On-off control switching, resonant amplification, optimality and controllability are important issues that emerge in the discussion.
Coccygodynia - pathogenesis, diagnostics and therapy. Review of the writing.
Dampc, Bogumiła; Słowiński, Krzysztof
2017-08-31
Coccygodynia is a problem affecting a small percentage (1%) of the population suffering from musculoskeletal disorders. The pain is often associated with trauma, a fall onto the tailbone, prolonged cycling, or childbirth. Actual morphological changes can underlie the described problem. Idiopathic coccygodynia causes therapeutic difficulties for specialists in many fields. Unsatisfactory treatment outcomes, including those of coccygectomy, have prompted the search for new solutions. Among them are manual therapy techniques, which comprise direct (per rectum) techniques as well as indirect techniques addressing distant structures of the musculoskeletal system that interact closely with the coccygeal region. Idiopathic coccygodynia may result from excessive tension in the levator ani, coccygeus and gluteus maximus muscles, as well as from irritation of the soft-tissue structures surrounding the coccyx: the sacrococcygeal, sacrospinous and sacrotuberous ligaments. Unfortunately, these changes are not visible in objective imaging examinations such as X-ray, MRI or CT, and therefore constitute both a diagnostic and a therapeutic problem. To describe the problem, literature from the fields of both surgery and manual therapy was reviewed. Detailed and multifaceted knowledge of the causes of the described problem allows patients to be categorized more accurately into the appropriate group and helps to select the best treatment procedure.
A lifelong learning hyper-heuristic method for bin packing.
Sim, Kevin; Hart, Emma; Paechter, Ben
2015-01-01
We describe a novel hyper-heuristic system that continuously learns over time to solve a combinatorial optimisation problem. The system continuously generates new heuristics and samples problems from its environment; and representative problems and heuristics are incorporated into a self-sustaining network of interacting entities inspired by methods in artificial immune systems. The network is plastic in both its structure and content, leading to the following properties: it exploits existing knowledge captured in the network to rapidly produce solutions; it can adapt to new problems with widely differing characteristics; and it is capable of generalising over the problem space. The system is tested on a large corpus of 3,968 new instances of 1D bin-packing problems as well as on 1,370 existing problems from the literature; it shows excellent performance in terms of the quality of solutions obtained across the datasets and in adapting to dynamically changing sets of problem instances compared to previous approaches. As the network self-adapts to sustain a minimal repertoire of both problems and heuristics that form a representative map of the problem space, the system is further shown to be computationally efficient and therefore scalable.
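For context, the kind of hand-designed constructive heuristic such a system composes and competes against, first-fit decreasing for 1D bin packing, looks like this:

```python
def first_fit_decreasing(items, capacity):
    """Pack items into the fewest bins a first-fit-decreasing pass produces."""
    bins = []                                  # each entry is a bin's remaining space
    for item in sorted(items, reverse=True):   # largest items placed first
        for i, space in enumerate(bins):
            if item <= space:                  # first bin that still fits the item
                bins[i] -= item
                break
        else:
            bins.append(capacity - item)       # no bin fits: open a new one
    return len(bins)

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))   # -> 2 bins
```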
Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel
2002-01-01
This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. An FPGA is an assemblage of binary gates arranged in logic blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm with a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy with regard to user convenience; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. This potential warrants further development of FPGA-tailored algorithms for structural analysis.
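A sketch of the 'analog computer' programming style referred to above, using an assumed single-degree-of-freedom system m*x'' + c*x' + k*x = f(t); on an FPGA every block would update in parallel, whereas this Python loop only illustrates the wiring:

```python
# The governing equation is wired directly as integrator blocks.
# All parameter values below are assumptions for illustration.
m, c, k = 1.0, 0.2, 40.0      # mass, damping, stiffness
x, v = 0.0, 0.0               # states held by the two 'integrator' blocks
dt = 1e-4
for _ in range(int(1.0 / dt)):
    f = 1.0                            # constant applied force
    a = (f - c * v - k * x) / m        # summing junction
    v += a * dt                        # first integrator
    x += v * dt                        # second integrator
print(f"x after 1 s: {x:.4f} (static deflection f/k = {1.0 / 40.0})")
```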
Astrophysical data analysis with information field theory
NASA Astrophysics Data System (ADS)
Enßlin, Torsten
2014-12-01
Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory, which permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. In particular, the alleviation of a perception threshold for recovering signals of unknown correlation structure by using IFT will be discussed, as well as a novel improvement on instrumental self-calibration schemes. IFT can be applied to many areas. Here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.
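In its simplest (linear, Gaussian) limit, IFT reduces to the generalized Wiener filter m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d. A minimal numpy sketch under assumed covariances (not code from any IFT package):

```python
import numpy as np
# Generalized Wiener filter: posterior mean of a Gaussian signal field s
# with prior covariance S, observed as d = R s + n with noise covariance N.
rng = np.random.default_rng(0)
npix, ndata = 64, 32
x = np.arange(npix)
S = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 5.0**2)   # smooth signal prior
S += 1e-8 * np.eye(npix)                                   # numerical jitter
R = np.zeros((ndata, npix))                                # masked point response
R[np.arange(ndata), rng.choice(npix, ndata, replace=False)] = 1.0
N = 0.1 * np.eye(ndata)                                    # noise covariance
s = rng.multivariate_normal(np.zeros(npix), S)             # true signal field
d = R @ s + rng.multivariate_normal(np.zeros(ndata), N)    # data
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D @ R.T @ np.linalg.inv(N) @ d                         # posterior mean map
print("rms reconstruction error:", float(np.sqrt(np.mean((m - s)**2))))
```

The prior covariance S is what encodes the spatial correlations being exploited; the nonlinear and unknown-correlation cases discussed above generalize this closed form.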
About the bears and the bees: Adaptive responses to asymmetric warfare
NASA Astrophysics Data System (ADS)
Ryan, Alex
Conventional military forces are organised to generate large scale effects against similarly structured adversaries. Asymmetric warfare is a 'game' between a conventional military force and a weaker adversary that is unable to match the scale of effects of the conventional force. In asymmetric warfare, an insurgent strategy can be understood using a multi-scale perspective: by generating and exploiting fine scale complexity, insurgents prevent the conventional force from acting at the scale it is designed for. This paper presents a complex systems approach to the problem of asymmetric warfare, which shows how future force structures can be designed to adapt to environmental complexity at multiple scales and achieve full spectrum dominance.
Methods of equipment choice in shotcreting
NASA Astrophysics Data System (ADS)
Sharapov, R. R.; Yadykina, V. V.; Stepanov, M. A.; Kitukov, B. A.
2018-03-01
Shotcrete is widely used in architecture, hydraulic engineering structures, finishing works in tunnels, arch covers and ceilings. The choice of equipment for shotcreting is an important problem; the main issues influencing it are quality improvement and intensification of the shotcreting process. Main parameters and rational limits of the technological characteristics of machines used for different shotcreting tasks are described. It is suggested that the peculiarities of shotcrete mixing processes, and of applying these mixtures with the kinetic energy of compressed air, be taken into account. The described method suggests choosing a mixer with account taken of energy capacity, Reynolds number and rotational frequency of the mixing drum. The suggested procedure for choosing the equipment nomenclature allows operating costs to be decreased and the quality of shotcrete, and of shotcreting in general, to be increased.
Spatiotemporal Characterization of a Fibrin Clot Using Quantitative Phase Imaging
Gannavarpu, Rajshekhar; Bhaduri, Basanta; Tangella, Krishnarao; Popescu, Gabriel
2014-01-01
Studying the dynamics of fibrin clot formation and its morphology is an important problem in biology and has significant impact for several scientific and clinical applications. We present a label-free technique based on quantitative phase imaging to address this problem. Using quantitative phase information, we characterized fibrin polymerization in real-time and present a mathematical model describing the transition from liquid to gel state. By exploiting the inherent optical sectioning capability of our instrument, we measured the three-dimensional structure of the fibrin clot. From this data, we evaluated the fractal nature of the fibrin network and extracted the fractal dimension. Our non-invasive and speckle-free approach analyzes the clotting process without the need for external contrast agents.
On a full Bayesian inference for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to account mathematically for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To address this question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
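A hedged sketch of the sampling step on a scalar toy problem (assumed model and values, not the authors' formulation): random-walk Metropolis over the posterior of a single force amplitude, from which credible intervals follow directly:

```python
import numpy as np
# Toy posterior sampling for a force amplitude F given noisy vibration
# data d = h*F + n, Gaussian likelihood and a broad Gaussian prior.
rng = np.random.default_rng(1)
h, F_true, sigma = 2.5, 3.0, 0.5
d = h * F_true + sigma * rng.standard_normal(20)

def log_post(F):                       # prior N(0, 10^2)
    return -0.5 * np.sum((d - h * F)**2) / sigma**2 - 0.5 * F**2 / 100.0

samples, F = [], 0.0
for _ in range(20000):                 # random-walk Metropolis
    prop = F + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(F):
        F = prop
    samples.append(F)
s = np.array(samples[2000:])           # discard burn-in
print(f"mean {s.mean():.3f}, 95% credible interval "
      f"[{np.quantile(s, 0.025):.3f}, {np.quantile(s, 0.975):.3f}]")
```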
Akam, Thomas; Costa, Rui; Dayan, Peter
2015-12-01
The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
NASA Astrophysics Data System (ADS)
Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu
2016-07-01
The additional sparse prior of images has been the subject of much research in problems of sparse-view computed tomography (CT) reconstruction. Methods employing image gradient sparsity are often used to reduce the sampling rate and are shown to remove unwanted artifacts while preserving sharp edges, but they may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both the gradient and the nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing similarity prior information about the image structure. An effective alternating direction minimization algorithm is then developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method can achieve better image quality with the theoretically expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).
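For orientation, the simplest form of image-gradient-sparsity exploitation is total-variation denoising; the sketch below (assumed phantom and parameters) minimizes a smoothed TV objective by plain gradient descent, whereas the paper's model additionally uses a nonlocal gradient prior and an alternating direction minimization:

```python
import numpy as np
# Smoothed total-variation denoising: minimize 0.5||x-y||^2 + lam*TV(x).
def tv_denoise(y, lam=0.15, eps=1e-3, steps=300, lr=0.2):
    x = y.copy()
    for _ in range(steps):
        dx = np.diff(x, axis=0, append=x[-1:, :])     # forward differences
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)            # smoothed gradient norm
        gx, gy = dx / mag, dy / mag
        div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
        x -= lr * ((x - y) - lam * div)               # gradient of the objective
    return x

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0     # piecewise-constant phantom
noisy = img + 0.2 * rng.standard_normal(img.shape)
rms = lambda a: float(np.sqrt((a**2).mean()))
print("rms error: noisy", round(rms(noisy - img), 3),
      "-> denoised", round(rms(tv_denoise(noisy) - img), 3))
```

Piecewise-constant regions are exactly where the gradient is sparse, which is why this prior preserves edges but can produce the blocky artifacts the paper sets out to eliminate.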
The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs
NASA Astrophysics Data System (ADS)
Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.
2017-12-01
The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of the Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-Hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability in combining the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent a collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability of adapting the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.
Human trafficking and exploitation: A global health concern.
Zimmerman, Cathy; Kiss, Ligia
2017-11-01
In this collection review, Cathy Zimmerman and colleague introduce the PLOS Medicine Collection on Human Trafficking, Exploitation and Health, laying out the magnitude of the global trafficking problem and offering a public health policy framework to guide responses to trafficking.
Assessment Methods of Groundwater Overdraft Area and Its Application
NASA Astrophysics Data System (ADS)
Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun
2018-05-01
Groundwater is an important source of water, and long-term heavy demand has made it over-exploited. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes its natural and social attributes, and expounds its evaluation methods, including single-factor evaluation, multi-factor system analysis and numerical methods. The different methods are also compared and analyzed. Taking Northern Weifang as an example, the paper then demonstrates the practical application of these assessment methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
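The structure being exploited can be checked on a small dense example (assumed random symmetric A and B, with A dominant): the 2n x 2n linear-response matrix [[A, B], [-B, -A]] has the same spectrum as plus/minus the square roots of the eigenvalues of the n x n product (A - B)(A + B):

```python
import numpy as np
# Verify the product-eigenvalue reduction on a toy problem.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # sym. pos. def.
B = rng.standard_normal((n, n)); B = 0.1 * (B + B.T)          # small symmetric

H = np.block([[A, B], [-B, -A]])
lam = np.sort(np.linalg.eigvals(H).real)              # full 2n x 2n spectrum
mu = np.linalg.eigvals((A - B) @ (A + B)).real        # n x n product problem
lam_prod = np.sort(np.concatenate([np.sqrt(mu), -np.sqrt(mu)]))
print("spectra agree:", np.allclose(lam, lam_prod))   # expected: True
```

Working with the half-size product problem, which is self-adjoint in the K-inner product with K = A + B, is what halves the memory and arithmetic relative to treating both eigenvector components at once.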
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Macready, William G.
2005-01-01
Recent work on the foundations of optimization has begun to uncover its underlying rich structure. In particular, the "No Free Lunch" (NFL) theorems [WM97] state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need for exploiting problem-specific knowledge to achieve better than random performance. In this paper we present a general framework covering most search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and evolution of multiple co-evolving agents. As a particular instance of the latter, it covers "self-play" problems. In these problems the agents work together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. However in the typical coevolutionary scenarios encountered in biology, where there is no champion, NFL still holds.
Teixeira, Pedro Hudson Rodrigues; Thel, Thiago do Nascimento; Ferreira, Jullio Marques Rocha; de Azevedo, Severino Mendes; Junior, Wallace Rodrigues Telino; Lyra-Neves, Rachel Maria
2014-12-24
The present study examined the exploitation of bird species by the residents of a rural community in the Brazilian semi-arid zone, and their preferences for species with different characteristics. The 24 informants were identified using the "snowball" approach, and were interviewed using semi-structured questionnaires and check-sheets for the collection of data on their relationship with the bird species that occur in the region. The characteristics that most attract the attention of the interviewees were the song and the coloration of the plumage of a bird, as well as its body size, which determines its potential as a game species, given that hunting is an important activity in the region. A total of 98 species representing 32 families (50.7% of the species known to occur in the region) were reported during interviews, being used for meat, pets, and medicinal purposes. Several species were used as zootherapeutics: the White-naped Jay was eaten whole as a cure for speech problems, the feathers of the Yellow-legged Tinamou were used for snakebite, the Smooth-billed Ani was eaten for "chronic cough", and the Small-billed Tinamou and Tataupa Tinamou were used for locomotion problems. The preference of the informants for characteristics such as birdsong and colorful plumage was a significant determinant of their preference for the species exploited. Birds with cynegetic potential and high use values were also among the most preferred species. Despite the highly significant preferences for certain species, some birds, such as those of the families Trochilidae, Thamnophilidae, and Tyrannidae, are hunted randomly, independently of their attributes. The evidence collected on the criteria applied by local specialists for the exploitation of the bird fauna permitted the identification of the species that suffer hunting pressure, providing guidelines for the development of conservation and management strategies that will guarantee the long-term survival of the populations of these bird species in the region.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Bearing Capacity Assessment on Low Volume Roads
NASA Astrophysics Data System (ADS)
Zariņš, A.
2015-11-01
A large part of the Latvian road network consists of low traffic volume roads, in particular roads without hard pavement. Unbound pavements show serious problems in the form of rutting and other deformations, which finally lead to poor serviceability and damage to the road structure after periods of intensive use. Traditionally, these problems have been attributed to heavy goods transport, overloaded vehicles and their impact. The study was carried out to identify the specific damaging factors causing road pavement deformations, to evaluate possibilities for preventing them, and to establish the conditions under which this can be done. Tire pressure was taken as the main load factor. Two different tire pressures were used in the tests and their impacts were compared. The comparison was done using deflection measurements with a lightweight deflectometer (LWD) together with dielectric constant measurements in the road structure using a percometer. Measurements were taken in the upper pavement structure layers at different depths during full-scale loading and in different moisture and temperature conditions. Advisable load intensities and load factors for heavy traffic according to road conditions were established based on the study results.
Laser vibrometry exploitation for vehicle identification
NASA Astrophysics Data System (ADS)
Nolan, Adam; Lingg, Andrew; Goley, Steve; Sigmund, Kevin; Kangas, Scott
2014-06-01
Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. Through the use of physics models of the vibration phenomenology, features are chosen to support classification algorithms. Various individual exploitation algorithms were developed using these models to classify vibration signatures into engine type (piston vs. turbine), engine configuration (Inline 4 vs. Inline 6 vs. V6 vs. V8 vs. V12) and vehicle type. The results of these algorithms will be presented for an 8 class problem. Finally, the benefits of using a factor graph representation to link these independent algorithms together will be presented which constructs a classification hierarchy for the vibration exploitation problem.
Hansen, Michael J.; Nate, Nancy A.
2014-01-01
We evaluated the dynamics of walleye Sander vitreus population size structure, as indexed by the proportional size distribution (PSD) of quality-length fish, in Escanaba Lake during 1967–2003 and in 204 other lakes in northern Wisconsin during 1990–2011. We estimated PSD from angler-caught walleyes in Escanaba Lake and from spring electrofishing in 204 other lakes, and then related PSD to annual estimates of recruitment to age-3, length at age 3, and annual angling exploitation rate. In Escanaba Lake during 1967–2003, annual estimates of PSD were highly dynamic, growth (positively) explained 35% of PSD variation, recruitment explained only 3% of PSD variation, and exploitation explained only 7% of PSD variation. In 204 other northern Wisconsin lakes during 1990–2011, PSD varied widely among lakes, recruitment (negatively) explained 29% of PSD variation, growth (positively) explained 21% of PSD variation, and exploitation explained only 4% of PSD variation. We conclude that population size structure was most strongly driven by recruitment and growth, rather than exploitation, in northern Wisconsin walleye populations. Studies of other species over wide spatial and temporal ranges of recruitment, growth, and mortality are needed to determine which dynamic rate most strongly influences population size structure of other species. Our findings indicate a need to be cautious about assuming exploitation is a strong driver of walleye population size structure.
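For reference, the PSD index used above is computed as shown below, using the standard Gabelhouse length categories for walleye (stock 250 mm, quality 380 mm):

```python
# PSD of quality-length fish: percentage of stock-length fish that are
# also quality length. Length thresholds are the walleye standards.
def psd(lengths_mm, stock=250, quality=380):
    stock_n = sum(L >= stock for L in lengths_mm)
    return 100.0 * sum(L >= quality for L in lengths_mm) / stock_n

sample = [210, 265, 290, 310, 355, 390, 410, 455, 520]
print(f"PSD = {psd(sample):.0f}")   # 4 of 8 stock-length fish -> 50
```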
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as that of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is more particularly pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.
Tutorial for the structure elucidation of small molecules by means of the LSD software.
Nuzillard, Jean-Marc; Plainchont, Bertrand
2018-06-01
Automatic structure elucidation of small molecules by means of the "logic for structure elucidation" (LSD) software is introduced in the context of the automatic exploitation of chemical shift correlation data and with minimal input from chemical shift values. The first step in solving a structural problem by means of LSD is the extraction of pertinent data from the 1D and 2D spectra. This operation requires the labeling of the resonances and of their correlations; its reliability highly depends on the quality of the spectra. The combination of COSY, HSQC, and HMBC spectra results in proximity relationships between nonhydrogen atoms that are associated in order to build the possible solutions of a problem. A simple molecule, camphor, serves as an example for the writing of an LSD input file and to show how solution structures are obtained. An input file for LSD must contain a nonambiguous description of each atom, or atom status, which includes the chemical element symbol, the hybridization state, the number of bound hydrogen atoms and the formal electric charge. In case of atom status ambiguity, the pyLSD program performs clarification by systematically generating the status of the atoms. PyLSD also proposes the use of the nmrshiftdb algorithm in order to rank the solutions of a problem according to the quality of the fit between the experimental carbon-13 chemical shifts, and the ones predicted from the proposed structures. To conclude, some hints toward future uses and developments of computer-assisted structure elucidation by LSD are proposed.
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
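A hedged sketch of the batch supervised setting described above, using simulated DC measurements and an assumed sparse attack model rather than the paper's IEEE test systems:

```python
import numpy as np
from sklearn.svm import SVC
# Label simulated measurement vectors z = H x (+ sparse attack) + noise
# as secure (0) or attacked (1) and train a classifier on them.
rng = np.random.default_rng(0)
n_bus, n_meas = 4, 10
H = rng.standard_normal((n_meas, n_bus))     # assumed measurement matrix

def sample(attacked, n=200):
    X = (H @ rng.standard_normal((n_bus, n))).T
    X += 0.1 * rng.standard_normal((n, n_meas))              # sensor noise
    if attacked:                                             # sparse attack:
        mask = rng.random((n, n_meas)) < 0.3                 # ~30% of entries
        X += 0.8 * rng.standard_normal((n, n_meas)) * mask
    return X

X = np.vstack([sample(False), sample(True)])
y = np.r_[np.zeros(200), np.ones(200)]
idx = rng.permutation(400)
clf = SVC(kernel="rbf").fit(X[idx[:300]], y[idx[:300]])
print("test accuracy:", clf.score(X[idx[300:]], y[idx[300:]]))
```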
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower level problems are all uncoupled. For on-line coordination, a distinction is made between open loop feedback optimal coordination and closed loop optimal coordination.
Schilde, M; Doerner, K F; Hartl, R F
2014-10-01
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches.
Lee, Norman; Ward, Jessica L; Vélez, Alejandro; Micheyl, Christophe; Bee, Mark A
2017-03-06
Noise is a ubiquitous source of errors in all forms of communication [1]. Noise-induced errors in speech communication, for example, make it difficult for humans to converse in noisy social settings, a challenge aptly named the "cocktail party problem" [2]. Many nonhuman animals also communicate acoustically in noisy social groups and thus face biologically analogous problems [3]. However, we know little about how the perceptual systems of receivers are evolutionarily adapted to avoid the costs of noise-induced errors in communication. In this study of Cope's gray treefrog (Hyla chrysoscelis; Hylidae), we investigated whether receivers exploit a potential statistical regularity present in noisy acoustic scenes to reduce errors in signal recognition and discrimination. We developed an anatomical/physiological model of the peripheral auditory system to show that temporal correlation in amplitude fluctuations across the frequency spectrum ("comodulation") [4-6] is a feature of the noise generated by large breeding choruses of sexually advertising males. In four psychophysical experiments, we investigated whether females exploit comodulation in background noise to mitigate noise-induced errors in evolutionarily critical mate-choice decisions. Subjects experienced fewer errors in recognizing conspecific calls and in selecting the calls of high-quality mates in the presence of simulated chorus noise that was comodulated. These data show unequivocally, and for the first time, that exploiting statistical regularities present in noisy acoustic scenes is an important biological strategy for solving cocktail-party-like problems in nonhuman animal communication.
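The comodulation manipulation can be sketched as follows (band edges and envelope bandwidth are assumptions, not the stimulus parameters of the study): several band-limited noise carriers share one slow envelope in the comodulated condition but receive independent envelopes otherwise:

```python
import numpy as np
# Comodulated vs. uncomodulated multi-band noise.
rng = np.random.default_rng(0)
fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs

def bandnoise(lo, hi):
    # band-limit white noise by zeroing FFT bins outside [lo, hi] Hz
    spec = np.fft.rfft(rng.standard_normal(t.size))
    f = np.fft.rfftfreq(t.size, 1 / fs)
    spec[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(spec, t.size)

def envelope():
    e = bandnoise(0.1, 20.0)                 # slow amplitude fluctuations
    return (e - e.min()) / (e.max() - e.min())

bands = [(500, 1000), (1000, 2000), (2000, 4000)]
shared = envelope()
comodulated   = sum(bandnoise(*b) * shared for b in bands)
uncomodulated = sum(bandnoise(*b) * envelope() for b in bands)
# band envelopes are perfectly correlated only in the comodulated signal
```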
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results will demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption for multicast Cloud-RAN.
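The group-sparsity mechanism can be illustrated by the proximal operator of the weighted mixed l1/l2-norm, which shrinks entire per-RRH beamformer blocks to zero (a toy sketch with assumed groups and weights, not the paper's quadratic variational formulation):

```python
import numpy as np
# Group soft-thresholding: blocks whose l2 norm falls below the
# (weighted) threshold are zeroed, i.e. their RRHs are switched off.
def group_soft_threshold(v, groups, weights, tau):
    out = np.zeros_like(v)
    for g, w in zip(groups, weights):
        norm = np.linalg.norm(v[g])
        if norm > tau * w:                        # keep the RRH, shrink its block
            out[g] = (1 - tau * w / norm) * v[g]
    return out                                     # zero blocks = inactive RRHs

v = np.array([0.9, -1.1, 0.05, 0.02, 0.6, 0.4])
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]   # one block per RRH
print(group_soft_threshold(v, groups, weights=[1, 1, 1], tau=0.3))
```

Here the middle block is zeroed, which is exactly the switch-off signal the first stage of the algorithm extracts from the aggregated beamforming vector.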
NASA Astrophysics Data System (ADS)
Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique
2017-05-01
Optimisation in changing environments is a challenging research topic since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches that have addressed dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, and this is due to the difficulties associated with the control and measurement of such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next one in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to quickly adapt after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, and based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
NASA Astrophysics Data System (ADS)
Colombi, P.; Alessandri, I.; Bergese, P.; Federici, S.; Depero, L. E.
2009-08-01
In this paper, self-assembled polystyrene nanospheres are proposed as a shape-characterizer sample for SPM tips. Ordered arrays or 2D islands of polystyrene spheres may be prepared either by sedimentation or by crystallization of a colloidal sphere suspension. The self-assembly mechanism guarantees high reproducibility; thus the characterizer sample can be 'freshly' prepared at each use, avoiding the problem of deterioration with time and use and reducing the problem of sample structure fidelity that occurs when lithographic structures are employed. The spheres could also be deposited on the sample itself in order to speed up the characterization process in applications requiring frequent tip characterizations. We present numerical calculations of geometrical convoluted profiles on the proposed structures showing that, for a variety of different tip shapes, at the border between a pair of touching spheres the tip flanks do not come into contact with the spheres. Due to this behaviour, touching spheres are an optimum characterizer sample for SPM tip curvature radius characterization, enabling a straightforward procedure for calculating the curvature radius from the amplitude of tip oscillation along profiles connecting sphere centres. The new procedure for the characterization of SPM probes was assessed by exploiting different kinds of self-assembled structures and comparing the results to those obtained by spiked structures and SEM observations.
Simulated population responses of common carp to commercial exploitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Michael J.; Hennen, Matthew J.; Brown, Michael L.
2011-12-01
Common carp Cyprinus carpio is a widespread invasive species that can become highly abundant and impose deleterious ecosystem effects. Thus, aquatic resource managers are interested in controlling common carp populations. Control of invasive common carp populations is difficult, due in part to the inherent uncertainty of how populations respond to exploitation. To understand how common carp populations respond to exploitation, we evaluated common carp population dynamics (recruitment, growth, and mortality) in three natural lakes in eastern South Dakota. Common carp exhibited similar population dynamics across these three systems that were characterized by consistent recruitment (ages 3 to 15 years present), fast growth (K = 0.37 to 0.59), and low mortality (A = 1 to 7%). We then modeled the effects of commercial exploitation on size structure, abundance, and egg production to determine its utility as a management tool to control populations. All three populations responded similarly to exploitation simulations with a 575-mm length restriction, representing commercial gear selectivity. Simulated common carp size structure modestly declined (9 to 37%) in all simulations. Abundance of common carp declined dramatically (28 to 56%) at low levels of exploitation (0 to 20%) but exploitation >40% had little additive effect and populations were only reduced by 49 to 79% despite high exploitation (>90%). Maximum lifetime egg production was reduced from 77 to 89% at a moderate level of exploitation (40%), indicating the potential for recruitment overfishing. Exploitation further reduced common carp size structure, abundance, and egg production when simulations were not size selective. Our results provide insights to how common carp populations may respond to exploitation. Although commercial exploitation may be able to partially control populations, an integrated removal approach that removes all sizes of common carp has a greater chance of controlling population abundance and reducing perturbations induced by this invasive species.
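A rough sketch of the simulation logic (illustrative growth and mortality values, not the estimates from the study lakes) reproduces the qualitative finding that size-selective exploitation saturates:

```python
import numpy as np
# Age-structured cohort with von Bertalanffy growth and a 575-mm
# commercial length limit; all parameter values are assumptions.
Linf, K, t0 = 800.0, 0.25, 0.0
ages = np.arange(3, 16)                        # ages 3 to 15 present
length = Linf * (1 - np.exp(-K * (ages - t0)))
A = 0.05                                       # natural annual mortality

def cohort_abundance(u, recruits=1000.0):
    n, out = recruits, []
    for L in length:
        out.append(n)
        n *= (1 - A) * (1 - (u if L >= 575 else 0.0))  # size-selective harvest
    return np.array(out)

for u in (0.0, 0.2, 0.4, 0.9):
    n = cohort_abundance(u)
    print(f"exploitation {u:.0%}: total N = {n.sum():7.0f}, "
          f"N >= 575 mm = {n[length >= 575].sum():6.0f}")
```

Because fish below the length limit are untouched, raising the exploitation rate further mostly re-harvests the same vulnerable age classes, which is why the simulated declines level off at high exploitation.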
Colonialism in Modern America: The Appalachian Case.
ERIC Educational Resources Information Center
Lewis, Helen Matthews, Ed.; And Others
The essays in this book illustrate a conceptual model for analyzing the social and economic problems of the Appalachian region. The model is variously called Colonialism, Internal Colonialism, Exploitation, or External Oppression. It highlights the process through which dominant outside industrial interests establish control, exploit the region,…
Numerical algebraic geometry for model selection and its application to the life sciences
Gross, Elizabeth; Davis, Brent; Ho, Kenneth L.; Bates, Daniel J.
2016-01-01
Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology.
Vectorized program architectures for supercomputer-aided circuit design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzoli, V.; Ferlito, M.; Neri, A.
1986-01-01
Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of a vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the ''semantic'' vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.
Probabilistic Low-Rank Multitask Learning.
Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun
2018-03-01
In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task. We derive and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.
The structure and formation of natural categories
NASA Technical Reports Server (NTRS)
Fisher, Douglas; Langley, Pat
1990-01-01
Categorization and concept formation are critical activities of intelligence. These processes and the conceptual structures that support them raise important issues at the interface of cognitive psychology and artificial intelligence. The work presumes that advances in these and other areas are best facilitated by research methodologies that reward interdisciplinary interaction. In particular, a computational model of concept formation and categorization is described that exploits a rational analysis of basic level effects by Gluck and Corter. Their work provides a clean prescription of human category preferences that is adapted to the task of concept learning. Their analysis is also extended to account for typicality and fan effects, and we speculate on how the concept formation strategies might be extended to other facets of intelligence, such as problem solving.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Anomalous Diffraction in Crystallographic Phase Evaluation
Hendrickson, Wayne A.
2014-01-01
X-ray diffraction patterns from crystals of biological macromolecules contain sufficient information to define atomic structures, but atomic positions are inextricable without having electron-density images. Diffraction measurements provide amplitudes, but the computation of electron density also requires phases for the diffracted waves. The resonance phenomenon known as anomalous scattering offers a powerful solution to this phase problem. Exploiting scattering resonances from diverse elements, the methods of multiwavelength anomalous diffraction (MAD) and single-wavelength anomalous diffraction (SAD) now predominate for de novo determinations of atomic-level biological structures. This review describes the physical underpinnings of anomalous diffraction methods, the evolution of these methods to their current maturity, the elements, procedures and instrumentation used for effective implementation, and the realm of applications.
Efficient multitasking of Choleski matrix factorization on CRAY supercomputers
NASA Technical Reports Server (NTRS)
Overman, Andrea L.; Poole, Eugene L.
1991-01-01
A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
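The band exploitation can be sketched as follows (dense storage is used for clarity; a real variable-band scheme would also store only the entries inside each row's profile, which is an assumption this toy omits):

```python
import numpy as np
# Cholesky factorization that skips all work outside each row's band.
def band_cholesky(A, first_nonzero):
    """first_nonzero[i] = column of the first nonzero in row i of A."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        f = first_nonzero[j]
        L[j, j] = np.sqrt(A[j, j] - L[j, f:j] @ L[j, f:j])
        for i in range(j + 1, n):
            if first_nonzero[i] <= j:           # entry lies inside row i's band
                k = max(first_nonzero[i], f)    # skip zeros outside both profiles
                L[i, j] = (A[i, j] - L[i, k:j] @ L[j, k:j]) / L[j, j]
    return L

A = np.array([[4., 1., 0., 0.],
              [1., 5., 2., 0.],
              [0., 2., 6., 1.],
              [0., 0., 1., 3.]])
L = band_cholesky(A, first_nonzero=[0, 0, 1, 2])
print("L @ L.T == A:", np.allclose(L @ L.T, A))   # expected: True
```

The factorization fills in only within each row's profile, which is why a variable-band layout both bounds storage and keeps the inner products short and cache-friendly.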
Synchronization-insensitive video watermarking using structured noise pattern
NASA Astrophysics Data System (ADS)
Setyawan, Iwan; Kakes, Geerd; Lagendijk, Reginald L.
2002-04-01
For most watermarking methods, preserving the synchronization between the watermark embedded in a digital data (image, audio or video) and the watermark detector is critical to the success of the watermark detection process. Many digital watermarking attacks exploit this fact by disturbing the synchronization of the watermark and the watermark detector, and thus disabling proper watermark detection without having to actually remove the watermark from the data. Some techniques have been proposed in the literature to deal with this problem. Most of these techniques employ methods to reverse the distortion caused by the attack and then try to detect the watermark from the repaired data. In this paper, we propose a watermarking technique that is not sensitive to synchronization. This technique uses a structured noise pattern and embeds the watermark payload into the geometrical structure of the embedded pattern.
Exploiting Glide Symmetry in Planar EBG Structures
NASA Astrophysics Data System (ADS)
Mouris, Boules A.; Quevedo-Teruel, Oscar; Thobaben, Ragnar
2018-02-01
Periodic structures such as electromagnetic band gap (EBG) structures can be used to prevent the propagation of electromagnetic waves within a certain frequency range known as the stop band. One of the main limitations of using EBG structures at low frequencies is their relatively large size. In this paper, we investigate the possibility of using glide symmetry in planar EBG structures to reduce their size. Simulated results demonstrate that exploiting glide symmetry in EBG structures can lead to size reduction.
Coevolving memetic algorithms: a review and progress report.
Smith, Jim E
2007-02-01
Coevolving memetic algorithms are a family of metaheuristic search algorithms in which a rule-based representation of local search (LS) is coadapted alongside candidate solutions within a hybrid evolutionary system. Simple versions of these systems have been shown to outperform other nonadaptive memetic and evolutionary algorithms on a range of problems. This paper presents a rationale for such systems and places them in the context of other recent work on adaptive memetic algorithms. It then proposes a general structure within which a population of LS algorithms can be evolved in tandem with the solutions to which they are applied. Previous research started with a simple self-adaptive system before moving on to more complex models. Results showed that the algorithm was able to discover and exploit certain forms of structure and regularities within the problems. This "metalearning" of problem features provided a means of creating highly scalable algorithms. This work is briefly reviewed to highlight some of the important findings and behaviors exhibited. Based on this analysis, new results are then presented from systems with more flexible representations, which, again, show significant improvements. Finally, the current state of, and future directions for, research in this area is discussed.
Diffusion, decolonializing, and participatory action research.
Woodward, William R; Hetley, Richard S
2007-03-01
Miki Takasuna describes knowledge transfer between elite communities of scientists, a process by which ideas become structurally transformed in the host culture. By contrast, a process that we have termed knowledge transfer by deelitization occurs when (a) participatory action researchers work with a community to identify a problem involving oppression or exploitation. Then (b) community members suggest solutions and acquire the tools of analysis and action to pursue social actions. (c) Disadvantaged persons thereby become more aware of their own abilities and resources, and persons with special expertise become more effective. (d) Rather than detachment and value neutrality, this joint process involves advocacy and structural transformation. In the examples of participatory action research documented here, Third World social scientists collaborated with indigenous populations to solve problems of literacy, community-building, land ownership, and political voice. Western social scientists, inspired by these non-Western scientists, then joined in promoting PAR both in the Third World and in Europe and the Americas, e.g., adapting it for solving problems of people with disabilities or disenfranchised women. Emancipatory goals such as these may even help North American psychologists to break free of some methodological chains and to bring about social and political change.
Genetic Algorithms for Multiple-Choice Problems
NASA Astrophysics Data System (ADS)
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
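The indirect, decoder-based approach described above lends itself to a short sketch. In the hypothetical Python below, the chromosome is a permutation of items and a greedy decoder turns any permutation into a feasible assignment, so crossover can never produce infeasible solutions; the toy values, the decoder, and all parameters are assumptions for illustration only.

```python
import random

# Indirect GA sketch: evolve orderings, let a greedy decoder build solutions.
items = list(range(8))                  # e.g., shops to be placed
slots = list(range(8))                  # e.g., mall locations
value = [[random.randint(1, 9) for _ in slots] for _ in items]

def decode(perm):
    """Greedy decoder: take items in chromosome order, give each its best free slot."""
    free, assignment = set(slots), {}
    for item in perm:
        best = max(free, key=lambda s: value[item][s])
        assignment[item] = best
        free.remove(best)
    return assignment

def fitness(perm):
    return sum(value[i][s] for i, s in decode(perm).items())

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [x for x in p2 if x not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

pop = [random.sample(items, len(items)) for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [crossover(random.choice(parents), random.choice(parents))
                     for _ in range(20)]

print("best value:", max(fitness(p) for p in pop))
```

Because every chromosome decodes to a feasible assignment, the balance between feasibility and cost is handled entirely by the decoder rather than by penalty terms.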
ERIC Educational Resources Information Center
Maries, Alexandru; Singh, Chandralekha
2018-01-01
Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an…
What Does (and Doesn't) Make Analogical Problem Solving Easy? A Complexity-Theoretic Perspective
ERIC Educational Resources Information Center
Wareham, Todd; Evans, Patricia; van Rooij, Iris
2011-01-01
Solving new problems can be made easier if one can build on experiences with other problems one has already successfully solved. The ability to exploit earlier problem-solving experiences in solving new problems seems to require several cognitive sub-abilities. Minimally, one needs to be able to retrieve relevant knowledge of earlier solved…
Structure-Function Network Mapping and Its Assessment via Persistent Homology
2017-01-01
Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514x2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways. PMID:28046127
ATLAS, an integrated structural analysis and design system. Volume 1: ATLAS user's guide
NASA Technical Reports Server (NTRS)
Dreisbach, R. L. (Editor)
1979-01-01
Some of the many analytical capabilities provided by the ATLAS Version 4.0 System are described in the logical sequence in which model-definition data are prepared and the subsequent computer job is executed. The example data presented and the fundamental technical considerations that are highlighted can be used as guides during the problem solving process. This guide does not describe the details of the ATLAS capabilities, but introduces the new user to ATLAS at the level from which the complete array of capabilities described in the ATLAS User's Manual can be exploited fully.
The physician as perpetrator of abuse.
Kluft, R P
1993-06-01
Although the exploitation and abuse of patients is forbidden by every code of medical ethics, physicians are in a power position vis-a-vis their patients, and this power may be misused. The spectrum of abusive physician behaviors includes doctors functioning as agents of control, exploiting physicianly prerogatives, acting out personal problems in the medical setting, allowing subversion of their judgment, deliberately delivering suboptimal care, dehumanizing care, and sexually exploiting patients. Guidelines for the treatment of patients with such prior experiences are offered.
Fraley, Hannah E; Aronowitz, Teri
2017-10-01
Human trafficking is a global problem; more than half of all victims are children. In the United States (US), at-risk youth continue to attend school. School nurses are on the frontlines, presenting a window of opportunity to identify and prevent exploitation. Available papers targeting school nurses report that school nurses may lack awareness of commercial sexual exploitation and may have attitudes and misperceptions about behaviors of school children at risk. This is a theoretical paper applying the Peace and Power Conceptual Model to understand the role of school nurses in commercial sexual exploitation of children.
Solutions to an advanced functional partial differential equation of the pantograph type
Zaidi, Ali A.; Van Brunt, B.; Wake, G. C.
2015-01-01
A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained. PMID:26345391
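For orientation, a representative constant-coefficient form of such a size-structured growth-division model is sketched below; this is a common variant in the literature and may differ from the paper's exact coefficients and boundary data.

```latex
\[
\frac{\partial n}{\partial t}(x,t) + g\,\frac{\partial n}{\partial x}(x,t)
  = -(b+\mu)\,n(x,t) + \alpha^{2} b\, n(\alpha x, t), \qquad \alpha > 1,
\]
```

Here n(x,t) is the number density of cells of size x at time t, g the growth rate, b the division rate, and μ the death rate. The advanced argument αx (cells of size αx divide into α daughters of size x) is what gives the equation its pantograph character.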
Solving Constraint-Satisfaction Problems In Prolog Language
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1991-01-01
Technique for solution of constraint-satisfaction problems uses definite-clause grammars of Prolog computer language. Exploits fact that grammar-rule notation viewed as "state-change notation". Facilitates development of dynamic representation performing informed as well as blind searches. Applicable to design, scheduling, and planning problems.
Chemical Approaches for Structure and Function of RNA in Postgenomic Era
Ro-Choi, Tae Suk; Choi, Yong Chun
2012-01-01
In the study of cellular RNA chemistry, a major thrust of research focused on sequence determination for decades. Structures of snRNAs (4.5S RNA I (Alu), U1, U2, U3, U4, U5, and U6) were determined at Baylor College of Medicine, Houston, Tex., in the earlier, pregenomic era. They show novel modifications including base methylation, sugar methylation, 5′-cap structures (types 0-III) and sequence heterogeneity. This work opened up the exciting problem of posttranscriptional modification, which underwent numerous significant advances through technological revolutions during the pregenomic, genomic, and postgenomic eras. Presently, snRNA research is making progress in the enzymology of snRNA modifications, molecular evolution, the mechanism of spliceosome assembly, the chemical mechanism of intron removal, the high-order structure of snRNA in the spliceosome, and the pathology of splicing. These works are destined to converge on the final pathway, "Function and Structure of Spliceosome," in addition to exciting new exploitation of other noncoding RNAs in all aspects of regulatory functions. PMID:22347623
Dealing with Students' Plagiarism Pre-Emptively through Teaching Proper Information Exploitation
ERIC Educational Resources Information Center
Chankova, Mariya
2017-01-01
The present contribution looks into the much discussed issue of student plagiarism, which is conjectured to stem from problems with information searching and exploitation, underdeveloped exposition skills and difficulty in using sources, especially concerning quotations and references. The aim of the study is to determine how effective pre-emptive…
Little Adults: Child and Teenage Commercial Sexual Exploitation in Contemporary Brazilian Cinema
ERIC Educational Resources Information Center
da Silvia, Antonio Marcio
2016-01-01
This current study explores three contemporary Brazilian films' depiction of commercial sexual exploitation of young girls and teenagers. It points out how the young female characters cope with the abuses they suffer and proposes that these filmic representations of the characters' experiences expose a significant social problem of contemporary…
NASA Astrophysics Data System (ADS)
Zhao, Hui; Qiu, Weiting; Qu, Weilu
2018-02-01
The unpromising outlook for terrestrial oil resources has made the deep-sea oil industry an important development strategy. The South China Sea covers a vast area with widely distributed oil and gas resources, yet its exploration coverage and exploitation rates remain low. To address this problem, this article analyzes the geology, oil and gas exploration, and exploration equipment of the South China Sea and the Gulf of Mexico. By comparing the political environment of the Chinese and United States energy industries and the economic environment of their oil companies, the article points out problems that may exist in China's deep-sea oil exploration and production. Finally, recommendations on the feasibility of oil exploration and exploitation in the South China Sea are put forward, providing a reference for improving the conditions of oil exploration in the South China Sea and promoting the stable development of China's oil industry.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
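A compact NumPy sketch of the alternating scheme is given below: the ranking scores have a closed-form update for fixed graph weights, and the weights have a closed-form simplex update for fixed scores. The objective, the exponent r on the weights (which prevents the degenerate selection of a single graph), and the random stand-in graphs are assumptions capturing the flavor of multiple-graph regularization, not the paper's exact formulation.

```python
import numpy as np

# Multiple-graph regularized ranking by alternating minimization (a sketch):
#   min_{f, mu}  sum_k mu_k^r * f' L_k f + (1/gamma) ||f - y||^2,  mu on simplex.
rng = np.random.default_rng(0)
n, K, r, gamma = 50, 3, 2.0, 1.0

Ls = []
for _ in range(K):                              # random graphs standing in for
    W = rng.random((n, n)); W = (W + W.T) / 2   # different graph models
    Ls.append(np.diag(W.sum(1)) - W)            # graph Laplacian

y = np.zeros(n); y[0] = 1.0                     # query indicator vector
mu = np.full(K, 1.0 / K)

for it in range(20):
    # f-step: closed form for fixed graph weights.
    L = sum((m ** r) * Lk for m, Lk in zip(mu, Ls))
    f = np.linalg.solve(np.eye(n) + gamma * L, y)
    # mu-step: minimizing sum_k mu_k^r s_k over the simplex gives
    # mu_k proportional to s_k^(1/(1-r)).
    s = np.array([f @ Lk @ f for Lk in Ls])
    mu = (1.0 / s) ** (1.0 / (r - 1))
    mu /= mu.sum()

print("top-ranked items:", np.argsort(-f)[:5])
```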
Schilde, M.; Doerner, K.F.; Hartl, R.F.
2014-01-01
In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches. PMID:25844013
Shen, Rong; Han, Wei; Fiorin, Giacomo; Islam, Shahidul M; Schulten, Klaus; Roux, Benoît
2015-10-01
The knowledge of multiple conformational states is a prerequisite to understand the function of membrane transport proteins. Unfortunately, the determination of detailed atomic structures for all these functionally important conformational states with conventional high-resolution approaches is often difficult and unsuccessful. In some cases, biophysical and biochemical approaches can provide important complementary structural information that can be exploited with the help of advanced computational methods to derive structural models of specific conformational states. In particular, functional and spectroscopic measurements in combination with site-directed mutations constitute one important source of information to obtain these mixed-resolution structural models. A very common problem with this strategy, however, is the difficulty to simultaneously integrate all the information from multiple independent experiments involving different mutations or chemical labels to derive a unique structural model consistent with the data. To resolve this issue, a novel restrained molecular dynamics structural refinement method is developed to simultaneously incorporate multiple experimentally determined constraints (e.g., engineered metal bridges or spin-labels), each treated as an individual molecular fragment with all atomic details. The internal structure of each of the molecular fragments is treated realistically, while there is no interaction between different molecular fragments to avoid unphysical steric clashes. The information from all the molecular fragments is exploited simultaneously to constrain the backbone to refine a three-dimensional model of the conformational state of the protein. The method is illustrated by refining the structure of the voltage-sensing domain (VSD) of the Kv1.2 potassium channel in the resting state and by exploring the distance histograms between spin-labels attached to T4 lysozyme. The resulting VSD structures are in good agreement with the consensus model of the resting state VSD and the spin-spin distance histograms from ESR/DEER experiments on T4 lysozyme are accurately reproduced.
Action-based language: a theory of language acquisition, comprehension, and production.
Glenberg, Arthur M; Gallese, Vittorio
2012-07-01
Evolution and the brain have done a marvelous job solving many tricky problems in action control, including problems of learning, hierarchical control over serial behavior, continuous recalibration, and fluency in the face of slow feedback. Given that evolution tends to be conservative, it should not be surprising that these solutions are exploited to solve other tricky problems, such as the design of a communication system. We propose that a mechanism of motor control, paired controller/predictor models, has been exploited for language learning, comprehension, and production. Our account addresses the development of grammatical regularities and perspective, as well as how linguistic symbols become meaningful through grounding in perception, action, and emotional systems. Copyright © 2011 Elsevier Srl. All rights reserved.
Improving immunization of programmable logic controllers using weighted median filters.
Paredes, José L; Díaz, Dhionel
2005-04-01
This paper addresses the problem of improving immunization of programmable logic controllers (PLC's) to electromagnetic interference with impulsive characteristics. A filtering structure, based on weighted median filters, that does not require additional hardware and can be implemented in legacy PLC's is proposed. The filtering operation is implemented in the binary domain and removes the impulsive noise present in the discrete input, thus adding robustness to PLC's. By modifying the sampling clock structure, two variants of the filter are obtained. Both structures exploit the cyclic nature of the PLC to form an N-sample observation window of the discrete input, so that a status change is determined by the filter output taking all N samples into account, thus preventing a single impulse from affecting the PLC functionality. A comparative study, based on a statistical analysis, of the different filters' performances is presented.
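A minimal sketch of the binary-domain filtering idea follows; the window length, weights, and test signal are illustrative assumptions rather than the paper's configuration.

```python
from collections import deque

# Each scan cycle pushes the raw discrete input into an N-sample window; the
# reported input is the weighted median of the window, so no single noise
# impulse can flip the PLC's view of the signal.
class MedianFilteredInput:
    def __init__(self, n=5, weights=None):
        self.window = deque([0] * n, maxlen=n)
        self.weights = weights or [1] * n    # uniform weights = plain median

    def update(self, raw_bit):
        """Call once per PLC scan cycle with the raw discrete input bit."""
        self.window.append(raw_bit)
        # For binary samples, the weighted median reduces to comparing the
        # total weight of the 1-samples against half of the overall weight.
        ones = sum(w for b, w in zip(self.window, self.weights) if b)
        return 1 if 2 * ones > sum(self.weights) else 0

f = MedianFilteredInput(n=5)
signal = [0, 0, 1, 0, 0, 1, 1, 1, 1, 1]   # impulse at t=2, real edge at t=5
print([f.update(b) for b in signal])       # impulse suppressed; edge passes (delayed)
```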
Global/local stress analysis of composite structures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
1989-01-01
A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir
2016-01-01
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
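Of the two preconditioning strategies described, the diagonal one is simple enough to sketch. Below is a minimal NumPy preconditioned conjugate gradient; a random symmetric positive definite matrix stands in for a stiffness matrix, and the variable-band and sparse storage formats are beyond the scope of a short sketch.

```python
import numpy as np

def pcg(K, f, tol=1e-10, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for SPD systems K u = f."""
    M_inv = 1.0 / np.diag(K)          # diagonal (Jacobi) preconditioner
    u = np.zeros_like(f)
    r = f - K @ u
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u

rng = np.random.default_rng(1)
A = rng.random((100, 100))
K = A @ A.T + 100 * np.eye(100)       # SPD stand-in for a stiffness matrix
f = rng.random(100)
u = pcg(K, f)
print("residual norm:", np.linalg.norm(K @ u - f))
```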
Multiscale Structure of UXO Site Characterization: Spatial Estimation and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostrouchov, George; Doll, William E.; Beard, Les P.
2009-01-01
Unexploded ordnance (UXO) site characterization must consider both how the contamination is generated and how we observe that contamination. Within the generation and observation processes, dependence structures can be exploited at multiple scales. We describe a conceptual site characterization process, the dependence structures available at several scales, and consider their statistical estimation aspects. It is evident that most of the statistical methods that are needed to address the estimation problems are known but their application-specific implementation may not be available. We demonstrate estimation at one scale and propose a representation for site contamination intensity that takes full account of uncertainty, is flexible enough to answer regulatory requirements, and is a practical tool for managing detailed spatial site characterization and remediation. The representation is based on point process spatial estimation methods that require modern computational resources for practical application. These methods have provisions for including prior and covariate information.
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called ω-CDBT, which shares the merits and overcomes the weaknesses of both decomposition and search approaches.
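To illustrate how structural properties of the constraint graph can guide search, here is a minimal Python backtracking solver that branches on the most-connected variable first. This is a generic sketch of graph-guided backtracking under assumed binary constraints, not the ω-CDBT algorithm itself.

```python
# Minimal backtracking solver for binary CSPs with a structure-exploiting
# variable ordering: branch on the variable of highest constraint-graph degree.

def solve(domains, neighbors, constraint, assignment=None):
    """domains: var -> list of values; neighbors: var -> set of vars;
    constraint(x, vx, y, vy) -> bool for every constrained pair {x, y}."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = max((v for v in domains if v not in assignment),
              key=lambda v: len(neighbors[v]))
    for value in domains[var]:
        if all(constraint(var, value, n, assignment[n])
               for n in neighbors[var] if n in assignment):
            assignment[var] = value
            result = solve(domains, neighbors, constraint, assignment)
            if result:
                return result
            del assignment[var]      # backtrack
    return None

# Example: 3-coloring a small graph (adjacent vertices get different colors).
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
neighbors = {v: set() for e in edges for v in e}
for a, b in edges:
    neighbors[a].add(b); neighbors[b].add(a)
domains = {v: ["r", "g", "b"] for v in neighbors}
print(solve(domains, neighbors, lambda x, vx, y, vy: vx != vy))
```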
Oborn, Eivor; Barrett, Michael; Prince, Karl; Racko, Girts
2013-09-05
Translating knowledge from research into clinical practice has emerged as a practice of increasing importance. This has led to the creation of new organizational entities designed to bridge knowledge between research and practice. Within the UK, the Collaborations for Leadership in Applied Health Research and Care (CLAHRC) have been introduced to ensure that emphasis is placed on ensuring research is more effectively translated and implemented in clinical practice. Knowledge translation (KT) can be accomplished in various ways and is affected by the structures, activities, and coordination practices of organizations. We draw on concepts in the innovation literature, namely exploration, exploitation, and ambidexterity, to examine these structures and activities as well as the ensuing tensions between research and implementation. Using a qualitative research approach, the study was based on 106 semi-structured, in-depth interviews with the directors, theme leads and managers, and key professionals involved in research and implementation in nine CLAHRCs. Data were also collected from intensive focus group workshops. In this article we develop five archetypes for organizing KT. The results show how the various CLAHRC entities work through partnerships to create explorative research and deliver exploitative implementation. The different archetypes highlight a range of structures that can achieve ambidextrous balance as they organize activity and coordinate practice on a continuum of exploration and exploitation. This work suggests that KT entities aim to reach their goals through a balance between exploration and exploitation in the support of generating new research and ensuring knowledge implementation. We highlight different organizational archetypes that support various ways to maintain ambidexterity, where both exploration and exploitation are supported in an attempt to narrow the knowledge gaps. The KT entity archetypes offer insights on strategies in structuring collaboration to facilitate an effective balance of exploration and exploitation learning in the KT process.
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
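A textbook instance of the bounded-search-tree technique covered by such surveys is k-Vertex Cover, sketched below in Python: every edge must be covered by one of its two endpoints, so branching on those endpoints yields a search tree with at most 2^k leaves, independent of the graph size. The example instance is illustrative.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None                    # edges remain but the budget is spent
    u, v = edges[0]                    # this edge must be covered by u or v
    for pick in (u, v):                # branch: at most 2^k leaves
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(vertex_cover(edges, 3))          # a cover of size 3, e.g. {1, 2, 4}
print(vertex_cover(edges, 2))          # None: no cover of size 2 exists
```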
Perchance to Dream: Pathology, Pharmacology, and Politics in a 24-Hour Economy.
Brassington, Iain
2018-04-01
The lack of sleep is a significant problem in the modern world. The structure of the economy means that 24 hour working is required from some of us, sometimes because we are expected to be able to respond to share-price fluctuations on the other side of the planet, sometimes because we are expected to serve kebabs to people leaving nightclubs, and sometimes because lives depend on it. The immediate effect is that we feel groggy; but there may be much more sinister long-term effects of persistent sleep deprivation and disruption, the evidence for which is significant, and worth taking seriously. If sleeplessness has a serious impact on health, it represents a notable public health problem. In this article, I sketch that problem, and look at how exploiting the pharmacopoeia (or a possible future pharmacopoeia) might allow us to tackle it. I also suggest that using drugs to mitigate or militate against sleeplessness is potentially morally and politically fraught, with implications for social justice. Hence, whatever reasons we have to use drugs to deal with the problems of sleeplessness, we ought to be careful.
Optimal design of earth-moving machine elements with cusp catastrophe theory application
NASA Astrophysics Data System (ADS)
Pitukhin, A. V.; Skobtsov, I. G.
2017-10-01
This paper deals with the solution of the optimal design problem for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of catastrophe theory. The first part of the paper gives a brief description of catastrophe theory, considers the cusp catastrophe, and treats the control parameters as Gaussian stochastic quantities. The statement of the optimal design problem is given in the second part of the paper. It includes the choice of the objective function and independent design variables and the establishment of system limits. The objective function is defined as the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. An algorithm of the random search method with interval reduction subject to side and functional constraints is given in the last part of the paper. The proposed approach to the optimal design problem can be applied to choose rational ROPS parameters, which will increase safety and reduce production and operating expenses.
Multitriangulations, pseudotriangulations and some problems of realization of polytopes
NASA Astrophysics Data System (ADS)
Pilaud, Vincent
2010-09-01
This thesis explores two specific topics of discrete geometry, multitriangulations and polytopal realizations of products, whose common thread is the problem of finding polytopal realizations of a given combinatorial structure. A k-triangulation is a maximal set of chords of the convex n-gon such that no k+1 of them mutually cross. We propose a combinatorial and geometric study of multitriangulations based on their stars, which play the same role as the triangles of triangulations. This study leads to interpreting multitriangulations, by duality, as pseudoline arrangements with contact points covering a given support. We finally exploit these results to discuss some open problems on multitriangulations, in particular the question of the polytopal realization of their flip graphs. Secondly, we study the polytopality of Cartesian products. We investigate the existence of polytopal realizations of Cartesian products of graphs, and we study the minimal dimension of a polytope whose k-skeleton is that of a product of simplices.
Generalized SMO algorithm for SVM-based multitask learning.
Cai, Feng; Cherkassky, Vladimir
2012-06-01
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.
Exploiting Lipid Permutation Symmetry to Compute Membrane Remodeling Free Energies.
Bubnis, Greg; Risselada, Herre Jelger; Grubmüller, Helmut
2016-10-28
A complete physical description of membrane remodeling processes, such as fusion or fission, requires knowledge of the underlying free energy landscapes, particularly in barrier regions involving collective shape changes, topological transitions, and high curvature, where Canham-Helfrich (CH) continuum descriptions may fail. To calculate these free energies using atomistic simulations, one must address not only the sampling problem due to high free energy barriers, but also an orthogonal sampling problem of combinatorial complexity stemming from the permutation symmetry of identical lipids. Here, we solve the combinatorial problem with a permutation reduction scheme to map a structural ensemble into a compact, nondegenerate subregion of configuration space, thereby permitting straightforward free energy calculations via umbrella sampling. We applied this approach, using a coarse-grained lipid model, to test the CH description of bending and found sharp increases in the bending modulus for curvature radii below 10 nm. These deviations suggest that an anharmonic bending term may be required for CH models to give quantitative energetics of highly curved states.
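The combinatorial step can be sketched in a few lines: relabel the identical particles of each frame by an optimal assignment to a fixed reference frame, which collapses the M! permutation-equivalent copies of a configuration to one canonical representative. The minimum-squared-distance matching below (via the Hungarian algorithm in SciPy) is a natural choice, but the details are assumptions rather than the paper's exact scheme.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def permutation_reduce(frame, reference):
    """Relabel rows of `frame` (M x 3) to best match `reference` (M x 3)."""
    # cost[i, j] = squared distance between reference particle i and frame particle j
    cost = ((reference[:, None, :] - frame[None, :, :]) ** 2).sum(-1)
    _, col = linear_sum_assignment(cost)   # optimal one-to-one matching
    return frame[col]                      # same configuration, canonical labels

rng = np.random.default_rng(0)
reference = rng.random((20, 3))
# A frame that is a permuted, slightly perturbed copy of the reference:
frame = reference[rng.permutation(20)] + 0.01 * rng.random((20, 3))
canonical = permutation_reduce(frame, reference)
print(np.abs(canonical - reference).max())   # small: labels recovered
```

With every frame mapped to this nondegenerate labeling, a collective variable such as an umbrella-sampling coordinate becomes a single-valued function of the configuration.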
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Nagarajaiah, Satish
2016-06-01
Randomly missing data in structural vibration response time histories often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Besides, to alleviate the wireless data sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is an ill-posed inverse problem, however. This paper explicitly harnesses the structure of the data itself, that is, of the structural vibration responses, to address this inverse problem. What is relevant is an empirical, but often practically true, observation: typically only a few modes are active in the structural vibration responses; hence a sparse representation (in the frequency domain) of the single-channel data vector, or a low-rank structure (by singular value decomposition) of the multi-channel data matrix. Exploiting such prior knowledge of the data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on a few structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
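The low-rank, inter-channel route can be sketched with an iterative SVD soft-thresholding scheme in the SoftImpute family, shown below; the synthetic three-mode, eight-channel data, the threshold, and the iteration count are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def soft_impute(X, mask, tau=5.0, n_iter=200):
    """Nuclear-norm-regularized completion: X has zeros where mask is False."""
    Z = np.zeros_like(X)
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)               # keep observed entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - tau, 0.0)                # soft-threshold singular values
        Z = (U * s) @ Vt
    return Z

# Few active modes -> low-rank multi-channel response matrix.
rng = np.random.default_rng(0)
t = np.arange(500) / 100.0
modes = np.stack([np.sin(2 * np.pi * f * t) for f in (1.0, 2.5, 4.0)])
X_true = rng.random((8, 3)) @ modes                  # 8 channels, rank 3
mask = rng.random(X_true.shape) > 0.5                # 50% randomly missing
Z = soft_impute(np.where(mask, X_true, 0.0), mask)
print("relative error:", np.linalg.norm(Z - X_true) / np.linalg.norm(X_true))
```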
Spreng, R. Nathan; Cassidy, Benjamin N; Darboh, Bri S; DuPre, Elizabeth; Lockrow, Amber W; Setton, Roni; Turner, Gary R
2017-01-01
Background: Age-related brain changes leading to altered socioemotional functioning may increase vulnerability to financial exploitation. If confirmed, this would suggest a novel mechanism leading to heightened financial exploitation risk in older adults. Development of predictive neural markers could facilitate increased vigilance and prevention. In this preliminary study, we sought to identify structural and functional brain differences associated with financial exploitation in older adults. Methods: Financially exploited older adults (n = 13, 7 female) and a matched cohort of older adults who had been exposed to, but avoided, a potentially exploitative situation (n = 13, 7 female) were evaluated. Using magnetic resonance imaging, we examined cortical thickness and resting state functional connectivity. Behavioral data were collected using standardized cognitive assessments and self-report measures of mood and social functioning. Results: The exploited group showed cortical thinning in anterior insula and posterior superior temporal cortices, regions associated with processing affective and social information, respectively. Functional connectivity encompassing these regions, within default and salience networks, was reduced, while between-network connectivity was increased. Self-reported anger and hostility was higher for the exploited group. Conclusions: We observed financial exploitation associated with brain differences in regions involved in socioemotional functioning. These exploratory and preliminary findings suggest that alterations in brain regions implicated in socioemotional functioning may be a marker of financial exploitation risk. Large-scale, prospective studies are necessary to validate this neural mechanism, and develop predictive markers for use in clinical practice. PMID:28369260
NASA Astrophysics Data System (ADS)
Danczyk, Jennifer; Wollocko, Arthur; Farry, Michael; Voshell, Martin
2016-05-01
Data collection processes supporting Intelligence, Surveillance, and Reconnaissance (ISR) missions have recently undergone a technological transition driven by investment in sensor platforms. Various agencies have made these investments to increase the resolution, duration, and quality of data collection, to provide more relevant and recent data to warfighters. However, while sensor improvements have increased the volume of high-resolution data, they often fail to improve situational awareness and actionable intelligence for the warfighter, because the enterprise lacks efficient Processing, Exploitation, and Dissemination (PED) and filtering methods for mission-relevant information needs. The volume of collected ISR data often overwhelms manual and automated processes in modern analysis enterprises, resulting in underexploited data and insufficient or absent answers to information requests. The outcome is a significant breakdown in the analytical workflow. To cope with this data overload, many intelligence organizations have sought to reorganize their general staffing requirements and workflows to enhance team communication and coordination, with hopes of exploiting as much high-value data as possible and understanding the value of actionable intelligence well before its relevance has passed. Through this effort we have taken a scholarly approach to this problem by studying the evolution of PED, with a specific focus on the Army's most recent evolutions, using the Functional Resonance Analysis Method. This method investigates socio-technical processes by analyzing their intended functions and aspects to determine performance variabilities. Gaps are identified, and recommendations about force structure and future R&D priorities to increase the throughput of the intelligence enterprise are discussed.
Exploiting Elementary Landscapes for TSP, Vehicle Routing and Scheduling
2015-09-03
The Traveling Salesman Problem (TSP) and Graph Coloring are elementary. Problems such as MAX-kSAT are a superposition of k elementary landscapes. This... search space. Problems such as the Traveling Salesman Problem (TSP), Graph Coloring, the Frequency Assignment Problem, as well as Min-Cut and Max-Cut... echoing our earlier results on the Traveling Salesman Problem. Using two locally optimal solutions as "parent" solutions, we have developed a...
Homogenization models for 2-D grid structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Cioranescu, D.; Rebnord, D. A.
1992-01-01
In the past several years, we have pursued efforts related to the development of accurate models for the dynamics of flexible structures made of composite materials. Rather than viewing periodicity and sparseness as obstacles to be overcome, we exploit them to our advantage. We consider a variational problem on a domain that has large, periodically distributed holes. Using homogenization techniques we show that the solution to this problem is in some topology 'close' to the solution of a similar problem that holds on a much simpler domain. We study the behavior of the solution of the variational problem as the holes increase in number, but decrease in size in such a way that the total amount of material remains constant. The result is an equation that is in general more complex, but with a domain that is simply connected rather than perforated. We study the limit of the solution as the amount of material goes to zero. This second limit will, in most cases, retrieve much of the simplicity that was lost in the first limit without sacrificing the simplicity of the domain. Finally, we show that these results can be applied to the case of a vibrating Love-Kirchhoff plate with Kelvin-Voigt damping. We rely heavily on earlier results of (Du), (CS) for the static, undamped Love-Kirchhoff equation. Our efforts here result in a modification of those results to include both time dependence and Kelvin-Voigt damping.
ERIC Educational Resources Information Center
O'Callaghan, Paul; McMullen, John; Shannon, Ciaran; Rafferty, Harry; Black, Alastair
2013-01-01
Objective: To assess the efficacy of trauma-focused cognitive behavioral therapy (TF-CBT) delivered by nonclinical facilitators in reducing posttraumatic stress, depression, and anxiety and conduct problems and increasing prosocial behavior in a group of war-affected, sexually exploited girls in a single-blind, parallel-design, randomized,…
Structural Identifiability of Dynamic Systems Biology Models
Villaverde, Alejandro F.
2016-01-01
A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
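The core idea, identifiability tested as extended observability through Lie derivatives, can be demonstrated on a toy model with SymPy. The sketch below treats parameters as constant states and checks the rank of the stacked Lie-derivative Jacobians; the two-parameter model is an illustrative assumption, and the code is not the STRIKE-GOLDD toolbox itself.

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
states = sp.Matrix([x, a, b])        # state plus parameters-as-states
f = sp.Matrix([-a * x, 0, 0])        # dynamics: dx/dt = -a*x, da/dt = db/dt = 0
h = sp.Matrix([b * x])               # output: y = b*x

rows, Lh = [], h
for _ in range(len(states)):         # gradients of h, L_f h, L_f^2 h, ...
    J = Lh.jacobian(states)
    rows.append(J)
    Lh = J * f                       # next Lie derivative along f

O = sp.Matrix.vstack(*rows)
print("rank:", O.rank(), "of", len(states))
# Rank 2 < 3: the output only pins down a and the product b*x, so a is
# structurally identifiable while b and x(0) are not separately identifiable.
```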
NASA Astrophysics Data System (ADS)
Li, Dongni; Guo, Rongtao; Zhan, Rongxin; Yin, Yong
2018-06-01
In this article, an innovative artificial bee colony (IABC) algorithm is proposed, which incorporates two mechanisms. On the one hand, to provide the evolutionary process with a higher starting level, genetic programming (GP) is used to generate heuristic rules by exploiting the elements that constitute the problem. On the other hand, to achieve a better balance between exploration and exploitation, a leading mechanism is proposed to attract individuals towards a promising region. To evaluate the performance of IABC in solving practical and complex problems, it is applied to the intercell scheduling problem with limited transportation capacity. It is observed that the GP-generated rules incorporate the elements of the most competing human-designed rules, and they are more effective than the human-designed ones. Regarding the leading mechanism, the strategies of the ageing leader and multiple challengers make the algorithm less likely to be trapped in local optima.
Robust automatic line scratch detection in films.
Newson, Alasdair; Almansa, Andrés; Gousseau, Yann; Pérez, Patrick
2014-03-01
Line scratch detection in old films is a particularly challenging problem due to the variable spatiotemporal characteristics of this defect. Some of the main problems include sensitivity to noise and texture, and false detections due to thin vertical structures belonging to the scene. We propose a robust and automatic algorithm for frame-by-frame line scratch detection in old films, as well as a temporal algorithm for the filtering of false detections. In the frame-by-frame algorithm, we relax some of the hypotheses used in previous algorithms in order to detect a wider variety of scratches. This step's robustness and lack of external parameters are ensured by the combined use of an a contrario methodology and local statistical estimation. In this manner, over-detection in textured or cluttered areas is greatly reduced. The temporal filtering algorithm eliminates false detections due to thin vertical structures by exploiting the coherence of their motion with that of the underlying scene. Experiments demonstrate the ability of the resulting detection procedure to deal with difficult situations, in particular in the presence of noise, texture, and slanted or partial scratches. Comparisons show significant advantages over previous work.
Hierarchical Matching and Regression with Application to Photometric Redshift Estimation
NASA Astrophysics Data System (ADS)
Murtagh, Fionn
2017-06-01
This work emphasizes that heterogeneity, diversity, discontinuity, and discreteness in data are to be exploited in classification and regression problems. A global a priori model may not be desirable. For data analytics in cosmology, this is motivated by the variety of cosmological objects such as elliptical, spiral, active, and merging galaxies at a wide range of redshifts. Our aim is matching and similarity-based analytics that takes account of discrete relationships in the data. The information structure of the data is represented by a hierarchy or tree where the branch structure, rather than just the proximity, is important. The representation is related to p-adic number theory. The clustering or binning of the data values, related to the precision of the measurements, has a central role in this methodology. If used for regression, our approach is a method of cluster-wise regression, generalizing nearest neighbour regression. Both to exemplify this analytics approach, and to demonstrate computational benefits, we address the well-known photometric redshift or 'photo-z' problem, seeking to match Sloan Digital Sky Survey (SDSS) spectroscopic and photometric redshifts.
Contact replacement for NMR resonance assignment.
Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris
2008-07-01
Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 Å), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy, above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
Xu, Yan; Wang, Yining; Sun, Jian-Tao; Zhang, Jianwen; Tsujii, Junichi; Chang, Eric
2013-01-01
To build large collections of medical terms from semi-structured information sources (e.g. tables, lists, etc.) and encyclopedia sites on the web. The terms are classified into the three semantic categories, Medical Problems, Medications, and Medical Tests, which were used in i2b2 challenge tasks. We developed two systems, one for Chinese and another for English terms. The two systems share the same methodology and use the same software with minimum language dependent parts. We produced large collections of terms by exploiting billions of semi-structured information sources and encyclopedia sites on the Web. The standard performance metric of recall (R) is extended to three different types of Recall to take the surface variability of terms into consideration. They are Surface Recall (R(S)), Object Recall (R(O)), and Surface Head recall (R(H)). We use two test sets for Chinese. For English, we use a collection of terms in the 2010 i2b2 text. Two collections of terms, one for English and the other for Chinese, have been created. The terms in these collections are classified as either of Medical Problems, Medications, or Medical Tests in the i2b2 challenge tasks. The English collection contains 49,249 (Problems), 89,591 (Medications) and 25,107 (Tests) terms, while the Chinese one contains 66,780 (Problems), 101,025 (Medications), and 15,032 (Tests) terms. The proposed method of constructing a large collection of medical terms is both efficient and effective, and, most of all, independent of language. The collections will be made publicly available.
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
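A minimal sketch of the decoupling, under stated assumptions: the forward model is a plain matrix G, the Tikhonov-regularized subproblem is solved with conjugate gradients, and skimage's Chambolle TV denoiser stands in for the authors' split-Bregman L2-TV solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from skimage.restoration import denoise_tv_chambolle

def mtv_tomography(G, t_obs, shape, lam=1.0, mu=0.1, n_outer=10):
    """Alternate (1) a Tikhonov-style regularized least-squares update and
    (2) an L2-TV proximal step (Chambolle denoiser standing in for
    split-Bregman). G: rays-by-cells forward matrix, t_obs: traveltimes."""
    m = np.zeros(G.shape[1])                # slowness model, flattened
    z = m.copy()                            # TV-regularized auxiliary model
    A = LinearOperator((G.shape[1],) * 2,
                       matvec=lambda x: G.T @ (G @ x) + lam * x)
    for _ in range(n_outer):
        # Subproblem 1: min_m ||G m - t||^2 + lam ||m - z||^2
        m, _ = cg(A, G.T @ t_obs + lam * z, x0=m, maxiter=50)
        # Subproblem 2: min_z lam ||z - m||^2 + mu TV(z)
        z = denoise_tv_chambolle(m.reshape(shape), weight=mu / lam).ravel()
    return z.reshape(shape)
```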
Multi-hazard risk assessment applied to hydraulic fracturing operations
NASA Astrophysics Data System (ADS)
Garcia-Aristizabal, Alexander; Gasparini, Paolo; Russo, Raffaella; Capuano, Paolo
2017-04-01
Without exception, the exploitation of any energy resource produces impacts and intrinsically bears risks. Therefore, to make sound decisions about future energy resource exploitation, it is important to clearly understand the potential environmental impacts over the full life-cycle of an energy development project, distinguishing between the impacts intrinsically related to exploiting a given energy resource and those shared with the exploitation of other energy resources. Technological advances such as directional drilling and hydraulic fracturing have led to a rapid expansion of unconventional resources (UR) exploration and exploitation; as a consequence, both public health and environmental concerns have risen. The main objective of a multi-hazard risk assessment applied to the development of UR is to assess the rate (or the likelihood) of occurrence of incidents and their potential impacts on the surrounding environment, considering different hazards and their interactions. Such analyses have to be performed considering the different stages of development of a project; however, the discussion in this paper focuses mainly on the hydraulic fracturing stage of a UR development project. The multi-hazard risk assessment applied to the development of UR poses a number of challenges, making this a particularly complex problem. First, a number of external hazards might be considered as potential triggering mechanisms; such hazards can be either of natural origin or anthropogenic events caused by the industrial activities themselves. Second, failures might propagate through the industrial elements, leading to complex scenarios according to the layout of the industrial site. Third, there are a number of potential risk receptors, ranging from environmental elements (such as air, soil, surface water, or groundwater) to local communities and ecosystems. The multi-hazard risk approach for this problem considers multiple hazards (and their possible interactions) as possible sources of perturbation of the system that might lead to an incidental event. Given the complexity of the problem, we adopt a multi-level approach. First, a qualitative analysis is performed to identify a wide range of possible scenarios; this process is based on a review of potential impacts on different risk receptors reported in the literature, condensed into a number of causal diagrams created for the different stages of a UR development project. Second, the most important scenarios are selected for quantitative multi-hazard risk analysis; this selection is based on the identification of major risks, i.e., those related to the occurrence of low-probability/high-impact extreme events. The general framework for the quantitative multi-hazard risk analysis is represented using a so-called bow-tie structure: a fault tree on the left-hand side identifies the possible events causing the critical (or top) event, and an event tree on the right-hand side shows the possible consequences of the critical event. This work was supported under SHEER: "Shale Gas Exploration and Exploitation Induced Risks" project n. 640896, funded by the Horizon 2020 R&I Framework Programme, call H2020-LCE-2014-1.
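To make the bow-tie quantification concrete, here is a toy sketch. The gate structure and all probabilities are invented for illustration and are not SHEER results.

```python
# Minimal bow-tie quantification sketch (illustrative numbers only).
def or_gate(*p):   # at least one cause occurs (independent events)
    out = 1.0
    for pi in p:
        out *= (1.0 - pi)
    return 1.0 - out

def and_gate(*p):  # all causes must occur
    out = 1.0
    for pi in p:
        out *= pi
    return out

# Fault tree: the top event is a well-integrity loss (casing failure AND
# cement failure) OR an operational surface spill.
p_top = or_gate(and_gate(1e-2, 5e-2), 1e-3)

# Event tree: split the top event over consequence branches.
branches = {"contained on site": 0.90,
            "soil contamination": 0.08,
            "groundwater contamination": 0.02}
scenario_probs = {k: p_top * v for k, v in branches.items()}
print(p_top, scenario_probs)
```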
NASA Astrophysics Data System (ADS)
Zhang, Guang-Ming; Harvey, David M.
2012-03-01
Various signal processing techniques have been used for the enhancement of defect detection and defect characterisation. Cross-correlation, filtering, autoregressive analysis, deconvolution, neural network, wavelet transform and sparse signal representations have all been applied in attempts to analyse ultrasonic signals. In ultrasonic nondestructive evaluation (NDE) applications, a large number of materials have multilayered structures. NDE of multilayered structures leads to some specific problems, such as penetration, echo overlap, high attenuation and low signal-to-noise ratio. The signals recorded from a multilayered structure are a class of very special signals comprised of limited echoes. Such signals can be assumed to have a sparse representation in a proper signal dictionary. Recently, a number of digital signal processing techniques have been developed by exploiting the sparse constraint. This paper presents a review of research to date, showing the up-to-date developments of signal processing techniques made in ultrasonic NDE. A few typical ultrasonic signal processing techniques used for NDE of multilayered structures are elaborated. The practical applications and limitations of different signal processing methods in ultrasonic NDE of multilayered structures are analysed.
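The sparse-constraint idea for limited-echo traces can be sketched as follows: build a dictionary of time-shifted pulses and recover overlapping echoes with orthogonal matching pursuit. This uses scikit-learn's OMP as a generic stand-in for the reviewed methods; the pulse shape and parameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def gaussian_pulse(n, center, width=6.0, freq=0.25):
    t = np.arange(n) - center
    return np.exp(-(t / width) ** 2) * np.cos(2 * np.pi * freq * t)

n = 256
# Dictionary of time-shifted pulses; a trace with few echoes is sparse in it.
D = np.stack([gaussian_pulse(n, c) for c in range(n)], axis=1)
trace = 1.0 * gaussian_pulse(n, 60) - 0.6 * gaussian_pulse(n, 74)  # overlapping echoes
trace += 0.05 * np.random.default_rng(0).standard_normal(n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(D, trace)
echo_positions = np.nonzero(omp.coef_)[0]   # recovered echo arrival times
print(echo_positions, omp.coef_[echo_positions])
```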
Computational investigation of large-scale vortex interaction with flexible bodies
NASA Astrophysics Data System (ADS)
Connell, Benjamin; Yue, Dick K. P.
2003-11-01
The interaction of large-scale vortices with flexible bodies is examined with particular interest paid to the energy and momentum budgets of the system. Finite difference direct numerical simulation of the Navier-Stokes equations on a moving curvilinear grid is coupled with a finite difference structural solver of both a linear membrane under tension and a linear Euler-Bernoulli beam. The hydrodynamics and structural dynamics are solved simultaneously using an iterative procedure with the external structural forcing calculated from the hydrodynamics at the surface and the flow-field velocity boundary condition given by the structural motion. We focus on an investigation into the canonical problem of a vortex-dipole impinging on a flexible membrane. It is discovered that the structural properties of the membrane direct the interaction in terms of the flow evolution and the energy budget. Pressure gradients associated with resonant membrane response are shown to sustain the oscillatory motion of the vortex pair. Understanding how the key mechanisms in vortex-body interactions are guided by the structural properties of the body is a prerequisite to exploiting these mechanisms.
Gezinski, Lindsay B; Karandikar, Sharvari; Levitt, Alexis; Ghaffarian, Roxanne
2017-01-01
The purpose of this research study was to conduct a content analysis of commercial surrogacy websites to explore how surrogacy is marketed to intended parents. The researchers developed a template to code website data, and a total of 345 website pages were reviewed. Websites depicted surrogacy as a solution to a problem, privileged genetic parenthood, ignored the potential for exploitation, dismissed surrogates' capacity to bond with the fetuses they carry, emphasized that surrogacy arrangements are mutually beneficial, ignored structural inequalities, and depicted surrogates as conforming to strict gender roles. These framings introduce vulnerabilities to both intended parents and surrogate mothers.
Neutrons for biologists: a beginner's guide, or why you should consider using neutrons.
Lakey, Jeremy H
2009-10-06
From the structures of isolated protein complexes to the molecular dynamics of whole cells, neutron methods can achieve a resolution in complex systems that is inaccessible to other techniques. Biology is fortunate in that it is rich in water and hydrogen, and this allows us to exploit the differential sensitivity of neutrons to this element and its major isotope, deuterium. Furthermore, neutrons exhibit wave properties that allow us to use them in similar ways to light, X-rays and electrons. This review aims to explain the basics of biological neutron science to encourage its greater use in solving difficult problems in the life sciences.
Simple methods of exploiting the underlying structure of rule-based systems
NASA Technical Reports Server (NTRS)
Hendler, James
1986-01-01
Much recent work in the field of expert systems research has aimed at exploiting the underlying structure of the rule base for purposes of analysis. Techniques such as Petri nets and DAGs have been proposed as representational structures that allow complete analysis. Much has been made of proving isomorphisms between the rule bases and these mechanisms, and of examining the theoretical power of this analysis. In this paper we describe some early work on a new system with much simpler (and thus, one hopes, more easily achieved) aims and less formality. The technique being examined is a very simple one: OPS5 programs are analyzed in a purely syntactic way and an FSA description is generated. In this paper we describe the technique and some user interface tools which exploit this structure.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
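The standard active-subspace construction behind this kind of dimension reduction can be sketched in a few lines: eigendecompose the empirical average of gradient outer products and keep the dominant directions. The quadratic test function in the demo is an invented assumption.

```python
import numpy as np

def active_subspace(grad_samples, k):
    """grad_samples: (n_samples, dim) array of model gradients at inputs
    drawn from the parameter prior; returns the k dominant directions."""
    C = grad_samples.T @ grad_samples / len(grad_samples)  # E[grad grad^T]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order[:k]], eigvecs[:, order[:k]]

# Example: f(x) = (w.x)^2 has a one-dimensional active subspace along w.
rng = np.random.default_rng(1)
w = np.array([1.0, 2.0, 0.5])
X = rng.standard_normal((500, 3))
grads = (2 * (X @ w))[:, None] * w[None, :]      # gradient of (w.x)^2
vals, W = active_subspace(grads, 1)
print(np.round(W[:, 0], 3), np.round(w / np.linalg.norm(w), 3))  # aligned up to sign
```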
ERIC Educational Resources Information Center
Barnitz, Laura
2001-01-01
Discusses the international problem of commercial sexual exploitation of children (CSEC) and efforts to stop the practice and assist the victims. Considers initiatives to formulate a worldwide policy against CSEC, and anti-CSEC efforts in the United States, including law enforcement and education, and advocacy efforts and services for youth.…
Spreng, R Nathan; Cassidy, Benjamin N; Darboh, Bri S; DuPre, Elizabeth; Lockrow, Amber W; Setton, Roni; Turner, Gary R
2017-10-01
Age-related brain changes leading to altered socioemotional functioning may increase vulnerability to financial exploitation. If confirmed, this would suggest a novel mechanism leading to heightened financial exploitation risk in older adults. Development of predictive neural markers could facilitate increased vigilance and prevention. In this preliminary study, we sought to identify structural and functional brain differences associated with financial exploitation in older adults. Financially exploited older adults (n = 13, 7 female) and a matched cohort of older adults who had been exposed to, but avoided, a potentially exploitative situation (n = 13, 7 female) were evaluated. Using magnetic resonance imaging, we examined cortical thickness and resting state functional connectivity. Behavioral data were collected using standardized cognitive assessments and self-report measures of mood and social functioning. The exploited group showed cortical thinning in anterior insula and posterior superior temporal cortices, regions associated with processing affective and social information, respectively. Functional connectivity encompassing these regions, within default and salience networks, was reduced, while between-network connectivity was increased. Self-reported anger and hostility were higher for the exploited group. We observed financial exploitation associated with brain differences in regions involved in socioemotional functioning. These exploratory and preliminary findings suggest that alterations in brain regions implicated in socioemotional functioning may be a marker of financial exploitation risk. Large-scale, prospective studies are necessary to validate this neural mechanism, and develop predictive markers for use in clinical practice. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America.
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the problem's search space. The algorithm's biased mathematical modelling and dynamic structure hinder its application to discrete and graph-based problems, which in real life outnumber the continuous-domain problems representable by mathematical and numerical equations. This motivated the discrete form introduced here, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm. In this work, we mainly simulate a graph-based multi-objective road optimisation problem and discuss the prospects of applying DBFO to similar optimisation problems and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the parameters used in DBFO, which combine both exploration and exploitation, are illustrated from the point of view of the problems. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and analysis of covered paths. This makes the algorithm better at combination generation for graph-based problems and for NP-hard problems.
A decomposition approach to the design of a multiferroic memory bit
NASA Astrophysics Data System (ADS)
Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.
2017-06-01
The objective of this paper is to present a methodology for the design of a memory bit that minimizes the energy required to write data at the bit level. When a ferromagnetic nickel nano-dot is strained by means of a piezoelectric substrate, its magnetization vector rotates between two stable states defined as a 1 and 0 for digital memory. The memory bit geometry, actuation mechanism and voltage control law were used as design variables. The approach used was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully dynamic coupled finite element analyses required to converge to a near optimal design, thus decreasing the computational time for the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.
Non-Boolean computing with nanomagnets for computer vision applications
NASA Astrophysics Data System (ADS)
Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep
2016-02-01
The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
Bayes linear covariance matrix adjustment
NASA Astrophysics Data System (ADS)
Wilkinson, Darren J.
1995-12-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
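The vector form of the Bayes linear adjustment underlying the thesis is compact enough to state in code; the matrix-variate machinery (inner products on spaces of random matrices) is what the thesis adds and is not reproduced here. The numbers in the usage example are invented.

```python
import numpy as np

def bayes_linear_adjust(EX, ED, CXD, VD, d):
    """Bayes linear adjusted expectation of X given observed data d:
        E_d(X) = E(X) + Cov(X, D) Var(D)^{-1} (d - E(D)).
    (The adjusted variance subtracts Cov(X,D) Var(D)^{-1} Cov(D,X).)"""
    K = CXD @ np.linalg.inv(VD)               # adjustment operator
    return EX + K @ (d - ED)

# Toy usage with invented beliefs:
EX = np.array([0.0])
ED = np.zeros(2)
CXD = np.array([[0.8, 0.3]])
VD = np.array([[1.0, 0.2], [0.2, 1.0]])
print(bayes_linear_adjust(EX, ED, CXD, VD, d=np.array([1.0, -0.5])))
```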
Golightly, Andrew; Wilkinson, Darren J.
2011-01-01
Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
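A skeleton of the particle MCMC machinery helps fix ideas: a bootstrap particle filter returns a likelihood estimate that drives a Metropolis-Hastings chain over the parameters. The toy model below, a discrete-time immigration-death process with Poisson observations, is an invented stand-in for the Markov jump processes treated in the paper.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def pf_loglik(theta, y, n_part=200):
    """Bootstrap particle filter estimate of log p(y | theta) for a toy
    immigration-death process: births ~ Poisson(lam), each individual
    dies with probability mu per step, observations are Poisson(x_t)."""
    lam, mu = theta
    x = rng.poisson(5.0, n_part)
    ll = 0.0
    for yt in y:
        x = x + rng.poisson(lam, n_part) - rng.binomial(x, mu)  # propagate
        w = poisson.pmf(yt, np.maximum(x, 1e-9))                # weight
        ll += np.log(w.mean() + 1e-300)
        x = x[rng.choice(n_part, n_part, p=w / w.sum())]        # resample
    return ll

def pmmh(y, n_iter=200, step=0.05):
    """Particle-marginal Metropolis-Hastings over theta = (lam, mu),
    with a flat prior on the admissible box."""
    theta = np.array([1.0, 0.2])
    ll = pf_loglik(theta, y)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        if prop[0] > 0 and 0 < prop[1] < 1:
            ll_prop = pf_loglik(prop, y)
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```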
Motion and force control for multiple cooperative manipulators
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz, Kenneth
1989-01-01
The motion and force control of multiple robot arms manipulating a commonly held object is addressed. A general control paradigm that decouples the motion and force control problems is introduced. For motion control, there are three natural choices: (1) joint torques, (2) arm-tip force vectors, and (3) the acceleration of a generalized coordinate. Choice (1) allows a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system; (2) and (3) require the full model information but produce simpler problems. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, the allocation of the desired end-effector control force to the joint actuators can be optimized; otherwise the internal force can be controlled about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
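Option (2), commanding arm-tip force vectors, makes the torque-nonuniqueness issue easy to illustrate: the grasp-map pseudoinverse gives the minimum-norm allocation, and any null-space component changes only the internal (squeeze) force. The planar two-arm example below is an invented illustration, not the paper's formulation.

```python
import numpy as np

def allocate_arm_forces(G, F_des, f_internal=None):
    """Distribute a desired object wrench F_des over arm-tip forces.
    G: grasp map; the pseudoinverse term is the minimum-norm solution,
    and the null-space projector adds an internal-force set point."""
    G_pinv = np.linalg.pinv(G)
    f = G_pinv @ F_des                       # minimum-norm allocation
    if f_internal is not None:
        N = np.eye(G.shape[1]) - G_pinv @ G  # null-space projector
        f = f + N @ f_internal               # internal-force set point
    return f

# Two arms gripping an object at r1, r2 (planar wrench = [Fx, Fy, Mz]).
def grasp_map(points):
    cols = []
    for (x, y) in points:
        cols.append(np.array([[1.0, 0.0], [0.0, 1.0], [-y, x]]))
    return np.hstack(cols)

G = grasp_map([(-0.2, 0.0), (0.2, 0.0)])
print(allocate_arm_forces(G, np.array([0.0, 9.8, 0.0])))
```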
Saffran, Jenny R.; Kirkham, Natasha Z.
2017-01-01
Perception involves making sense of a dynamic, multimodal environment. In the absence of mechanisms capable of exploiting the statistical patterns in the natural world, infants would face an insurmountable computational problem. Infant statistical learning mechanisms facilitate the detection of structure. These abilities allow the infant to compute across elements in their environmental input, extracting patterns for further processing and subsequent learning. In this selective review, we summarize findings that show that statistical learning is both a broad and flexible mechanism (supporting learning from different modalities across many different content areas) and input specific (shifting computations depending on the type of input and goal of learning). We suggest that statistical learning not only provides a framework for studying language development and object knowledge in constrained laboratory settings, but also allows researchers to tackle real-world problems, such as multilingualism, the role of ever-changing learning environments, and differential developmental trajectories. PMID:28793812
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; Govind, Niranjan; Yang, Chao
2017-12-01
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson-type solvers by a factor of two in both solution time and storage.
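The structural fact the solvers exploit can be verified numerically in a few lines: if K = L Lᵀ is a Cholesky factorization, then MK is similar to the symmetric matrix LᵀML (this is the self-adjointness in the K-inner product), so its spectrum is real and symmetric eigensolvers apply. The matrices below are random SPD stand-ins, not TDDFT data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)  # SPD
B = rng.standard_normal((n, n)); K = B @ B.T + n * np.eye(n)  # SPD

# MK is self-adjoint in the K-inner product <x, y>_K = x^T K y.
# With K = L L^T, the similarity L^T (MK) L^{-T} = L^T M L is symmetric,
# so eig(MK) equals the spectrum of the symmetric matrix L^T M L.
L = np.linalg.cholesky(K)
w_sym = np.linalg.eigvalsh(L.T @ M @ L)
w_prod = np.sort(np.linalg.eigvals(M @ K).real)
print(np.allclose(w_sym, w_prod))   # True: same (real) spectrum
```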
Balancing the Budget through Social Exploitation: Why Hard Times Are Even Harder for Some.
Tropman, John; Nicklett, Emily
2012-01-01
In all societies needs and wants regularly exceed resources. Thus societies are always in deficit; demand always exceeds supply and "balancing the budget" is a constant social problem. To make matters somewhat worse, research suggests that need- and want-fulfillment tends to further stimulate the cycle of want-seeking rather than satiating desire. Societies use various resource-allocation mechanisms, including price, to cope with gaps between wants and resources. Social exploitation is a second mechanism, securing labor from population segments that can be coerced or convinced to perform necessary work for free or at below-market compensation. Using practical examples, this article develops a theoretical framework for understanding social exploitation. It then offers case examples of how different segments of the population emerge as exploited groups in the United States, due to changes in social policies. These exploitative processes have been exacerbated and accelerated by the economic downturn that began in 2007.
How to do research fairly in an unjust world.
Ballantyne, Angela J
2010-06-01
International research, sponsored by for-profit companies, is regularly criticised as unethical on the grounds that it exploits research subjects in developing countries. Many commentators agree that exploitation occurs when the benefits of cooperative activity are unfairly distributed between the parties. To determine whether international research is exploitative we therefore need an account of fair distribution. Procedural accounts of fair bargaining have been popular solutions to this problem, but I argue that they are insufficient to protect against exploitation. I argue instead that a maximin principle of fair distribution provides a more compelling normative account of fairness in relationships characterised by extreme vulnerability and inequality of bargaining potential between the parties. A global tax on international research would provide a mechanism for implementing the maximin account of fair benefits. This model has the capacity to ensure fair benefits and thereby prevent exploitation in international research.
Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval
ERIC Educational Resources Information Center
Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten
2008-01-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…
Co-Labeling for Multi-View Weakly Labeled Learning.
Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W
2016-06-01
It is often expensive and time consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi-view datasets clearly demonstrate that our proposed co-labeling approach achieves state-of-the-art performance for various multi-view weakly labeled learning problems including multi-view SSL, multi-view MIL and multi-view ROD.
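The simplest instance of the co-labeling idea, restricted to two views and the SSL setting, can be sketched as follows. The full method's aggregation of many pseudo-label vectors through multi-layer multiple kernel learning is omitted, and the classifier choice is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_label(X1, X2, y, labeled, n_rounds=5):
    """Two-view pseudo-labeling sketch: each view's classifier produces a
    pseudo-label vector on the unlabeled pool that the other view trains on.
    X1, X2: feature views; y: labels; labeled: boolean mask."""
    views = [X1, X2]
    clfs = [LogisticRegression(max_iter=1000).fit(v[labeled], y[labeled])
            for v in views]
    for _ in range(n_rounds):
        for i in (0, 1):
            pseudo = clfs[1 - i].predict(views[1 - i][~labeled])
            y_aug = y.copy()
            y_aug[~labeled] = pseudo        # pseudo-label vector from other view
            clfs[i] = LogisticRegression(max_iter=1000).fit(views[i], y_aug)
    return clfs
```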
Fast determination of structurally cohesive subgroups in large networks
Sinkovits, Robert S.; Moody, James; Oztan, B. Tolga; White, Douglas R.
2016-01-01
Structurally cohesive subgroups are a powerful and mathematically rigorous way to characterize network robustness. Their strength lies in the ability to detect strong connections among vertices that not only have no neighbors in common, but that may be distantly separated in the graph. Unfortunately, identifying cohesive subgroups is a computationally intensive problem, which has limited empirical assessments of cohesion to relatively small graphs of at most a few thousand vertices. We describe here an approach that exploits the properties of cliques, k-cores and vertex separators to iteratively reduce the complexity of the graph to the point where standard algorithms can be used to complete the analysis. As a proof of principle, we apply our method to the cohesion analysis of a 29,462-vertex biconnected component extracted from a 128,151-vertex co-authorship data set. PMID:28503215
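The reduction step can be sketched with networkx: any subgroup with vertex connectivity at least k must survive in the k-core, so restricting to the k-core and splitting into components shrinks the graph before the expensive exact connectivity analysis. This mirrors only the spirit of the method; the clique and vertex-separator steps are omitted.

```python
import networkx as nx

def candidates_for_k_cohesion(G, k):
    """Vertices that can belong to a subgroup with vertex connectivity >= k
    must lie in the k-core; analyse each remaining piece separately."""
    core = nx.k_core(G, k)
    pieces = [core.subgraph(c).copy() for c in nx.connected_components(core)]
    # Exact (expensive) check now runs on much smaller pieces:
    return [H for H in pieces
            if H.number_of_nodes() > k and nx.node_connectivity(H) >= k]

G = nx.karate_club_graph()
for H in candidates_for_k_cohesion(G, 3):
    print(H.number_of_nodes(), "vertices with connectivity >= 3")
```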
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
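The equivalence-partitioning step is easy to sketch: link any two detections that a single object could have produced (a spatial-temporal gate), and each connected component of the resulting graph becomes an independent association subproblem. Gating by a maximum speed is an assumed stand-in for the paper's disjoint-trajectory constraints.

```python
import networkx as nx

def split_mda_problem(detections, max_speed=50.0):
    """detections: list of (frame, x, y). Two detections are linked when one
    object could produce both; each connected component then forms an
    independent association subproblem (the CCM idea, simplified)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(detections)))
    for i, (t1, x1, y1) in enumerate(detections):
        for j, (t2, x2, y2) in enumerate(detections):
            if j <= i or t1 == t2:
                continue
            dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            if dist <= max_speed * abs(t1 - t2):   # reachable -> same subproblem
                G.add_edge(i, j)
    return [sorted(c) for c in nx.connected_components(G)]
```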
Monk, Christopher T; Barbier, Matthieu; Romanczuk, Pawel; Watson, James R; Alós, Josep; Nakayama, Shinnosuke; Rubenstein, Daniel I; Levin, Simon A; Arlinghaus, Robert
2018-06-01
Understanding how humans and other animals behave in response to changes in their environments is vital for predicting population dynamics and the trajectory of coupled social-ecological systems. Here, we present a novel framework for identifying emergent social behaviours in foragers (including humans engaged in fishing or hunting) in predator-prey contexts based on the exploration difficulty and exploitation potential of a renewable natural resource. A qualitative framework is introduced that predicts when foragers should behave territorially, search collectively, act independently or switch among these states. To validate it, we derived quantitative predictions from two models of different structure: a generic mathematical model, and a lattice-based evolutionary model emphasising exploitation and exclusion costs. These models independently identified that the exploration difficulty and exploitation potential of the natural resource control the social behaviour of resource exploiters. Our theoretical predictions were finally compared to a diverse set of empirical cases focusing on fisheries and aquatic organisms across a range of taxa, substantiating the framework's predictions. Understanding social behaviour for given social-ecological characteristics has important implications, particularly for the design of governance structures and regulations to move exploited systems, such as fisheries, towards sustainability. Our framework provides concrete steps in this direction. © 2018 John Wiley & Sons Ltd/CNRS.
Structural issues affecting mixed methods studies in health research: a qualitative study.
O'Cathain, Alicia; Nicholl, Jon; Murphy, Elizabeth
2009-12-09
Health researchers undertake studies which combine qualitative and quantitative methods. Little attention has been paid to the structural issues affecting this mixed methods approach. We explored the facilitators and barriers to undertaking mixed methods studies in health research. Face-to-face semi-structured interviews with 20 researchers experienced in mixed methods research in health in the United Kingdom. Structural facilitators for undertaking mixed methods studies included a perception that funding bodies promoted this approach, and the multidisciplinary constituency of some university departments. Structural barriers to exploiting the potential of these studies included a lack of education and training in mixed methods research, and a lack of templates for reporting mixed methods articles in peer-reviewed journals. The 'hierarchy of evidence' relating to effectiveness studies in health care research, with the randomised controlled trial as the gold standard, appeared to pervade the health research infrastructure. Thus integration of data and findings from qualitative and quantitative components of mixed methods studies, and dissemination of integrated outputs, tended to occur through serendipity and effort, further highlighting the presence of structural constraints. Researchers are agents who may also support current structures - journal reviewers and editors, and directors of postgraduate training courses - and thus have the ability to improve the structural support for exploiting the potential of mixed methods research. The environment for health research in the UK appears to be conducive to mixed methods research but not to exploiting the potential of this approach. Structural change, as well as change in researcher behaviour, will be necessary if researchers are to fully exploit the potential of using mixed methods research.
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Carbon Nanotubes: On the Origin of Helicity
NASA Astrophysics Data System (ADS)
Harutyunyan, Avetik
2015-03-01
The mechanism of helicity formation in carbon nanotubes remains elusive, which hinders their applications. Current explanations rely mainly on the planar interrelationship between the structure of the nanotube and the corresponding facet of the catalyst in 2D geometry, which could amend the structure of the grown carbon layer, specifically through epitaxial interaction. Yet, the structure of a carbon nanotube and the circumference of its rims involve more than one facet, i.e., it is a 3D problem. Addressing this problem, we find that nanotube nucleation is initiated by cap formation via the evolution of a graphene embryo across the adjacent facets of the catalyst particle. As a result, the graphene embryos incorporate various polygons into their hexagonal network to accommodate the curved 3D geometry, which initiates cap formation followed by elongation of the circumferential rims. Based on these results, on the census of nanotube caps, and on the fact that a given cap fits only one nanotube wall, we consider the carbon cap responsible for the helicity of the carbon nanotube. This understanding could provide new avenues towards engineering particles to explicitly accommodate certain helicities via exploitation of the angular distribution of the catalyst's adjacent facets. Our recent progress in the production of carbon nanotubes and nanotube-reinforced composites, and their potential applications, will also be presented.
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
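The solver's core update, weighted singular-value thresholding, fits in a few lines. The weight choice in the usage example is an invented illustration rather than the paper's graph-derived weights.

```python
import numpy as np

def weighted_svt(X, weights, tau):
    """Proximal step for a weighted nuclear norm: shrink the i-th singular
    value by tau * weights[i]. Core update in weighted nuclear norm models."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau * np.asarray(weights), 0.0)
    return (U * s_shrunk) @ Vt

# Usage: denoise a noisy low-rank patch matrix (illustrative weights:
# weaker singular values are shrunk more aggressively).
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
noisy = L + 0.3 * rng.standard_normal((64, 64))
w = 1.0 / (np.linalg.svd(noisy, compute_uv=False) + 1e-6)
print(np.linalg.norm(weighted_svt(noisy, w, tau=5.0) - L),
      np.linalg.norm(noisy - L))
```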
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degenhardt, R.; PFH, Private University of Applied Sciences Goettingen, Composite Engineering Campus Stade; Araujo, F. C. de
European aircraft industry demands for reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim, however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances from the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer stiffened panels main results of the finished EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. Formore » unstiffened cylindrical composite shells a proposal for a new design method is presented.« less
Solving the Traveling Salesman's Problem Using the African Buffalo Optimization.
Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam
2016-01-01
This paper proposes the African Buffalo Optimization (ABO) which is a new metaheuristic algorithm that is derived from careful observation of the African buffalos, a species of wild cows, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search for food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 benchmark symmetric Traveling Salesman's Problem and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of its previous personal exploits as well as tapping from the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained by using other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive.
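A loose sketch conveys the exploration/exploitation balance described above; it is not the authors' update equations, and the helper names are assumptions. Each buffalo perturbs either its personal-best or the herd-best tour with a random 2-opt move, keeping improvements (memory exploitation) while the random moves supply exploration.

```python
import random

def tour_len(tour, D):
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour):
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def abo_tsp(D, n_buffalo=20, n_iter=500):
    """ABO-flavoured sketch: personal bests and the herd best act as the
    buffalos' memory; random 2-opt perturbations act as exploration."""
    n = len(D)
    herd = [random.sample(range(n), n) for _ in range(n_buffalo)]
    pbest = list(herd)
    gbest = min(herd, key=lambda t: tour_len(t, D))
    for _ in range(n_iter):
        for b in range(n_buffalo):
            source = random.choice([pbest[b], gbest])   # exploit memory
            cand = two_opt_move(source)                 # explore nearby
            if tour_len(cand, D) < tour_len(pbest[b], D):
                pbest[b] = cand
                if tour_len(cand, D) < tour_len(gbest, D):
                    gbest = cand
            herd[b] = cand
    return gbest, tour_len(gbest, D)
```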
Modularity of Protein Folds as a Tool for Template-Free Modeling of Structures.
Vallat, Brinda; Madrid-Aliste, Carlos; Fiser, Andras
2015-08-01
Predicting the three-dimensional structure of proteins from their amino acid sequences remains a challenging problem in molecular biology. While the current structural coverage of proteins is almost exclusively provided by template-based techniques, the modeling of the rest of the protein sequences increasingly require template-free methods. However, template-free modeling methods are much less reliable and are usually applicable for smaller proteins, leaving much space for improvement. We present here a novel computational method that uses a library of supersecondary structure fragments, known as Smotifs, to model protein structures. The library of Smotifs has saturated over time, providing a theoretical foundation for efficient modeling. The method relies on weak sequence signals from remotely related protein structures to create a library of Smotif fragments specific to the target protein sequence. This Smotif library is exploited in a fragment assembly protocol to sample decoys, which are assessed by a composite scoring function. Since the Smotif fragments are larger in size compared to the ones used in other fragment-based methods, the proposed modeling algorithm, SmotifTF, can employ an exhaustive sampling during decoy assembly. SmotifTF successfully predicts the overall fold of the target proteins in about 50% of the test cases and performs competitively when compared to other state of the art prediction methods, especially when sequence signal to remote homologs is diminishing. Smotif-based modeling is complementary to current prediction methods and provides a promising direction in addressing the structure prediction problem, especially when targeting larger proteins for modeling.
Compression of contour data through exploiting curve-to-curve dependence
NASA Technical Reports Server (NTRS)
Yalabik, N.; Cooper, D. B.
1975-01-01
An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. One existing approach to along-curve compression, through the use of cubic spline approximation, is taken and extended by investigating the additional compressibility achievable by exploiting curve-to-curve structure. One of the models under investigation is reported on.
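A minimal sketch of the idea, under assumptions (contours with enough points for a cubic fit, small inter-curve motion): fit a cubic spline to each contour, resample at fixed parameter values, and store only quantized differences from the previous curve.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample(curve, n=32, s=1.0):
    """Cubic-spline fit of a 2-D contour, resampled at n parameter values."""
    tck, _ = splprep([curve[:, 0], curve[:, 1]], s=s)
    u = np.linspace(0.0, 1.0, n)
    return np.stack(splev(u, tck), axis=1)

def delta_encode(curves, n=32):
    """Along-curve compression (spline resampling) plus curve-to-curve
    exploitation: store the first resampled curve, then only quantized
    residuals from the previous curve (assumes |residual| < 128)."""
    base = resample(curves[0], n)
    deltas, prev = [], base
    for c in curves[1:]:
        cur = resample(c, n)
        deltas.append(np.round(cur - prev).astype(np.int8))  # small residuals
        prev = cur
    return base, deltas
```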
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This provides an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
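The surrogate pipeline can be sketched end-to-end with standard tools: an LHS design over pumping rates, an expensive simulator evaluated at the design points, a Gaussian-process regressor standing in for regression kriging, and a cheap optimization over the surrogate. The simulator, bounds, and weights below are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(q):      # stand-in for the groundwater flow model
    return 0.002 * q.sum() + 0.04 * np.sqrt((q ** 2).sum())  # "drawdown"

bounds = np.array([[100.0, 1000.0]] * 4)            # 4 exploitation wells
sampler = qmc.LatinHypercube(d=4, seed=0)
X = qmc.scale(sampler.random(40), bounds[:, 0], bounds[:, 1])
y = np.array([expensive_simulator(x) for x in X])

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=200.0),
                              normalize_y=True).fit(X, y)

# Optimize a weighted sum of surrogate drawdown and pumping cost.
def objective(q):
    return gp.predict(q[None, :])[0] + 1e-4 * q.sum()

res = minimize(objective, X[0], bounds=bounds, method="L-BFGS-B")
print(res.x, res.fun)
```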
Designing Efficient Self-Diagnosis Activities in the Physics Classroom
NASA Astrophysics Data System (ADS)
Safadi, Rafi'
2017-12-01
Self-diagnosis (SD) activities require students to self-diagnose their solutions to problems that they solved on their own. This involves identifying where they went wrong and then explaining the nature of their errors—why they went wrong—aided by some form of support. Worked examples (WEs) are often used to support students in SD activities. A WE is a step-by-step demonstration of how to solve a problem. One unresolved issue is why students fail to exploit WEs in SD exercises. Yerushalmi et al., for instance, provided students with written WEs and asked them to self-diagnose their solutions with respect to these WEs. These authors found no correlation between students' SD performance and their subsequent problem-solving performance on transfer problems, suggesting that students had only superficially exploited the written WEs. The aim of this article is to describe a new SD activity that was developed to prompt students to effectively use written WEs when self-diagnosing, and to examine its effectiveness in advancing students' learning in physics.
Anthropogenic effects on forest ecosystems at various spatio-temporal scales.
Bredemeier, Michael
2002-03-27
The focus in this review of long-term effects on forest ecosystems is on human impact. As a classification of this differentiated and complex matter, three domains of long-term effects with different scales in space and time are distinguished: (1) exploitation and conversion history of forests in areas of extended human settlement; (2) long-range air pollution and acid deposition in industrialized regions; and (3) current global loss of forests and soil degradation. There is an evident link between the first and the third point in the list. Cultivation of primary forestland, with its tremendous effects on land cover, took place in Europe many centuries ago and continued for centuries. Deforestation today is a phenomenon predominantly observed in the developing countries, yet it threatens biotic and soil resources on a global scale. Acidification of forest soils caused by long-range air pollution from anthropogenic emission sources is a regional to continental problem in industrialized parts of the world. As a result of emission reduction legislation, atmospheric acid deposition is currently on the retreat in the richer industrialized regions (e.g., Europe, U.S., Japan); however, because many other regions of the world are at present rapidly developing their polluting industries (e.g., China and India), "acid rain" will most probably remain a serious ecological problem on regional scales. It is believed to have caused considerable destabilization of forest ecosystems, adding to the strong structural and biogeochemical impacts resulting from exploitation history. Deforestation and soil degradation cause the most pressing ecological problems for the time being, at least on the global scale. In many of those regions where loss of forests and soils is now high, it may be extremely difficult or impossible to restore forest ecosystems and soil productivity. Moreover, the driving forces, which are predominantly of a demographic and socioeconomic nature, do not yet seem to be lessening in strength. It can only be hoped that a wise policy of international cooperation and shared aims can cope with this problem in the future.
Disease-mongering through clinical trials.
González-Moreno, María; Saborido, Cristian; Teira, David
2015-06-01
Our goal in this paper is to articulate a precise concept of at least a certain kind of disease-mongering, showing how pharmaceutical marketing can commercially exploit certain diseases when their best definition is given through the success of a treatment in a clinical trial. We distinguish two types of disease-mongering according to the way they exploit the definition of the trial population for marketing purposes. We argue that behind these two forms of disease-mongering there are two well-known problems in the statistical methodology of clinical trials (the reference class problem and the distinction between statistical and clinical significance). Overcoming them is far from simple. Copyright © 2015 Elsevier Ltd. All rights reserved.
Inverse problems in complex material design: Applications to non-crystalline solids
NASA Astrophysics Data System (ADS)
Biswas, Parthapratim; Drabold, David; Elliott, Stephen
The design of complex amorphous materials is one of the fundamental problems in disordered condensed-matter science. While the impressive development of ab-initio simulation methods during the past several decades has brought tremendous success in understanding materials properties from micro- to mesoscopic length scales, a major drawback is that they fail to incorporate existing knowledge of the materials in simulation methodologies. Since an essential feature of materials design is the synergy between experiment and theory, a properly developed approach to designing materials should be able to exploit all available knowledge of the materials from measured experimental data. In this talk, we will address the design of complex disordered materials as an inverse problem involving experimental data and available empirical information. We show that the problem can be posed as a multi-objective non-convex optimization program, which can be addressed using a number of recently developed bio-inspired global optimization techniques. In particular, we will discuss how a population-based stochastic search procedure can be used to determine the structure of non-crystalline solids (e.g. a-SiH, a-SiO2, amorphous graphene, and Fe and Ni clusters). The work is partially supported by NSF under Grant Nos. DMR 1507166 and 1507670.
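In the spirit of the population-based search described above, here is a minimal sketch: a weighted-sum objective combining the mismatch to an experimental histogram (standing in for, e.g., a pair-correlation function) with an empirical penalty, minimized by a simple evolutionary loop. Everything here (the model, penalty, and parameters) is an invented illustration, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_histogram(config):
    """Stand-in for computing, e.g., a pair-correlation function from a
    flattened structural model (here: 1-D positions in [0, 1])."""
    d = np.abs(config[:, None] - config[None, :])
    return np.histogram(d[d > 0], bins=20, range=(0.0, 1.0), density=True)[0]

def cost(config, g_exp, penalty):
    """Weighted sum of experiment mismatch and empirical-information penalty."""
    return np.sum((model_histogram(config) - g_exp) ** 2) + 0.1 * penalty(config)

def evolve(g_exp, penalty, n_pop=30, n_gen=200, sigma=0.02):
    """Minimal (mu+lambda)-style evolutionary search over configurations."""
    pop = [rng.random(50) for _ in range(n_pop)]
    for _ in range(n_gen):
        scored = sorted(pop, key=lambda c: cost(c, g_exp, penalty))
        parents = scored[:n_pop // 2]
        children = [np.clip(p + sigma * rng.standard_normal(p.shape), 0, 1)
                    for p in parents]
        pop = parents + children
    return min(pop, key=lambda c: cost(c, g_exp, penalty))

# Usage with an invented target and a smoothness penalty:
target = model_histogram(rng.random(50))
best = evolve(target, penalty=lambda c: np.sum(np.diff(np.sort(c)) ** 2))
```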
Oram, Siân; Stöckl, Heidi; Busza, Joanna; Howard, Louise M; Zimmerman, Cathy
2012-01-01
There is very limited evidence on the health consequences of human trafficking. This systematic review reports on studies investigating the prevalence and risk of violence while trafficked and the prevalence and risk of physical, mental, and sexual health problems, including HIV, among trafficked people. We conducted a systematic review comprising a search of Medline, PubMed, PsycINFO, EMBASE, and Web of Science, hand searches of reference lists of included articles, citation tracking, and expert recommendations. We included peer-reviewed papers reporting on the prevalence or risk of violence while trafficked and/or on the prevalence or risk of any measure of physical, mental, or sexual health among trafficked people. Two reviewers independently screened papers for eligibility and appraised the quality of included studies. The search identified 19 eligible studies, all of which reported on trafficked women and girls only and focused primarily on trafficking for sexual exploitation. The review suggests a high prevalence of violence and of mental distress among women and girls trafficked for sexual exploitation. The random effects pooled prevalence of diagnosed HIV was 31.9% (95% CI 21.3%-42.4%) in studies of women accessing post-trafficking support in India and Nepal, but the estimate was associated with high heterogeneity (I² = 83.7%). Infection prevalence may be related as much to prevalence rates in women's areas of origin or exploitation as to the characteristics of their experience. Findings are limited by the methodological weaknesses of primary studies and their poor comparability and generalisability. Although limited, existing evidence suggests that trafficking for sexual exploitation is associated with violence and a range of serious health problems. Further research is needed on the health of trafficked men, individuals trafficked for other forms of exploitation, and effective health intervention approaches.
Kalman Filter Tracking on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-12-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, and the trend toward ever larger numbers of lightweight cores is set to continue. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, including Cellular Automata and a return to the Hough Transform. The most common track finding techniques in use today are, however, those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are exactly those being used today for the design of the tracking system for HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with the Kalman Filter can achieve large speedups on both Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
A semi-automatic method for extracting thin line structures in images as rooted tree network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brazzini, Jacopo; Dillard, Scott; Soille, Pierre
2010-01-01
This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts and consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Subsequently, the geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
Structural issues affecting mixed methods studies in health research: a qualitative study
2009-01-01
Background Health researchers undertake studies which combine qualitative and quantitative methods. Little attention has been paid to the structural issues affecting this mixed methods approach. We explored the facilitators and barriers to undertaking mixed methods studies in health research. Methods Face-to-face semi-structured interviews with 20 researchers experienced in mixed methods research in health in the United Kingdom. Results Structural facilitators for undertaking mixed methods studies included a perception that funding bodies promoted this approach, and the multidisciplinary constituency of some university departments. Structural barriers to exploiting the potential of these studies included a lack of education and training in mixed methods research, and a lack of templates for reporting mixed methods articles in peer-reviewed journals. The 'hierarchy of evidence' relating to effectiveness studies in health care research, with the randomised controlled trial as the gold standard, appeared to pervade the health research infrastructure. Thus integration of data and findings from qualitative and quantitative components of mixed methods studies, and dissemination of integrated outputs, tended to occur through serendipity and effort, further highlighting the presence of structural constraints. Researchers are agents who may also support current structures - journal reviewers and editors, and directors of postgraduate training courses - and thus have the ability to improve the structural support for exploiting the potential of mixed methods research. Conclusion The environment for health research in the UK appears to be conducive to mixed methods research but not to exploiting the potential of this approach. Structural change, as well as change in researcher behaviour, will be necessary if researchers are to fully exploit the potential of using mixed methods research. PMID:20003210
The Adolescent Runaway: A National Problem.
ERIC Educational Resources Information Center
Ritter, Bruce
1979-01-01
The author discusses the problems of teenage runaways: abuse which forces many to leave home, violence and sexual exploitation, lack of help from the child welfare bureaucracy. He illustrates with descriptions of several youngsters at his Covenant House crisis center, Under Twenty-One, in New York City. (SJL)
Numerical solution of a conspicuous consumption model with constant control delay
Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea
2011-01-01
We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
Luo, Bin; Liu, Shaomin; Zhi, Linjie
2012-03-12
A 'gold rush' has been triggered all over the world for exploiting the possible applications of graphene-based nanomaterials. For this purpose, two important problems have to be solved; one is the preparation of graphene-based nanomaterials with well-defined structures, and the other is the controllable fabrication of these materials into functional devices. This review gives a brief overview of the recent research concerning chemical and thermal approaches toward the production of well-defined graphene-based nanomaterials and their applications in energy-related areas, including solar cells, lithium ion secondary batteries, supercapacitors, and catalysis.
An Efficient Means of Adaptive Refinement Within Systems of Overset Grids
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.
A vectorized Lanczos eigensolver for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1990-01-01
The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
English, Abigail
2011-08-01
Sexual exploitation and trafficking of the young and vulnerable has devastating consequences for their physical and emotional development, health, and well-being. The horrific treatment they suffer bears the hallmarks of evil made manifest. Governments have enacted laws pursuant to international treaties, conventions, and protocols. Nonprofit and nongovernmental organizations (NGOs) are working to prevent young people from being exploited and trafficked, to identify victims, and to provide services to survivors. Progress in addressing the problem is haltingly slow in relation to its magnitude. The prevalence and persistence of this phenomenon is an ethical, legal, and human rights disgrace.
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)
2000-01-01
A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a system optimization task into several subtask optimizations, which may be executed concurrently, and a system optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited for exploiting the concurrent processing capabilities in a multiprocessor machine. Several steps, including local sensitivity analysis, local optimization, and response surface construction and updates, are all ideally suited for concurrent processing. Needless to say, algorithms that can effectively exploit the concurrent processing capabilities of the compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
Traction patterns of tumor cells.
Ambrosi, D; Duperray, A; Peschetola, V; Verdier, C
2009-01-01
The traction exerted by a cell on a planar deformable substrate can be indirectly obtained on the basis of the displacement field of the underlying layer. The usual methodology used to address this inverse problem is based on the exploitation of the Green tensor of the linear elasticity problem in a half space (Boussinesq problem), coupled with a minimization algorithm under force penalization. A possible alternative strategy is to exploit an adjoint equation, obtained on the basis of a suitable minimization requirement. The resulting system of coupled elliptic partial differential equations is applied here to determine the force field per unit surface generated by T24 tumor cells on a polyacrylamide substrate. The shear stress obtained by numerical integration provides quantitative insight into the traction field and is a promising tool to investigate the spatial pattern of force per unit surface generated in cell motion, particularly in the case of such cancer cells.
Repeated causal decision making.
Hagmayer, York; Meder, Björn
2013-01-01
Many of our decisions refer to actions that have a causal impact on the external environment. Such actions may not only allow for the mere learning of expected values or utilities but also for acquiring knowledge about the causal structure of our world. We used a repeated decision-making paradigm to examine what kind of knowledge people acquire in such situations and how they use their knowledge to adapt to changes in the decision context. Our studies show that decision makers' behavior is strongly contingent on their causal beliefs and that people exploit their causal knowledge to assess the consequences of changes in the decision problem. A high consistency between hypotheses about causal structure, causally expected values, and actual choices was observed. The experiments show that (a) existing causal hypotheses guide the interpretation of decision feedback, (b) consequences of decisions are used to revise existing causal beliefs, and (c) decision makers use the experienced feedback to induce a causal model of the choice situation even when they have no initial causal hypotheses, which (d) enables them to adapt their choices to changes of the decision problem.
Biomimetic surface structuring using cylindrical vector femtosecond laser beams
NASA Astrophysics Data System (ADS)
Skoulas, Evangelos; Manousaki, Alexandra; Fotakis, Costas; Stratakis, Emmanuel
2017-03-01
We report on a new, single-step and scalable method to fabricate highly ordered, multi-directional and complex surface structures that mimic the unique morphological features of certain species found in nature. Biomimetic surface structuring was realized by exploiting the unique and versatile angular profile and the electric field symmetry of cylindrical vector (CV) femtosecond (fs) laser beams. It is shown that highly controllable, periodic structures exhibiting sizes at nano-, micro- and dual micro/nano scales can be directly written on Ni upon line and large area scanning with radial and azimuthal polarization beams. Depending on the irradiation conditions, new complex multi-directional nanostructures, inspired by the shark's skin morphology, as well as superhydrophobic dual-scale structures mimicking the lotus leaf's water-repellent properties, can be attained. It is concluded that the versatility and variety of the structures formed are by far superior to those obtained via laser processing with linearly polarized beams. More importantly, by exploiting the capabilities offered by fs CV fields, the present technique can be further extended to fabricate even more complex and unconventional structures. We believe that our approach provides a new concept in laser materials processing, which can be further exploited for expanding the breadth and novelty of applications.
NASA Astrophysics Data System (ADS)
Li, Jiang; Green, Alexander A.; Yan, Hao; Fan, Chunhai
2017-11-01
Nucleic acids have attracted widespread attention due to the simplicity with which they can be designed to form discrete structures and programmed to perform specific functions at the nanoscale. The advantages of DNA/RNA nanotechnology offer numerous opportunities for in-cell and in-vivo applications, and the technology holds great promise to advance the growing field of synthetic biology. Many elegant examples have revealed the potential in integrating nucleic acid nanostructures in cells and in vivo where they can perform important physiological functions. In this Review, we summarize the current abilities of DNA/RNA nanotechnology to realize applications in live cells and then discuss the key problems that must be solved to fully exploit the useful properties of nanostructures. Finally, we provide viewpoints on how to integrate the tools provided by DNA/RNA nanotechnology and related new technologies to construct nucleic acid nanostructure-based molecular circuitry for synthetic biology.
Global/local stress analysis of composite panels
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Knight, Norman F., Jr.
1989-01-01
A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
Recent advances in racemic protein crystallography.
Yan, Bingjia; Ye, Linzhi; Xu, Weiliang; Liu, Lei
2017-09-15
Solution of the three-dimensional structures of proteins is a critical step in deciphering the molecular mechanisms of their bioactivities. Among the many approaches for obtaining protein crystals, racemic protein crystallography has been developed as a unique method to solve the structures of an increasing number of proteins. Exploiting unnatural protein enantiomers in crystallization and resolution, racemic protein crystallography manifests two major advantages: 1) it increases the success rate of protein crystallization, and 2) it obviates the phase problem in X-ray diffraction. The requirement of unnatural protein enantiomers in racemic protein crystallography necessitates chemical protein synthesis, which is hitherto accomplished through solid phase peptide synthesis and chemical ligation reactions. This review highlights the fundamental ideas of racemic protein crystallography and surveys the harvests in the field of racemic protein crystallography over the last five years, from early 2012 to late 2016.
Mixed-Timescale Per-Group Hybrid Precoding for Multiuser Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Teng, Yinglei; Wei, Min; Liu, An; Lau, Vincent; Zhang, Yong
2018-05-01
Considering the expensive radio frequency (RF) chains, huge training overhead, and feedback burden in massive MIMO, in this letter we propose a mixed-timescale per-group hybrid precoding (MPHP) scheme under an adaptive partially-connected RF precoding structure (PRPS), in which the RF precoder is implemented using an adaptive connection network (ACN) and M analog phase shifters (APSs), M being the number of antennas at the base station (BS). Exploiting the mixed-timescale channel state information (CSI) structure, the joint design of the ACN and APSs is formulated as a statistical signal-to-leakage-and-noise ratio (SSLNR) maximization problem, and a heuristic group RF precoding (GRFP) algorithm is proposed to provide a near-optimal solution. Simulation results show that the proposed design achieves better energy efficiency (EE) and lower hardware cost, CSI signaling overhead, and computational complexity than conventional hybrid precoding (HP) schemes.
Chowdhury, S F; Villamor, V B; Guerrero, R H; Leal, I; Brun, R; Croft, S L; Goodman, J M; Maes, L; Ruiz-Perez, L M; Pacanowska, D G; Gilbert, I H
1999-10-21
This paper concerns the design, synthesis, and evaluation of inhibitors of leishmanial and trypanosomal dihydrofolate reductase. Initially, a study was made of the structures of the leishmanial and human enzyme active sites to see if there were significant differences which could be exploited for selective drug design. Then a series of compounds was synthesized based on 5-benzyl-2,4-diaminopyrimidines. These compounds were assayed against the protozoan and human enzymes and showed selectivity for the protozoan enzymes. The structural data were then used to rationalize the enzyme assay data. Compounds were also tested against the clinically relevant forms of the intact parasite. Activity was seen against the trypanosomes for a number of compounds. The compounds were in general less active against Leishmania. This latter result may be due to uptake problems. Two of the compounds also showed some in vivo activity in a model of African trypanosomiasis.
Wavelet Analysis for RADARSAT Exploitation: Demonstration of Algorithms for Maritime Surveillance
2007-02-01
In this study, we demonstrate wavelet analysis for exploitation of RADARSAT ocean imagery, including wind direction estimation, oceanic and atmospheric ...of image striations that can arise as a texture pattern caused by turbulent coherent structures in the marine atmospheric boundary layer. The image...associated change in the pattern texture (i.e., the nature of the turbulent atmospheric structures) across the front. Due to the large spatial scale of
NASA Astrophysics Data System (ADS)
Degenhardt, Richard
2014-06-01
The space industry demands reduced development and operating costs. Structural weight reduction through exploitation of structural reserves in composite space and aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis. Currently, the potential of composite lightweight structures, which are prone to buckling, is not fully exploited, as appropriate guidelines in the field of space applications do not exist. This paper deals with state-of-the-art advances and challenges related to the coupled stability analysis of composite structures, which show very complex stability behaviour. Improved design guidelines for composite structures are still under development. This paper gives a short state-of-the-art overview and presents a proposal for a future design guideline.
Main principles of developing exploitation models of semiconductor devices
NASA Astrophysics Data System (ADS)
Gradoboev, A. V.; Simonova, A. V.
2018-05-01
The paper presents the primary tasks whose solutions allow developing exploitation models of semiconductor devices that take into account the complex and combined influence of ionizing irradiation and operation factors. The structure of the exploitation model of a semiconductor device is presented, based on radiation and reliability models. Furthermore, it is shown that the exploitation model should take into account the complex and combined influence of various types of ionizing irradiation and operation factors. An algorithm for developing exploitation models of semiconductor devices is proposed. The possibility of creating radiation models of a Schottky barrier diode, a Schottky field-effect transistor, and a Gunn diode is shown based on the available experimental data. A basic exploitation model of IR-LEDs based upon double AlGaAs heterostructures is presented. The practical application of such exploitation models will allow electronic products to be delivered with guaranteed operational properties.
Sexual Exploitation of Children and Youth. Human Resources Series.
ERIC Educational Resources Information Center
Friend, Shelley A.
1983-01-01
This issue brief explores the problem of child pornography and teenage prostitution and examines some of the strategies federal, state, and local governments employ to address these social problems. After a brief review of Congressional actions and Supreme Court decisions, state statutes affecting pornography and prostitution are reviewed, and…
Application of remote sensing to solution of ecological problems
NASA Technical Reports Server (NTRS)
Adelman, A.
1972-01-01
The application of remote sensing techniques to solving ecological problems is discussed. The three phases of environmental ecological management are examined. The differences between discovery and exploitation of natural resources and their ecological management are described. The specific application of remote sensing to water management is developed.
Balancing the Budget through Social Exploitation: Why Hard Times Are Even Harder for Some
Tropman, John; Nicklett, Emily
2013-01-01
In all societies needs and wants regularly exceed resources. Thus societies are always in deficit; demand always exceeds supply and “balancing the budget” is a constant social problem. To make matters somewhat worse, research suggests that need- and want-fulfillment tends to further stimulate the cycle of want-seeking rather than satiating desire. Societies use various resource-allocation mechanisms, including price, to cope with gaps between wants and resources. Social exploitation is a second mechanism, securing labor from population segments that can be coerced or convinced to perform necessary work for free or at below-market compensation. Using practical examples, this article develops a theoretical framework for understanding social exploitation. It then offers case examples of how different segments of the population emerge as exploited groups in the United States, due to changes in social policies. These exploitative processes have been exacerbated and accelerated by the economic downturn that began in 2007. PMID:23936753
Exploiting risk-reward structures in decision making under uncertainty.
Leuker, Christina; Pachur, Thorsten; Hertwig, Ralph; Pleskac, Timothy J
2018-06-01
People often have to make decisions under uncertainty-that is, in situations where the probabilities of obtaining a payoff are unknown or at least difficult to ascertain. One solution to this problem is to infer the probability from the magnitude of the potential payoff and thus exploit the inverse relationship between payoffs and probabilities that occurs in many domains in the environment. Here, we investigated how the mind may implement such a solution: (1) Do people learn about risk-reward relationships from the environment-and if so, how? (2) How do learned risk-reward relationships impact preferences in decision-making under uncertainty? Across three experiments (N = 352), we found that participants can learn risk-reward relationships from being exposed to choice environments with a negative, positive, or uncorrelated risk-reward relationship. They were able to learn the associations both from gambles with explicitly stated payoffs and probabilities (Experiments 1 & 2) and from gambles about epistemic events (Experiment 3). In subsequent decisions under uncertainty, participants often exploited the learned association by inferring probabilities from the magnitudes of the payoffs. This inference systematically influenced their preferences under uncertainty: Participants who had been exposed to a negative risk-reward relationship tended to prefer the uncertain option over a smaller sure option for low payoffs, but not for high payoffs. This pattern reversed in the positive condition and disappeared in the uncorrelated condition. This adaptive change in preferences is consistent with the use of the risk-reward heuristic.
The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems
NASA Astrophysics Data System (ADS)
Granmo, Ole-Christoffer
The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
1985-06-01
released when matter and antimatter annihilate. It reviews some of the fundamental difficulties in producing antimatter and means for storing it. If...summer of 1983 Rand examined the possibilities for exploiting the high energy release resulting from matter-antimatter annihilation. The resultant...several issues inherent in exploiting the energy released when matter and antimatter annihilate. Some of the fundamental difficulties in producing
The role of precise time in IFF
NASA Technical Reports Server (NTRS)
Bridge, W. M.
1982-01-01
The application of precise time to the identification of friend or foe (IFF) problem is discussed. The simple concept of knowing when to expect each signal is exploited in a variety of ways to achieve an IFF system which is hard to detect, minimally exploitable and difficult to jam. Precise clocks are the backbone of the concept and the various candidates for this role are discussed. The compact rubidium-controlled oscillator is the only practical candidate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
Within this paper, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. Additionally, the solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
Extending compile-time reverse mode and exploiting partial separability in ADIFOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
The issues presented in this research paper refer to problems connected with controlling the exploitation process implemented in complex exploitation systems for technical objects. The article presents a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, implemented with semi-Markov decision processes. The presented method focuses on building the decision model of the exploitation process of the technical objects (a semi-Markov model) and then specifying the best control strategy (the optimal strategy) from among the possible decision variants, in accordance with the approved criterion (or criteria) for evaluating the operation of the exploitation system of the technical objects. In the presented method, specifying the optimal strategy for availability control of the technical objects means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to find the optimal control strategy. The method is illustrated with the example of the exploitation process of the means of transport implemented in a real municipal bus transport system. The model of the exploitation process for the means of transport was prepared on the basis of data collected in the real transport system. The mathematical model of the exploitation process was built taking into consideration the fact that the model constitutes a homogeneous semi-Markov process.
Cheng, Xu-Dong; Feng, Liang; Zhang, Ming-Hua; Gu, Jun-Fei; Jia, Xiao-Bin
2014-10-01
The purpose of the secondary exploitation of Chinese medicine is to improve the quality of Chinese medicine products, enhance their core competitiveness, promote better use in clinical practice, and more effectively relieve patients' suffering. Herbs, extraction, separation, refining, preparation, and quality control are all involved in the industrial promotion of the secondary exploitation of Chinese medicine. Quality improvement and industrial promotion of Chinese medicine can be realized through whole-process optimization, quality control, and overall process improvement. Based on the "component structure theory", the "multi-dimensional structure & process dynamic quality control system", and the systematic and holistic character of Chinese medicine, impacts on the whole process are discussed. A technology system for the industrial promotion of Chinese medicine was built to provide a theoretical basis for improving the quality and efficacy of secondarily developed traditional Chinese medicine products.
Kim, Seung-Won; Koh, Je-Sung; Lee, Jong-Gu; Ryu, Junghyun; Cho, Maenghyo; Cho, Kyu-Jin
2014-09-01
The Venus flytrap uses bistability, the structural characteristic of its leaf, to actuate the leaf's rapid closing motion for catching its prey. This paper presents a flytrap-inspired robot and novel actuation mechanism that exploits the structural characteristics of this structure and a developable surface. We focus on the concept of exploiting structural characteristics for actuation. Using shape memory alloy (SMA), the robot actuates artificial leaves made from asymmetrically laminated carbon fiber reinforced prepregs. We exploit two distinct structural characteristics of the leaves. First, the bistability acts as an implicit actuator enabling rapid morphing motion. Second, the developable surface has a kinematic constraint that constrains the curvature of the artificial leaf. Due to this constraint, the curved artificial leaf can be unbent by bending the straight edge orthogonal to the curve. The bending propagates from one edge to the entire surface and eventually generates an overall shape change. The curvature change of the artificial leaf is 18 m⁻¹ within 100 ms when closing. Experiments show that these actuation mechanisms facilitate the generation of a rapid and large morphing motion of the flytrap robot by one-way actuation of the SMA actuators at a local position.
Evaluation methodology for query-based scene understanding systems
NASA Astrophysics Data System (ADS)
Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.
2015-05-01
In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
NASA Astrophysics Data System (ADS)
Arango-Galvan, C.; Flores-Marquez, E.; Prol-Ledesma, R.; Working Group, I.
2007-05-01
The lack of sufficient drinking water in México has become a very serious problem, especially in the northern desert regions of the country. In order to give a real solution to this phenomenon, the IMPULSA research program has been created to develop novel technologies based on desalination of sea and brackish water using renewable sources of energy to face the problem. The Punta Banda geothermal anomaly is located towards the northern part of the Baja California Peninsula (Mexico). High water temperatures in some wells along the coast revealed a geothermal anomaly. An audiomagnetotelluric survey was carried out in the area as a preliminary study, both to understand the process generating these anomalous temperatures and to assess its potential exploitation to supply hot water to desalination plants. Among the electromagnetic methods, the audiomagnetotelluric (AMT) method is appropriate for deep groundwater and geothermal studies. The survey consisted of 27 AMT stations covering a 5 km profile along the Agua Blanca Fault. The employed array allowed us to characterize the geoelectrical properties of the main structures up to 500 m depth. Two main geoelectrical zones were identified: 1) a shallow low-resistivity medium located at the central portion of the profile, coinciding with the Maneadero valley, and 2) two high-resistivity structures bordering the conductive zone, possibly related to NS faulting already identified by previous geophysical studies. These results suggest that the main geothermal anomalies are controlled by the dominant structural regime in the zone.
Sharing the Benefits of Research Fairly: Two Approaches
Millum, Joseph
2016-01-01
Research projects sponsored by rich countries or companies and carried out in developing countries are frequently described as exploitative. One important debate about the prevention of exploitation in research centers on whether and how clinical research in developing countries should be responsive to local health problems. This paper analyses the responsiveness debate and draws out more general lessons for how policy makers can prevent exploitation in various research contexts. There are two independent ways to do this in the face of entrenched power differences: to impose restrictions on the content of benefit-sharing arrangements, and to institute independent effective oversight. Which method should be chosen is highly dependent on context. PMID:21947808
Bionomic Exploitation of a Ratio-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Patra, Bibek; Samanta, G. P.
2008-01-01
The present article deals with the problem of combined harvesting of a Michaelis-Menten-type ratio-dependent predator-prey system. The problem of determining the optimal harvest policy is solved by invoking Pontryagin's Maximum Principle. Dynamic optimization of the harvest policy is studied by taking the combined harvest effort as a dynamic…
Producing and Scrounging during Problem Based Learning
ERIC Educational Resources Information Center
Vickery, William L.
2013-01-01
When problem based learning occurs in a social context it is open to a common social behaviour, scrounging. In the animal behaviour literature, scroungers do not attempt to find resources themselves but rather exploit resources found by other group members (referred to as producers). We know from studies of animal behaviour (including humans) that…
Understand the Big Picture So You Can Plan for Network Security
ERIC Educational Resources Information Center
Cervone, Frank
2005-01-01
This article discusses network security for libraries. It indicates that there were only six exploit (security exposure) problems, worldwide, reported to the CERT Coordination Center back in 1988. In that year, the CERT had just been established to provide a clearinghouse for exchanging information about network security problems. By 2003, the…
FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry
NASA Astrophysics Data System (ADS)
Dai, Jisheng; Liu, An; Lau, Vincent K. N.
2018-05-01
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
Passivity-based control with collision avoidance for a hub-beam spacecraft
NASA Astrophysics Data System (ADS)
Wen, Hao; Chen, Ti; Jin, Dongping; Hu, Haiyan
2017-01-01
For the application of robotically assembling large space structures, a feedback control law is synthesized for transitional and rotational maneuvers of a 'tug' spacecraft in order to transport a flexible element to a desired position without colliding with other space bodies. The flexible element is treated as a long beam clamped to the 'tug' spacecraft modelled as a rigid hub. First, the physical property of passivity of Euler-Lagrange system is exploited to design the position and attitude controllers by taking a simpler obstacle-free control problem into account. To reduce sensing and actuating requirements, the vibration modes of the beam appendage are supposed to be not directly measured and actuated on. Besides, the requirements of measuring velocities are removed with the aid of a dynamic extension technique. Second, the bounding boxes in the form of super-quadric surfaces are exploited to enclose the maximal extents of the obstacles and the hub-beam spacecraft. The collision avoidance between bounding boxes is achieved by applying additional repulsive force and torque to the spacecraft based on the method of artificial potential field. Finally, the effectiveness of proposed control scheme is numerically demonstrated via case studies.
Odor Landscapes in Turbulent Environments
NASA Astrophysics Data System (ADS)
Celani, Antonio; Villermaux, Emmanuel; Vergassola, Massimo
2014-10-01
The olfactory system of male moths is exquisitely sensitive to pheromones emitted by females and transported in the environment by atmospheric turbulence. Moths respond to minute amounts of pheromones, and their behavior is sensitive to the fine-scale structure of turbulent plumes where pheromone concentration is detectible. The signal of pheromone whiffs is qualitatively known to be intermittent, yet quantitative characterization of its statistical properties is lacking. This challenging fluid dynamics problem is also relevant for entomology, neurobiology, and the technological design of olfactory stimulators aimed at reproducing physiological odor signals in well-controlled laboratory conditions. Here, we develop a Lagrangian approach to the transport of pheromones by turbulent flows and exploit it to predict the statistics of odor detection during olfactory searches. The theory yields explicit probability distributions for the intensity and the duration of pheromone detections, as well as their spacing in time. Predictions are favorably tested by using numerical simulations, laboratory experiments, and field data for the atmospheric surface layer. The resulting signal of odor detections lends itself to implementation with state-of-the-art technologies and quantifies the amount and the type of information that male moths can exploit during olfactory searches.
Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions
Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas
2012-01-01
We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
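A minimal sketch of the Random Walker core on a toy 4-neighbor image graph, solving the seeded Dirichlet problem with sparse linear algebra (weights and seeds are illustrative; the cosegmentation terms and CUDA kernels of the paper are not shown):

import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

img = np.array([[0.0, 0.1, 0.9, 1.0],
                [0.0, 0.2, 0.8, 1.0]])
h, w = img.shape
n = h * w
idx = lambda r, c: r * w + c

L = lil_matrix((n, n))                        # graph Laplacian of the image grid
for r in range(h):
    for c in range(w):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < h and cc < w:
                wgt = np.exp(-10.0 * (img[r, c] - img[rr, cc]) ** 2)
                i, j = idx(r, c), idx(rr, cc)
                L[i, j] -= wgt; L[j, i] -= wgt
                L[i, i] += wgt; L[j, j] += wgt

seeds = {idx(0, 0): 1.0, idx(0, w - 1): 0.0}  # foreground / background seeds
free = [i for i in range(n) if i not in seeds]
L = csr_matrix(L)
b = -L[free][:, list(seeds)] @ np.array(list(seeds.values()))
x = spsolve(L[free][:, free], b)              # P(walker reaches the fg seed first)
print(np.round(x, 2))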
Tuberculosis: An Inorganic Medicinal Chemistry Perspective.
Viganor, Livia; Skerry, Ciaran; McCann, Malachy; Devereux, Michael
2015-01-01
Tuberculosis (TB), which is caused by the resilient pathogen Mycobacterium tuberculosis (MTB), has re-emerged to become a leading public health problem in the world. The growing number of multi-drug-resistant MTB strains and the more recently emerging problem of extensively drug-resistant strains of the pathogen are greatly undermining conventional anti-TB therapeutic strategies, which are lengthy and expose patients to toxicity and other unwanted side effects. The search for new anti-TB drugs essentially involves either the repurposing of existing organic drugs which are now off patent and already FDA approved, the synthesis of modified analogues of existing organic drugs, with the aim of shortening and improving drug treatment for the disease, or the search for novel structures that offer the possibility of new mechanisms of action against the mycobacterium. Inorganic medicinal chemistry offers an alternative to organic drugs through opportunities for the design of therapeutics that target different biochemical pathways. The incorporation of metal ions into the molecular structure of a potential drug offers the medicinal chemist an opportunity to exploit structural diversity, access various oxidation states of the metal, and enhance the activity of an established organic drug through its coordination to the metal centre. In this review, we summarize what is currently known about the antitubercular capability of metal complexes, their mechanisms of action and speculate on their potential applications in the clinic.
Progressive Stochastic Reconstruction Technique (PSRT) for cryo electron tomography.
Turoňová, Beata; Marsalek, Lukas; Davidovič, Tomáš; Slusallek, Philipp
2015-03-01
Cryo Electron Tomography (cryoET) plays an essential role in Structural Biology, as it is the only technique that makes it possible to study the structure of large macromolecular complexes in their close-to-native environment in situ. The reconstruction methods currently in use, such as Weighted Back Projection (WBP) or the Simultaneous Iterative Reconstruction Technique (SIRT), deliver noisy and low-contrast reconstructions, which complicates the application of high-resolution protocols such as Subtomogram Averaging (SA). We propose the Progressive Stochastic Reconstruction Technique (PSRT), a novel iterative approach to tomographic reconstruction in cryoET based on Monte Carlo random walks guided by a Metropolis-Hastings sampling strategy. We design a progressive reconstruction scheme to suit the conditions present in cryoET and apply it successfully to reconstructions of macromolecular complexes from both synthetic and experimental datasets. We show how to integrate PSRT into SA, where it provides an elegant solution to the region-of-interest problem and delivers high-contrast reconstructions that significantly improve template-based localization without any loss of high-resolution structural information. Furthermore, the locality of SA is exploited to design an importance sampling scheme which significantly speeds up the otherwise slow Monte Carlo approach. Finally, we design a new memory-efficient solution for the specimen-level interior problem of cryoET, removing all associated artifacts.
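A minimal sketch of a Metropolis-Hastings-guided random walk of the kind PSRT builds on (the 1D target density and step size are illustrative stand-ins for the tomographic sampling problem):

import numpy as np

rng = np.random.default_rng(1)
target = lambda x: np.exp(-0.5 * x**2) + 0.5 * np.exp(-0.5 * (x - 4.0)**2)

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(scale=1.0)          # symmetric random-walk proposal
    if rng.random() < min(1.0, target(prop) / target(x)):
        x = prop                              # Metropolis-Hastings acceptance
    samples.append(x)
print(np.mean(samples), np.std(samples))      # samples trace the bimodal target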
Structure-preserving and rank-revealing QR-factorizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Hansen, P.C.
1991-11-01
The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular for sparse matrices. A sparse RRQR algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then applies the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR-factorization exploits the fact that certain column exchanges do not change the sparsity structure, and computes a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce the RRQR-factorization.
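A minimal sketch of the rank-revealing idea using a column-pivoted QR (SciPy's dense routine stands in for the paper's sparse, ICE-guarded strategy):

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 30))  # numerical rank 8

Q, R, piv = qr(A, pivoting=True)              # column-pivoted QR-factorization
tol = abs(R[0, 0]) * 1e-10
rank = int(np.sum(np.abs(np.diag(R)) > tol))  # rank revealed by the |r_ii| decay
print(rank)                                   # -> 8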
Exploitation in International Paid Surrogacy Arrangements.
Wilkinson, Stephen
2016-05-01
Many critics have suggested that international paid surrogacy is exploitative. Taking such concerns as its starting point, this article asks: (1) how defensible is the claim that international paid surrogacy is exploitative and what could be done to make it less exploitative? (2) In the light of the answer to (1), how strong is the case for prohibiting it? Exploitation could in principle be dealt with by improving surrogates' pay and conditions. However, doing so may exacerbate problems with consent. Foremost amongst these is the argument that surrogates from economically disadvantaged countries cannot validly consent because their background circumstances are coercive. Several versions of this argument are examined and I conclude that at least one has some merit. The article's overall conclusion is that while ethically there is something to be concerned about, paid surrogacy is in no worse a position than many other exploitative commercial transactions which take place against a backdrop of global inequality and constrained options, such as poorly-paid and dangerous construction work. Hence, there is little reason to single surrogacy out for special condemnation. On a policy level, the case for prohibiting international commercial surrogacy is weak, despite legitimate concerns about consent and background poverty.
Design of Provider-Provisioned Website Protection Scheme against Malware Distribution
NASA Astrophysics Data System (ADS)
Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka
Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.
Evolution of a designless nanoparticle network into reconfigurable Boolean logic
NASA Astrophysics Data System (ADS)
Bose, S. K.; Lawrence, C. P.; Liu, Z.; Makarenko, K. S.; van Damme, R. M. J.; Broersma, H. J.; van der Wiel, W. G.
2015-12-01
Natural computers exploit the emergent properties and massive parallelism of interconnected networks of locally active components. Evolution has resulted in systems that compute quickly and that use energy efficiently, utilizing whatever physical properties are exploitable. Man-made computers, on the other hand, are based on circuits of functional units that follow given design rules. Hence, physical processes that could potentially be exploited to solve a problem, such as capacitive crosstalk, are left out. Until now, designless nanoscale networks of inanimate matter that exhibit robust computational functionality had not been realized. Here we artificially evolve the electrical properties of a disordered nanomaterials system (by optimizing the values of control voltages using a genetic algorithm) to perform computational tasks reconfigurably. We exploit the rich behaviour that emerges from interconnected metal nanoparticles, which act as strongly nonlinear single-electron transistors, and find that this nanoscale architecture can be configured in situ into any Boolean logic gate. This universal, reconfigurable gate would require about ten transistors in a conventional circuit. Our system meets the criteria for the physical realization of (cellular) neural networks: universality (arbitrary Boolean functions), compactness, robustness and evolvability, which implies scalability to perform more advanced tasks. Our evolutionary approach works around device-to-device variations and the accompanying uncertainties in performance. Moreover, it bears a great potential for more energy-efficient computation, and for solving problems that are very hard to tackle in conventional architectures.
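A minimal sketch of the evolutionary idea with a toy nonlinear stand-in for the nanoparticle network (the device model, gene ranges, and GA settings are all illustrative assumptions, not the experimental system):

import numpy as np

rng = np.random.default_rng(3)
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
target = 2.0 * np.array([0, 1, 1, 0]) - 1.0         # XOR, encoded as -1/+1

W = rng.standard_normal((6, 3))                     # fixed random "device" couplings

def device_output(volts, x):
    hidden = np.tanh(W @ np.concatenate([x, [1.0]]))  # fixed nonlinearity
    return np.tanh(hidden @ volts)                    # control voltages steer it

def fitness(volts):
    out = np.array([device_output(volts, x) for x in inputs])
    return -np.sum((out - target) ** 2)

pop = rng.uniform(-2.0, 2.0, size=(40, 6))          # population of voltage genomes
for _ in range(300):
    scores = np.array([fitness(v) for v in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(scale=0.2, size=(30, 6))
    pop = np.concatenate([parents, children])        # elitism plus mutation
best = max(pop, key=fitness)
print([device_output(best, x) > 0 for x in inputs])  # ideally [False, True, True, False]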
Regime Shift in an Exploited Fish Community Related to Natural Climate Oscillations.
Auber, Arnaud; Travers-Trolet, Morgane; Villanueva, Maria Ching; Ernande, Bruno
2015-01-01
Identifying the various drivers of marine ecosystem regime shifts and disentangling their respective influence are critical tasks for understanding biodiversity dynamics and properly managing exploited living resources such as marine fish communities. Unfortunately, the mechanisms and forcing factors underlying regime shifts in marine fish communities are still largely unknown although climate forcing and anthropogenic pressures such as fishing have been suggested as key determinants. Based on a 24-year-long time-series of scientific surveys monitoring 55 fish and cephalopods species, we report here a rapid and persistent structural change in the exploited fish community of the eastern English Channel from strong to moderate dominance of small-bodied forage fish species with low temperature preferendum that occurred in the mid-1990s. This shift was related to a concomitant warming of the North Atlantic Ocean as attested by a switch of the Atlantic Multidecadal Oscillation from a cold to a warm phase. Interestingly, observed changes in the fish community structure were opposite to those classically induced by exploitation as larger fish species of higher trophic level increased in abundance. Despite not playing a direct role in the regime shift, fishing still appeared as a forcing factor affecting community structure. Moreover, although related to climate, the regime shift may have been facilitated by strong historic exploitation that certainly primed the system by favoring the large dominance of small-bodied fish species that are particularly sensitive to climatic variations. These results emphasize that particular attention should be paid to multidecadal natural climate variability and its interactions with both fishing and climate warming when aiming at sustainable exploitation and ecosystem conservation.
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
A second-order shock-adaptive Godunov scheme based on the generalized Lagrangian formulation
NASA Astrophysics Data System (ADS)
Lepage, Claude
Application of the Godunov scheme to the Euler equations of gas dynamics, based on the Eulerian formulation of flow, smears discontinuities (especially sliplines) over several computational cells, while the accuracy in the smooth flow regions is limited by the cell width. Based on the generalized Lagrangian formulation (GLF), the Godunov scheme yields far superior results. By the use of coordinate streamlines in the GLF, the slipline (itself a streamline) is resolved crisply. Infinite shock resolution is achieved through the splitting of shock cells, while the accuracy in the smooth flow regions is improved using a nonconservative formulation of the governing equations coupled to a second-order extension of the Godunov scheme. Furthermore, the GLF requires no grid generation for boundary value problems, and the simple structure of the solution to the Riemann problem in the GLF is exploited in the numerical implementation of the shock-adaptive scheme. Numerical experiments reveal high efficiency and unprecedented resolution of shock and slipline discontinuities.
NASA Astrophysics Data System (ADS)
Torre, Gabriele; Schwartz, Richard; Piana, Michele; Massone, Anna Maria; Benvenuto, Federico
2016-05-01
The fine spatial resolution of the SDO AIA CCDs is often destroyed by the charge in saturated pixels overflowing into a swath of neighboring cells during fast-rising solar flares. Automated exposure control can only mitigate this issue to a degree, and it has other deleterious effects. Our method treats the desaturation problem for AIA images as an image reconstruction problem in which the information content of the diffraction fringes, generated by the interaction between the incoming radiation and the hardware of the spacecraft, is exploited to recover the true image intensities within the primary saturated core of the image. This methodology takes advantage of well-defined techniques, such as cross-correlation and the Expectation Maximization method, to invert the direct relation between the diffraction fringe intensities and the true flux intensities. This talk provides a complete overview of the structure of the method, together with reliability tests based on its application to synthetic and real data.
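A minimal sketch of the Expectation Maximization step in its Richardson-Lucy form for a known instrument kernel (the 1D Gaussian kernel and synthetic data are illustrative, not the AIA diffraction pattern):

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)
true = np.zeros(64); true[30:34] = 100.0            # bright, compact source
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()
data = rng.poisson(np.convolve(true, psf, mode="same"))  # Poisson counts

est = np.ones_like(true)                             # flat starting estimate
for _ in range(200):                                 # EM / Richardson-Lucy updates
    blur = fftconvolve(est, psf, mode="same")
    est *= fftconvolve(data / np.maximum(blur, 1e-12), psf[::-1], mode="same")
print(est[28:36].round(1))                           # intensities recovered near truth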
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was on the Cray Y-MP using an average of 3.63 processors.
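A minimal sketch of the Lanczos recurrence, whose dominant kernel is the matrix-vector multiply the paper optimizes (dense NumPy with full reorthogonalization stands in for the tuned sparse kernels):

import numpy as np

def lanczos_ritz(A, k, rng=np.random.default_rng(5)):
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q0 = rng.standard_normal(n)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(k):
        w = A @ Q[:, j]                          # the matrix-vector multiply kernel
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w) # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)                 # Ritz values

A = np.diag(np.arange(1.0, 101.0))               # spectrum 1..100
print(lanczos_ritz(A, 20)[-3:])                  # extreme eigenvalues converge first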
The p-version of the finite element method in incremental elasto-plastic analysis
NASA Technical Reports Server (NTRS)
Holzer, Stefan M.; Yosibash, Zohar
1993-01-01
Whereas the higher-order versions of the finite element method (the p- and hp-versions) are fairly well established as highly efficient methods for monitoring and controlling the discretization error in linear problems, little has been done to exploit their benefits in elasto-plastic structural analysis. Aspects of incremental elasto-plastic finite element analysis that are particularly amenable to improvements by the p-version are discussed. These theoretical considerations are supported by several numerical experiments. First, an example for which an analytical solution is available is studied. It is demonstrated that the p-version performs very well even in cycles of elasto-plastic loading and unloading, not only as compared to the traditional h-version but also with respect to the exact solution. Finally, an example of considerable practical importance - the analysis of a cold-worked lug - is presented, which demonstrates how the modeling tools offered by higher-order finite element techniques can contribute to an improved approximation of practical problems.
Decellularized scaffold of cryopreserved rat kidney retains its recellularization potential.
Chani, Baldeep; Puri, Veena; Sobti, Ranbir C; Jha, Vivekanand; Puri, Sanjeev
2017-01-01
The multi-cellular nature of renal tissue makes it the most challenging organ for regeneration. Therefore, to date, whole-organ transplantation remains the definitive treatment for end-stage renal disease (ESRD). The shortage of organs available for transplantation has thus remained a major concern as well as an unsolved problem. In this regard, the generation of a whole-organ scaffold through decellularization, followed by regeneration of the whole organ through recellularization, is viewed as a potential alternative for generating functional tissues. Despite growing interest, the optimal processing to achieve a functional organ remains unsolved; the biggest challenge is the timeline for obtaining a kidney. Keeping these facts in mind, we have assessed the effects of cryostorage (3 months) on renal tissue architecture and its potential for decellularization and recellularization in comparison to freshly isolated kidneys. Light microscopy exploiting different stains, as well as immunohistochemistry and scanning electron microscopy (SEM), demonstrated that the ECM framework is well retained following kidney cryopreservation. The strength of these structures was reinforced by calculating mechanical stress, which confirmed the similarity between the freshly isolated and cryopreserved tissue. Recellularization of these bio-scaffolds with mesenchymal stem cells quickly repopulated the decellularized structures irrespective of the kidneys' status, i.e., freshly isolated or cryopreserved. The growth pattern of the mesenchymal stem cells demonstrated an equivalent recellularization potential. Based on these observations, it may be concluded that cryopreserved kidneys can be exploited as scaffolds for the future development of functional organs.
A perspective of nanotechnology in hypersensitivity reactions including drug allergy.
Montañez, Maria Isabel; Ruiz-Sanchez, Antonio J; Perez-Inestrosa, Ezequiel
2010-08-01
We provide an overview of the application of the concepts of nanoscience and nanotechnology as a novel scientific approach to the area of nanomedicine related to the domain of the immune system. Particular emphasis will be paid to studies on drug allergy reactions. Several well-defined chemical structures arranged in the dimension of the nanoscale are currently being studied for biomedical purposes. By interacting with the immune system, some of these show promising applications as vaccines, diagnostic tools and activators/effectors of the immune response. Even a brief listing of some key applications of nanostructured materials shows how broad and intense this area of nanomedicine is. As a result of the development of nanoscience and nanotechnology applied to medicine, new approaches can be envisioned for problems related to the modulation of the immune response, as well as in immunodiagnosis, and to design new tools to solve related medical challenges. Nanoparticles offer unique advantages with which to exploit new properties and for materials to play a major role in new diagnostic techniques and therapies. Fullerene-C60 and multivalent functionalized gold nanoparticles of various sizes have led to new tools and opened up new ways to study and interact with the immune system. Some of the most versatile nanostructures are dendrimers. In their interaction with the immune system they can mimic naturally occurring macromolecules, taking advantage of the fact that dendrimers can be synthesized into nanosized structures. Their multivalence can be successfully exploited in vaccines and diagnostic tests for allergic reactions.
Biomimetic surface structuring using cylindrical vector femtosecond laser beams
Skoulas, Evangelos; Manousaki, Alexandra; Fotakis, Costas; Stratakis, Emmanuel
2017-01-01
We report on a new, single-step and scalable method to fabricate highly ordered, multi-directional and complex surface structures that mimic the unique morphological features of certain species found in nature. Biomimetic surface structuring was realized by exploiting the unique and versatile angular profile and the electric field symmetry of cylindrical vector (CV) femtosecond (fs) laser beams. It is shown that highly controllable periodic structures exhibiting sizes at nano-, micro- and dual micro/nano scales can be directly written on Ni upon line and large-area scanning with radial and azimuthal polarization beams. Depending on the irradiation conditions, new complex multi-directional nanostructures, inspired by the Shark's skin morphology, as well as superhydrophobic dual-scale structures mimicking the Lotus' leaf water-repellent properties can be attained. It is concluded that the versatility and variety of the structures formed are by far superior to those obtained via laser processing with linearly polarized beams. More importantly, by exploiting the capabilities offered by fs CV fields, the present technique can be further extended to fabricate even more complex and unconventional structures. We believe that our approach provides a new concept in laser materials processing, which can be further exploited for expanding the breadth and novelty of applications. PMID:28327611
NASA Astrophysics Data System (ADS)
Broido, V. L.; Krasnoshtanov, S. U.
2018-03-01
The problems of choosing rational technologies and materials for restoring critical parts and large welded structures of dredges and other mining machines by welding and surfacing methods are considered. Welding and surfacing account for a significant share of the overall labor intensity of repair work at mining enterprises. Both manual arc welding and surfacing and mechanized methods are used, the latter ensuring up to a 24-fold increase in productivity. The work shows examples of using these technologies to restore parts and structures at gold mining enterprises in the Irkutsk region. Some grades of welding and surfacing materials whose production has been mastered by the Irkutsk Heavy Engineering Plant (IZTM) are also presented.
Coral reef management and conservation in light of rapidly evolving ecological paradigms.
Mumby, Peter J; Steneck, Robert S
2008-10-01
The decline of many coral reef ecosystems in recent decades surprised experienced managers and researchers. It shattered old paradigms that these diverse ecosystems are spatially uniform and temporally stable on the scale of millennia. We now see reefs as heterogeneous, fragile, globally stressed ecosystems structured by strong positive or negative feedback processes. We review the causes and consequences of reef decline and ask whether management practices are addressing the problem at appropriate scales. We conclude that both science and management are currently failing to address the comanagement of extractive activities and ecological processes that drive ecosystems (e.g. productivity and herbivory). Most reef conservation efforts are directed toward reserve implementation, but new approaches are needed to sustain ecosystem function in exploited areas.
Floquet-Engineered Valleytronics in Dirac Systems.
Kundu, Arijit; Fertig, H A; Seradjeh, Babak
2016-01-08
Valley degrees of freedom offer a potential resource for quantum information processing if they can be effectively controlled. We discuss an optical approach to this problem in which intense light breaks electronic symmetries of a two-dimensional Dirac material. The resulting quasienergy structures may then differ for different valleys, so that the Floquet physics of the system can be exploited to produce highly polarized valley currents. This physics can be utilized to realize a valley valve whose behavior is determined optically. We propose a concrete way to achieve such valleytronics in graphene as well as in a simple model of an inversion-symmetry broken Dirac material. We study the effect numerically and demonstrate its robustness against moderate disorder and small deviations in optical parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
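A minimal sketch of the low-rank ingredient: compressing a smooth off-diagonal block by randomized range sampling (the kernel, rank, and oversampling are illustrative; the hierarchical solver bookkeeping is not shown):

import numpy as np

rng = np.random.default_rng(6)
n = 400
x, y = np.arange(n), np.arange(n) + 2 * n
B = 1.0 / np.abs(x[:, None] - y[None, :])        # well-separated -> low-rank block

k = 12
Y = B @ rng.standard_normal((n, k + 10))         # randomized range sampling
Q, _ = np.linalg.qr(Y)
U, s, Vt = np.linalg.svd(Q.T @ B, full_matrices=False)
B_lr = (Q @ U[:, :k]) * s[:k] @ Vt[:k]           # rank-k approximation of B
print(np.linalg.norm(B - B_lr) / np.linalg.norm(B))  # small relative error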
Microarray missing data imputation based on a set theoretic framework and biological knowledge.
Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong
2006-01-01
Gene expressions measured using microarrays usually suffer from the missing value problem. However, in many data analysis methods, a complete data matrix is required. Although existing missing value imputation algorithms have shown good performance in dealing with missing values, they also have their limitations. For example, some algorithms perform well only when strong local correlation exists in the data, while others provide the best estimate when the data is dominated by global structure. In addition, these algorithms do not take into account any biological constraints in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge into a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets, taking into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon, and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods.
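A minimal sketch of the POCS idea: alternating projections onto two convex sets, a data-consistency set and a fixed linear gene subspace estimated from complete rows (the synchronization-loss sets of the paper are not modeled):

import numpy as np

rng = np.random.default_rng(7)
X_true = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 20))
mask = rng.random(X_true.shape) > 0.1             # True = observed (~90%)

complete = X_true[mask.all(axis=1)]               # rows with no missing entries
V = np.linalg.svd(complete, full_matrices=False)[2][:3]   # row-space basis

X = np.where(mask, X_true, 0.0)                   # missing entries start at zero
for _ in range(50):
    X = X @ V.T @ V                               # project rows onto the subspace
    X = np.where(mask, X_true, X)                 # project onto data consistency
print(np.abs((X - X_true)[~mask]).max())          # error on missing entries shrinks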
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter; ...
2016-06-30
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
Some Notes on Black Alcoholism Prevention.
ERIC Educational Resources Information Center
Watts, Thomas D.; Wright, Roosevelt, Jr.
1985-01-01
Briefly reviews the complexity of the problem of alcoholism in Blacks and the small amount of research available. Discusses related social policies, economic exploitation, and crime related to drinking. (JAC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
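A minimal sketch of the reverse mode itself, as a tiny tape-based autodiff in place of ADIFOR's Fortran source translation (the operator set and class names are illustrative):

class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])

def backward(out):
    order, seen = [], set()
    def topo(node):                      # topological order of the computation graph
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                topo(parent)
            order.append(node)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):         # reverse sweep: accumulate adjoints
        for parent, local in node.parents:
            parent.grad += local * node.grad

x, y = Var(2.0), Var(3.0)
f = x * y + x * x                        # f = xy + x^2
backward(f)
print(x.grad, y.grad)                    # df/dx = y + 2x = 7.0, df/dy = x = 2.0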
Automated Software Vulnerability Analysis
NASA Astrophysics Data System (ADS)
Sezer, Emre C.; Kil, Chongkyung; Ning, Peng
Despite decades of research, software continues to have vulnerabilities. Successful exploitations of these vulnerabilities by attackers cost millions of dollars to businesses and individuals. Unfortunately, most effective defensive measures, such as patching and intrusion prevention systems, require an intimate knowledge of the vulnerabilities. Many systems for detecting attacks have been proposed. However, the analysis of the exploited vulnerabilities is left to security experts and programmers. Both the human effort involved and the slow analysis process are unfavorable for timely defensive measures to be deployed. The problem is exacerbated by zero-day attacks.
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t, unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
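A minimal sketch of the stochastic dynamic programming formulation with direct feedback on (population, environment) states (the grid, transition matrix, and growth numbers are illustrative, not the Mallard estimates):

import numpy as np

pop = np.arange(0, 101, 10)                       # breeding-population grid
env_T = np.array([[0.7, 0.3],                     # wet/dry pond-state transitions
                  [0.4, 0.6]])
growth = np.array([1.6, 1.1])                     # per-capita growth by env state
rates = np.linspace(0.0, 0.5, 6)                  # candidate exploitation rates

def q_values(i, e, V):
    n = pop[i]
    out = []
    for h in rates:
        n_next = min((1.0 - h) * n * growth[e], pop[-1])
        j = int(np.argmin(np.abs(pop - n_next)))  # nearest grid point
        out.append(h * n + 0.95 * env_T[e] @ V[j])
    return np.array(out)

V = np.zeros((len(pop), 2))
for _ in range(200):                              # value iteration to a fixed point
    V = np.array([[q_values(i, e, V).max() for e in range(2)]
                  for i in range(len(pop))])

policy = np.array([[rates[int(q_values(i, e, V).argmax())] for e in range(2)]
                   for i in range(len(pop))])
print(policy)           # optimal exploitation rate as feedback on the observed state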
A technique for solving constraint satisfaction problems using Prolog's definite clause grammars
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1988-01-01
A new technique for solving constraint satisfaction problems using Prolog's definite clause grammars is presented. It exploits the fact that the grammar rule notation can be viewed as a state exchange notation. The novel feature of the technique is that it can perform informed as well as blind search. It provides the Prolog programmer with a new technique for application to a wide range of design, scheduling, and planning problems.
NASA Astrophysics Data System (ADS)
Huang, Daniel Z.; De Santis, Dante; Farhat, Charbel
2018-07-01
The Finite Volume method with Exact two-material Riemann Problems (FIVER) is both a computational framework for multi-material flows characterized by large density jumps, and an Embedded Boundary Method (EBM) for computational fluid dynamics and highly nonlinear Fluid-Structure Interaction (FSI) problems. This paper deals with the EBM aspect of FIVER. For FSI problems, this EBM has already demonstrated the ability to address viscous effects along wall boundaries, and large deformations and topological changes of such boundaries. However, like for most EBMs - also known as immersed boundary methods - the performance of FIVER in the vicinity of a wall boundary can be sensitive with respect to the position and orientation of this boundary relative to the embedding mesh. This is mainly due to ill-conditioning issues that arise when an embedded interface becomes too close to a node of the embedding mesh, which may lead to spurious oscillations in the computed solution gradients at the wall boundary. This paper resolves these issues by introducing an alternative definition of the active/inactive status of a mesh node that leads to the removal of all sources of potential ill-conditioning from all spatial approximations performed by FIVER in the vicinity of a fluid-structure interface. It also makes two additional contributions. The first one is a new procedure for constructing the fluid-structure half Riemann problem underlying the semi-discretization by FIVER of the convective fluxes. This procedure eliminates one extrapolation from the conventional treatment of the wall boundary conditions and replaces it by an interpolation, which improves robustness. The second contribution is a post-processing algorithm for computing quantities of interest at the wall that achieves smoothness in the computed solution and its gradients. Lessons learned from these enhancements and contributions that are triggered by the new definition of the status of a mesh node are then generalized and exploited to eliminate from the original version of the FIVER method its sensitivities with respect to both of the position and orientation of the wall boundary relative to the embedding mesh, while maintaining the original definition of the status of a mesh node. This leads to a family of second-generation FIVER methods whose performance is illustrated in this paper for several flow and FSI problems. These include a challenging flow problem over a bird wing characterized by a feather-induced surface roughness, and a complex flexible flapping wing problem for which experimental data is available.
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
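A minimal sketch of the idea: a first-order Taylor control variate whose mean is known exactly, built from one inexpensive sensitivity derivative (the function, distribution, and numbers are illustrative stand-ins for an analysis code):

import numpy as np

rng = np.random.default_rng(8)
mu, sigma = 1.0, 0.1
f = lambda x: np.exp(np.sin(3.0 * x))                             # "analysis code"
dfdx = lambda x: 3.0 * np.cos(3.0 * x) * np.exp(np.sin(3.0 * x))  # its sensitivity

X = rng.normal(mu, sigma, 10000)
plain = f(X)                                    # plain Monte Carlo samples
taylor = f(mu) + dfdx(mu) * (X - mu)            # control variate, E[taylor] = f(mu)
controlled = plain - taylor + f(mu)             # same mean, much lower variance
print(plain.mean(), controlled.mean())          # agree (both unbiased)
print(plain.std(), controlled.std())            # the std drops sharply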
ERIC Educational Resources Information Center
Coles, Flournoy A., Jr.
1973-01-01
This article discusses some of the more important economic problems of minorities in the United States, identifying the economics of minorities with the economics of poverty, discrimination, exploitation, urban life, and alienation. (JM)
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.
Helms, Lucas; Clune, Jeff
2017-01-01
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
NASA Astrophysics Data System (ADS)
Fios, Frederikus
2017-04-01
Oenbit village is an area located in the district of Timor Tengah Utara (TTU), Timor Island, East Nusa Tenggara Province, Indonesia. In Oenbit, a conflict is ongoing between the economic interests of several parties, namely the government, a corporation, and the local indigenous community. The government of Timor Tengah Utara gave legal permission to the Elgari Resources Indonesia (ERI) Company to mine manganese in Oenbit Village, on land that is informally the ancestral land of the indigenous people of Oenbit, hereditarily called pusuf kelef and Kot-tau niap-tau (king land). Oenbit society holds the ethical belief that the ancestral land of Oenbit should not be worked by outside parties, but only by the local community on the orders of the king. Manganese exploitation in Oenbit Village therefore causes contradictions that are interesting to reflect on ethically and philosophically. This paper aims to reflect on the ethical position in this case of manganese exploitation in Oenbit Village, focusing on the local government's decision to issue an exploitation permit and on the ERI Company's mining of manganese, which the traditional indigenous community of Oenbit regards as unethical. The study found that the district government and the ERI Company violated public ethics and the community's traditional law, especially the rights of the local indigenous community, by exploiting the manganese deposits. The method used is philosophical reflection with ethical approaches and relevant ethical theories.
Microwave tomography for GPR data processing in archaeology and cultural heritages diagnostics
NASA Astrophysics Data System (ADS)
Soldovieri, F.
2009-04-01
Ground Penetrating Radar (GPR) is one of the most practical and user-friendly instruments for detecting buried remains and performing diagnostics of archaeological structures, with the aim of detecting hidden features (defects, voids, constructive typology, etc.). The GPR technique allows measurements over large areas to be performed very quickly, thanks to portable instrumentation. Despite the widespread exploitation of GPR as a data acquisition system, many difficulties arise in processing GPR data so as to obtain images that are reliable and easily interpretable by end-users. This difficulty is exacerbated when no a priori information is available, as arises, for example, in the case of historical heritage structures for which knowledge of the construction methods and materials may be completely missing. A possible answer to the above-cited difficulties resides in the development and exploitation of microwave tomography algorithms [1, 2], based on more refined electromagnetic scattering models than the ones usually adopted in the classical radar approach. By exploiting the microwave tomographic approach, it is possible to gain accurate and reliable "images" of the investigated structure in order to detect, localize, and possibly determine the extent and the geometrical features of the embedded objects. In this framework, the adoption of simplified models of the electromagnetic scattering appears very convenient for practical and theoretical reasons. First, linear inversion algorithms are numerically efficient, allowing domains that are large in terms of the probing wavelength to be investigated in quasi real-time, even in 3D, by adopting schemes based on the combination of 2D reconstructions [3]. In addition, the solution approaches are very robust against uncertainties in the parameters of the measurement configuration and of the investigated scenario. From a theoretical point of view, linear models offer further advantages: the absence of false solutions (an issue that arises in nonlinear inverse problems); the exploitation of well-known regularization tools for achieving a stable solution of the problem; and the possibility of analyzing the reconstruction performance of the algorithm once the measurement configuration and the properties of the host medium are known. Here, we present the main features and the reconstruction results of a linear inversion algorithm based on the Born approximation in realistic applications in archaeology and cultural heritage diagnostics. The Born model is useful when penetrable objects are under investigation. As is well known, the Born approximation is used to solve the forward problem, that is, the determination of the scattered field from a known object under the hypothesis of a weak scatterer, i.e., an object whose dielectric permittivity is slightly different from that of the host medium and whose extent is small in terms of the probing wavelength. Conversely, for the inverse scattering problem, the above hypotheses can be relaxed at the cost of renouncing a "quantitative reconstruction" of the object. In fact, as already shown by results in realistic conditions [4, 5], the adoption of a Born-model inversion scheme makes it possible to detect, localize, and determine the geometry of the object even in the case of scatterers that are not weak.
[1] R. Persico, R. Bernini, F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the Born approximation", IEEE Trans. Antennas and Propagation, vol. 53, no. 6, pp. 1875-1887, June 2005. [2] F. Soldovieri, J. Hugenschmidt, R. Persico and G. Leone, "A linear inverse scattering algorithm for realistic GPR applications", Near Surface Geophysics, vol. 5, no. 1, pp. 29-42, February 2007. [3] R. Solimene, F. Soldovieri, G. Prisco, R. Pierri, "Three-Dimensional Microwave Tomography by a 2-D Slice-Based Reconstruction Algorithm", IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 4, pp. 556-560, Oct. 2007. [4] L. Orlando, F. Soldovieri, "Two different approaches for georadar data processing: a case study in archaeological prospecting", Journal of Applied Geophysics, vol. 64, pp. 1-13, March 2008. [5] F. Soldovieri, M. Bavusi, L. Crocco, S. Piscitelli, A. Giocoli, F. Vallianatos, S. Pantellis, A. Sarris, "A comparison between two GPR data processing techniques for fracture detection and characterization", Proc. of 70th EAGE Conference & Exhibition, Rome, Italy, 9-12 June 2008.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. The previous attempts (including ones with the original AMGe method [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
Cartesian control of redundant robots
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1989-01-01
A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four-degree-of-freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
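A minimal sketch of the redundancy-resolution mechanics at the velocity level: a pseudoinverse task solution plus a null-space term for a secondary objective (the Jacobian and objective are illustrative; the paper's dynamically optimal torque-level map is not reproduced):

import numpy as np

rng = np.random.default_rng(10)
J = rng.standard_normal((2, 4))                  # m=2 task dims, n=4 joints (m < n)
xdot = np.array([0.3, -0.1])                     # commanded Cartesian velocity

J_pinv = np.linalg.pinv(J)
N = np.eye(4) - J_pinv @ J                       # projector onto the null space of J
qdot0 = np.array([-0.5, 0.0, 0.0, 0.0])          # secondary-objective velocity
qdot = J_pinv @ xdot + N @ qdot0                 # task met; redundancy reused

print(J @ qdot - xdot)                           # ~[0, 0]: tracking unaffected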
Motion and force control of multiple robotic manipulators
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz-Delgado, Kenneth
1992-01-01
This paper addresses the motion and force control problem of multiple robot arms manipulating a cooperatively held object. A general control paradigm is introduced which decouples the motion and force control problems. For motion control, different control strategies are constructed based on the variables used as the control input in the controller design. There are three natural choices: acceleration of a generalized coordinate, arm-tip force vectors, and the joint torques. The first two choices require full model information but produce simple models for the control design problem. The last choice results in a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system. The motion control only determines the joint torque to within a manifold, due to the multiple-arm kinematic constraint. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, an optimization can be performed to best allocate the desired end-effector control force to the joint actuators. The other possibility is to control the internal force about some set point. It is shown that effective force regulation can be achieved even if little model information is available.
NASA Astrophysics Data System (ADS)
Wen, Fang-Qing; Zhang, Gong; Ben, De
2015-11-01
This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, can be applied in scenarios with limited data support and low signal-to-noise ratio (SNR). Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.
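As a toy illustration of casting DOA estimation as sparse support recovery over an over-complete steering dictionary, the sketch below uses a half-wavelength uniform linear array, a single snapshot, and plain orthogonal matching pursuit; the array geometry and the greedy solver are simplifying assumptions of ours, not the paper's MIMO model or its block-sparse Bayesian learner:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16                                   # sensors (half-wavelength ULA)
grid = np.deg2rad(np.arange(-90, 90.5, 0.5))
A = np.exp(1j*np.pi*np.outer(np.arange(M), np.sin(grid)))  # over-complete dictionary

true = np.deg2rad([-20.0, 35.0])         # two sources on the grid
x = sum(np.exp(1j*np.pi*np.arange(M)*np.sin(t)) for t in true)
y = x + 0.05*(rng.standard_normal(M) + 1j*rng.standard_normal(M))

# Orthogonal matching pursuit over the angle grid.
support, r = [], y.copy()
for _ in range(2):
    k = int(np.argmax(np.abs(A.conj().T @ r)))   # best-matching steering vector
    support.append(k)
    As = A[:, support]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)
    r = y - As @ coef                            # deflate and repeat

print(sorted(np.rad2deg(grid[support]).round(1)))   # close to [-20.0, 35.0]
```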
Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois-Henry; ...
2016-10-27
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups of up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.
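The core compression step, building a low-rank basis for a well-separated off-diagonal block by randomized sampling, can be sketched as follows; this generic randomized range finder (in the Halko et al. style) stands in for STRUMPACK's interpolative decompositions, and the kernel block is a made-up example:

```python
import numpy as np

def randomized_lowrank(B, rank, oversample=10, rng=None):
    # Randomized range finder: sample Y = B @ Omega, orthonormalize,
    # then compress.  B is approximated as Q @ (Q^T B).
    rng = rng or np.random.default_rng(0)
    Omega = rng.standard_normal((B.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(B @ Omega)       # approximate column basis of B
    return Q, Q.T @ B

# A smooth kernel evaluated on well-separated clusters has rapidly
# decaying singular values, hence an accurate low-rank approximation.
x, y = np.linspace(0, 1, 300), np.linspace(2, 3, 300)
B = 1.0 / np.abs(x[:, None] - y[None, :])
Q, Z = randomized_lowrank(B, rank=12)
print(np.linalg.norm(B - Q @ Z) / np.linalg.norm(B))   # small relative error
```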
McGowan, Sheena; Porter, Corrine J; Lowther, Jonathan; Stack, Colin M; Golding, Sarah J; Skinner-Adams, Tina S; Trenholme, Katharine R; Teuscher, Franka; Donnelly, Sheila M; Grembecka, Jolanta; Mucha, Artur; Kafarski, Pawel; Degori, Ross; Buckle, Ashley M; Gardiner, Donald L; Whisstock, James C; Dalton, John P
2009-02-24
Plasmodium falciparum parasites are responsible for the major global disease malaria, which results in >2 million deaths each year. With the rise of drug-resistant malarial parasites, novel drug targets and lead compounds are urgently required for the development of new therapeutic strategies. Here, we address this important problem by targeting the malarial neutral aminopeptidases that are involved in the terminal stages of hemoglobin digestion and essential for the provision of amino acids used for parasite growth and development within the erythrocyte. We characterize the structure and substrate specificity of one such aminopeptidase, PfA-M1, a validated drug target. The X-ray crystal structure of PfA-M1 alone and in complex with the generic inhibitor, bestatin, and a phosphinate dipeptide analogue with potent in vitro and in vivo antimalarial activity, hPheP[CH(2)]Phe, reveals features within the protease active site that are critical to its function as an aminopeptidase and can be exploited for drug development. These results set the groundwork for the development of antimalarial therapeutics that target the neutral aminopeptidases of the parasite.
Structural basis for the inhibition of the essential Plasmodium falciparum M1 neutral aminopeptidase
McGowan, Sheena; Porter, Corrine J.; Lowther, Jonathan; Stack, Colin M.; Golding, Sarah J.; Skinner-Adams, Tina S.; Trenholme, Katharine R.; Teuscher, Franka; Donnelly, Sheila M.; Grembecka, Jolanta; Mucha, Artur; Kafarski, Pawel; DeGori, Ross; Buckle, Ashley M.; Gardiner, Donald L.; Whisstock, James C.; Dalton, John P.
2009-01-01
Plasmodium falciparum parasites are responsible for the major global disease malaria, which results in >2 million deaths each year. With the rise of drug-resistant malarial parasites, novel drug targets and lead compounds are urgently required for the development of new therapeutic strategies. Here, we address this important problem by targeting the malarial neutral aminopeptidases that are involved in the terminal stages of hemoglobin digestion and essential for the provision of amino acids used for parasite growth and development within the erythrocyte. We characterize the structure and substrate specificity of one such aminopeptidase, PfA-M1, a validated drug target. The X-ray crystal structure of PfA-M1 alone and in complex with the generic inhibitor, bestatin, and a phosphinate dipeptide analogue with potent in vitro and in vivo antimalarial activity, hPheP[CH2]Phe, reveals features within the protease active site that are critical to its function as an aminopeptidase and can be exploited for drug development. These results set the groundwork for the development of antimalarial therapeutics that target the neutral aminopeptidases of the parasite. PMID:19196988
Shivkumar, Sabyasachi; Muralidharan, Vignesh; Chakravarthy, V Srinivasa
2017-01-01
The basal ganglia circuit is an important subcortical system of the brain thought to be responsible for reward-based learning. The striatum, the largest nucleus of the basal ganglia, serves as an input port that maps cortical information. Microanatomical studies show that the striatum is a mosaic of specialized input-output structures called striosomes and regions of the surrounding matrix called the matrisomes. We have developed a computational model of the striatum using layered self-organizing maps to capture the center-surround structure seen experimentally and explain its functional significance. We believe that these structural components could build representations of state and action spaces in different environments. The striatum model is then integrated with other components of the basal ganglia, making it capable of solving reinforcement learning tasks. We have proposed a biologically plausible mechanism of action-based learning where the striosome biases the matrisome activity toward a preferred action. Several studies indicate that the striatum is critical in solving context-dependent problems. We build on this hypothesis, and the proposed model exploits the modularity of the striatum to efficiently solve such tasks.
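A plain Kohonen self-organizing map conveys the mapping mechanism such models build on; the 8x8 grid, uniform input distribution, and decay schedules below are arbitrary illustrative choices of ours, not the paper's layered striosome-matrisome architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
grid = 8                                   # 8x8 map of units
W = rng.standard_normal((grid, grid, 2))   # weight vector per unit
ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

for t in range(2000):
    x = rng.uniform(-1, 1, 2)              # input sample ("cortical state")
    d = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
    sigma = 2.0 * np.exp(-t / 1000)        # shrinking neighbourhood width
    lr = 0.5 * np.exp(-t / 1000)           # decaying learning rate
    h = np.exp(-((ii - bi)**2 + (jj - bj)**2) / (2*sigma**2))
    W += lr * h[..., None] * (x - W)       # pull the neighbourhood toward x

# After training the weights tile the input square [-1, 1]^2.
print(W.reshape(-1, 2).min(axis=0), W.reshape(-1, 2).max(axis=0))
```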
Anomalous sea surface structures as an object of statistical topography
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.; Koshel, K. V.
2015-06-01
By exploiting ideas of statistical topography, we analyze the stochastic boundary problem of the emergence of anomalously high structures on the sea surface. The kinematic boundary condition on the sea surface is assumed to be a closed stochastic quasilinear equation. Applying the stochastic Liouville equation, and presuming the stochastic nature of a given hydrodynamic velocity field within the diffusion approximation, we derive an equation for a spatially single-point, simultaneous joint probability density of the surface elevation field and its gradient. An important feature of the model is that it accounts for stochastic bottom irregularities as one, but not the only, perturbation. Hence, we adopt the assumption of an infinitely deep ocean to obtain statistical features of the surface elevation field and the squared elevation gradient field. According to the calculations, we show that clustering in the absolute surface elevation gradient field happens with unit probability. This results in the emergence of rare events, such as anomalously high structures and deep gaps on the sea surface, in almost every realization of a stochastic velocity field.
Shivkumar, Sabyasachi; Muralidharan, Vignesh; Chakravarthy, V. Srinivasa
2017-01-01
The basal ganglia circuit is an important subcortical system of the brain thought to be responsible for reward-based learning. The striatum, the largest nucleus of the basal ganglia, serves as an input port that maps cortical information. Microanatomical studies show that the striatum is a mosaic of specialized input-output structures called striosomes and regions of the surrounding matrix called the matrisomes. We have developed a computational model of the striatum using layered self-organizing maps to capture the center-surround structure seen experimentally and explain its functional significance. We believe that these structural components could build representations of state and action spaces in different environments. The striatum model is then integrated with other components of the basal ganglia, making it capable of solving reinforcement learning tasks. We have proposed a biologically plausible mechanism of action-based learning where the striosome biases the matrisome activity toward a preferred action. Several studies indicate that the striatum is critical in solving context-dependent problems. We build on this hypothesis, and the proposed model exploits the modularity of the striatum to efficiently solve such tasks. PMID:28680395
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.
Nonlinear Inference in Partially Observed Physical Systems and Deep Neural Networks
NASA Astrophysics Data System (ADS)
Rozdeba, Paul J.
The problem of model state and parameter estimation is a significant challenge in nonlinear systems. Due to practical considerations of experimental design, it is often the case that physical systems are partially observed, meaning that data is only available for a subset of the degrees of freedom required to fully model the observed system's behaviors and, ultimately, predict future observations. Estimation in this context is highly complicated by the presence of chaos, stochasticity, and measurement noise in dynamical systems. One of the aims of this dissertation is to analyze state and parameter estimation simultaneously as a regularized inverse problem, where the introduction of a model makes it possible to reverse the forward problem of partial, noisy observation, and as a statistical inference problem using data assimilation to transfer information from measurements to the model states and parameters. Ultimately these two formulations achieve the same goal. Similar aspects that appear in both are highlighted as a means for better understanding the structure of the nonlinear inference problem. An alternative approach to data assimilation that uses model reduction is then examined as a way to eliminate unresolved nonlinear gating variables from neuron models. In this formulation, only measured variables enter into the model, and the resulting errors are themselves modeled by nonlinear stochastic processes with memory. Finally, variational annealing, a data assimilation method previously applied to dynamical systems, is introduced as a potentially useful tool for understanding deep neural network training in machine learning by exploiting similarities between the two problems.
Xia, Fei; Jin, Guoqing
2014-06-01
PKNOTS is one of the best-known benchmark programs for predicting RNA secondary structure including pseudoknots. It adopts the standard four-dimensional (4D) dynamic programming (DP) method and is the basis of many variants and improved algorithms. Unfortunately, the O(N^6) computing requirements and complicated data dependency greatly limit the usefulness of the PKNOTS package given the explosion in gene database size. In this paper, we present a fine-grained parallel PKNOTS package and prototype system for accelerating the RNA folding application based on an FPGA chip. We adopted a series of storage optimization strategies to resolve the "Memory Wall" problem. We aggressively exploit parallel computing strategies to improve computational efficiency. We also propose several methods that collectively reduce the storage requirements for FPGA on-chip memory. To the best of our knowledge, our design is the first FPGA implementation to accelerate a 4D DP problem for RNA folding including pseudoknots. The experimental results show an average speedup of more than 50x over the PKNOTS-1.08 software running on a PC platform with an Intel Core2 Q9400 Quad CPU for input RNA sequences. Moreover, the power consumption of our FPGA accelerator is only about 50% of that of the general-purpose microprocessors.
NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
A Comparison of PETSC Library and HPF Implementations of an Archetypal PDE Computation
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Keyes, David E.; Mehrotra, Piyush
1997-01-01
Two paradigms for distributed-memory parallel computation that free the application programmer from the details of message passing are compared for an archetypal structured scientific computation: a nonlinear, structured-grid partial differential equation boundary value problem, using the same algorithm on the same hardware. Both paradigms, parallel libraries represented by Argonne's PETSc, and parallel languages represented by the Portland Group's HPF, are found to be easy to use for this problem class, and both are reasonably effective in exploiting concurrency after a short learning curve. The level of involvement required by the application programmer under either paradigm includes specification of the data partitioning (corresponding to a geometrically simple decomposition of the domain of the PDE). Programming in SPMD style for the PETSc library requires writing the routines that discretize the PDE and its Jacobian, managing subdomain-to-processor mappings (affine global-to-local index mappings), and interfacing to library solver routines. Programming for HPF requires a complete sequential implementation of the same algorithm, introducing concurrency through subdomain blocking (an effort similar to the index mapping), and modest experimentation with rewriting loops to elucidate to the compiler the latent concurrency. Correctness and scalability are cross-validated on up to 32 nodes of an IBM SP2.
Skipworth, R J E; Terrace, J D; Fulton, L A; Anderson, D N
2008-11-01
Imposed reductions in working hours will impact significantly on the ability of surgical trainees to achieve competency. The objective of this study was to obtain the opinions of Scottish surgical trainees concerning the training they receive, in order to inform and guide the development of future, high-standard training programmes. An anonymous questionnaire was sent to basic surgical trainees on the Edinburgh, Aberdeen and Dundee Basic Surgical Rotations commencing after August 2002. Thirty-six questionnaire responses were analysed. Very few of the returned comments were complimentary to the existing training structure; indeed, most comments demonstrated significant trainee disappointment. Despite "regular" exposure to operative sessions, training tutorials and named consultant trainers, the most common concern was a perceived lack of high-quality, structured, operative exposure and responsibility. Textbooks and journals remain the most frequently utilised learning tools, with high-tech systems such as teleconferencing, videos, CD-ROMs, and DVDs being poorly exploited. Current surgical training is not meeting the expectations of the majority of its trainees. Solving this problem will require extensive revision of attitudes and of the current educational format. A greater emphasis on the integration of 21st-century learning tools in the training programme may help bridge this gap.
The Significance of G Protein-Coupled Receptor Crystallography for Drug Discovery
Salon, John A.; Lodowski, David T.
2011-01-01
Crucial as molecular sensors for many vital physiological processes, seven-transmembrane domain G protein-coupled receptors (GPCRs) comprise the largest family of proteins targeted by drug discovery. Together with structures of the prototypical GPCR rhodopsin, solved structures of other liganded GPCRs promise to provide insights into the structural basis of the superfamily's biochemical functions and assist in the development of new therapeutic modalities and drugs. One of the greatest technical and theoretical challenges to elucidating and exploiting structure-function relationships in these systems is the emerging concept of GPCR conformational flexibility and its cause-effect relationship for receptor-receptor and receptor-effector interactions. Such conformational changes can be subtle and triggered by relatively small binding energy effects, leading to full or partial efficacy in the activation or inactivation of the receptor system at large. Pharmacological dogma generally dictates that these changes manifest themselves through kinetic modulation of the receptor's G protein partners. Atomic resolution information derived from increasingly available receptor structures provides an entrée to the understanding of these events and to their practical application in drug design. Supported by structure-activity relationship information arising from empirical screening, a unified structural model of GPCR activation/inactivation promises to both accelerate drug discovery in this field and improve our fundamental understanding of structure-based drug design in general. This review discusses fundamental problems that persist in drug design and GPCR structural determination. PMID:21969326
Design and performance of optimal detectors for guided wave structural health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dib, G.; Udpa, L.
2016-01-01
Ultrasonic guided wave measurements in a long-term structural health monitoring system are affected by measurement noise, environmental conditions, transducer aging and malfunction. This results in measurement variability which affects detection performance, especially in complex structures where baseline data comparison is required. This paper derives the optimal detector structure, within the framework of detection theory, where a guided wave signal at the sensor is represented by a single feature value that can be used for comparison with a threshold. Three different types of detectors are derived depending on the underlying structure's complexity: (i) simple structures where defect reflections can be identified without the need for baseline data; (ii) simple structures that require baseline data due to overlap of defect scatter with scatter from structural features; (iii) complex structures with dense structural features that require baseline data. The detectors are derived by modeling the effects of variabilities and uncertainties as random processes. Analytical solutions for the performance of detectors in terms of the probability of detection and false alarm are derived. A finite element model is used to generate guided wave signals, and the performance results of a Monte-Carlo simulation are compared with the theoretical performance. Initial results demonstrate that the problems of signal complexity and environmental variability can in fact be exploited to improve detection performance.
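In the simplest Gaussian setting, a single-feature threshold detector of the kind described reduces to a matched filter, whose performance follows the classical relation PD = Q(Q^{-1}(PFA) - d') with deflection d' = ||s||/sigma; the sketch below checks that relation by simulation and is a textbook illustration under these assumptions, not the paper's guided-wave detector:

```python
import numpy as np
from scipy.stats import norm

# Matched-filter detection of a known signal s in white Gaussian noise:
# the optimal statistic is <y, s>, and PD = Q(Q^{-1}(PFA) - d').
def pd_theory(pfa, dprime):
    return norm.sf(norm.isf(pfa) - dprime)

rng = np.random.default_rng(3)
n, sigma = 128, 1.0
s = np.sin(2*np.pi*5*np.arange(n)/n)
s *= 2.0 / np.linalg.norm(s)            # scale so that d' = ||s||/sigma = 2
trials = 20000
noise = rng.standard_normal((trials, n)) * sigma
T0 = noise @ s                          # statistic under H0 (no defect)
T1 = (noise + s) @ s                    # statistic under H1 (defect present)
thr = np.quantile(T0, 0.99)             # threshold for PFA = 1%
print("empirical PD:", (T1 > thr).mean(), " theory:", pd_theory(0.01, 2.0))
```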
Exploiting the On-Campus Boiler House.
ERIC Educational Resources Information Center
Woods, Donald R.; And Others
1986-01-01
Shows how a university utility building ("boiler house") is used in a chemical engineering course for computer simulations, mathematical modeling and process problem exercises. Student projects involving the facility are also discussed. (JN)
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
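The rank-one (sum of outer products) block coordinate descent can be sketched with closed-form sparse-coefficient and atom updates in the spirit of the approach described; the penalty level, dictionary size, and random data below are illustrative assumptions:

```python
import numpy as np

def soup_bcd(Y, J=32, lam=4.0, iters=10, rng=None):
    # Approximate Y ~ D @ C.T as a sum of sparse rank-one outer products,
    # updating one (atom, coefficient) pair at a time in closed form:
    #   c_j <- hard-threshold(E^T d_j),  d_j <- E c_j / ||E c_j||,
    # where E is the residual with atom j removed.
    rng = rng or np.random.default_rng(0)
    n, N = Y.shape
    D = rng.standard_normal((n, J)); D /= np.linalg.norm(D, axis=0)
    C = np.zeros((N, J))
    for _ in range(iters):
        for j in range(J):
            E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
            c = E.T @ D[:, j]
            c[np.abs(c) < lam] = 0.0         # hard threshold (aggregate l0 spirit)
            d = E @ c
            if np.linalg.norm(d) > 0:
                D[:, j] = d / np.linalg.norm(d)
            C[:, j] = c
    return D, C

Y = np.random.default_rng(1).standard_normal((64, 500))
D, C = soup_bcd(Y)
print("relative fit:", np.linalg.norm(Y - D @ C.T) / np.linalg.norm(Y))
```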
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111
Exploiting the Dynamics of Soft Materials for Machine Learning
Nakajima, Kohei; Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-01-01
Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft materials to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world. PMID:29708857
Exploiting the Dynamics of Soft Materials for Machine Learning.
Nakajima, Kohei; Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-06-01
Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft materials to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world.
NASA Astrophysics Data System (ADS)
Nan, Tongchao; Li, Kaixuan; Wu, Jichun; Yin, Lihe
2018-04-01
Sustainability has been one of the key criteria of effective water exploitation. Groundwater exploitation and water-table decline at the Haolebaoji water source site in the Ordos basin in NW China have drawn public attention due to concerns about potential threats to ecosystems and grazing land in the area. To better investigate the impact of production wells at Haolebaoji on the water table, an adapted algorithm called the random walk on grid method (WOG) is applied to simulate the hydraulic head in the unconfined and confined aquifers. This is the first attempt to apply WOG to a real groundwater problem. The method can evaluate not only the head values but also the contributions made by each source/sink term. One can analyze the impact of source/sink terms just as if an analytical solution were available. The head values evaluated by WOG match the values derived from the software Groundwater Modeling System (GMS). This suggests that WOG is effective and applicable in a heterogeneous aquifer with respect to practical problems, and the resultant information is useful for groundwater management.
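The random-walk idea can be illustrated on the simplest case, a 5-point Laplace problem with prescribed boundary heads, where the value at a node is the expected boundary value at the walk's first exit; the grid size, boundary function, and walk count below are toy choices of ours, not the adapted WOG algorithm of the paper:

```python
import numpy as np

# Solve the 5-point discrete Laplace equation on a square grid by random
# walks: the value at a node equals the expected boundary value where a
# simple random walk started there first hits the boundary.
rng = np.random.default_rng(4)
n = 21
def boundary(i, j):                     # prescribed heads on the edges
    x, y = i/(n-1), j/(n-1)
    return x*x - y*y                    # discretely harmonic, so also the solution

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walk_estimate(i0, j0, walks=4000):
    total = 0.0
    for _ in range(walks):
        i, j = i0, j0
        while 0 < i < n-1 and 0 < j < n-1:
            di, dj = moves[rng.integers(4)]
            i += di; j += dj
        total += boundary(i, j)         # score the boundary head at exit
    return total / walks

i0, j0 = 5, 12
print("walk estimate:", walk_estimate(i0, j0), " exact:", boundary(i0, j0))
```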
Exploiting semantics for sensor re-calibration in event detection systems
NASA Astrophysics Data System (ADS)
Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini
2008-01-01
Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it is still a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing only, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which cannot be "seen" directly by image processing. In this work we demonstrate that time sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example (coffee pot level detection based on video data) to show that semantics can guide the re-calibration of the detection model. This work exploits time sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first class of errors is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the third is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering brought some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
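A stripped-down version of the two alternating procedures (spatial low-pass filtering and restoration of known transform coefficients) is sketched below, using a DCT in place of the wavelet transform and a uniform filter in place of the paper's edge-adaptive mask; both substitutions are our own simplifications, so this only illustrates the iteration rather than the paper's performance:

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(5)
img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img[24:40, 24:40] += 1.0                          # edge-rich region
coef = dctn(img, norm="ortho")
highband = np.add.outer(np.arange(64), np.arange(64)) > 16
mask = highband & (rng.random(coef.shape) < 0.2)  # damaged high-band coefficients
corrupted = coef.copy(); corrupted[mask] = 0.0

x = idctn(corrupted, norm="ortho")
for _ in range(30):
    x = uniform_filter(x, size=3)                 # procedure 1: spatial low-pass
    c = dctn(x, norm="ortho")
    c[~mask] = coef[~mask]                        # procedure 2: keep known coefficients
    x = idctn(c, norm="ortho")

print("error before:", np.linalg.norm(idctn(corrupted, norm="ortho") - img))
print("error after: ", np.linalg.norm(x - img))
```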
NASA Astrophysics Data System (ADS)
Birgin, Ernesto G.; Ronconi, Débora P.
2012-10-01
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
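One simple constructive heuristic of the general kind investigated is cheapest insertion over a ready-time ordering; the sketch below evaluates the weighted earliness/tardiness cost of a sequence and inserts jobs greedily. The random instance and the insertion rule are illustrative assumptions of ours, not the paper's specific heuristics:

```python
import numpy as np

def schedule_cost(seq, r, p, d, alpha, beta):
    # Sequence jobs in the given order; each starts when both the machine
    # and the job are available.  Cost = weighted earliness + tardiness.
    t, cost = 0.0, 0.0
    for j in seq:
        t = max(t, r[j]) + p[j]
        cost += alpha[j]*max(d - t, 0.0) + beta[j]*max(t - d, 0.0)
    return cost

def greedy_insertion(r, p, d, alpha, beta):
    # Constructive heuristic: take jobs in ready-time order and insert each
    # at the position that increases the cost the least.
    seq = []
    for j in np.argsort(r):
        best = min(range(len(seq) + 1),
                   key=lambda k: schedule_cost(seq[:k] + [j] + seq[k:],
                                               r, p, d, alpha, beta))
        seq.insert(best, j)
    return seq

rng = np.random.default_rng(6)
n = 12
r = rng.uniform(0, 20, n); p = rng.uniform(1, 10, n)
d = 40.0                                  # common due date
alpha = rng.uniform(1, 5, n); beta = rng.uniform(1, 5, n)
seq = greedy_insertion(r, p, d, alpha, beta)
print(seq, schedule_cost(seq, r, p, d, alpha, beta))
```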
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
The consequences of balanced harvesting of fish communities
Jacobsen, Nis S.; Gislason, Henrik; Andersen, Ken H.
2014-01-01
Balanced harvesting, where species or individuals are exploited in accordance with their productivity, has been proposed as a way to minimize the effects of fishing on marine fish communities and ecosystems. This calls for a thorough examination of the consequences balanced harvesting has on fish community structure and yield. We use a size- and trait-based model that resolves individual interactions through competition and predation to compare balanced harvesting with traditional selective harvesting, which protects juvenile fish from fishing. Four different exploitation patterns, generated by combining selective or unselective harvesting with balanced or unbalanced fishing, are compared. We find that unselective balanced fishing, where individuals are exploited in proportion to their productivity, produces a slightly larger total maximum sustainable yield than the other exploitation patterns and, for a given yield, the least change in the relative biomass composition of the fish community. Because fishing reduces competition, predation and cannibalism within the community, the total maximum sustainable yield is achieved at high exploitation rates. The yield from unselective balanced fishing is dominated by small individuals, whereas selective fishing produces a much higher proportion of large individuals in the yield. Although unselective balanced fishing is predicted to produce the highest total maximum sustainable yield and the lowest impact on trophic structure, it is effectively a fishery predominantly targeting small forage fish. PMID:24307676
Structured penalties for functional linear models-partially empirical eigenvectors for regression.
Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding
2012-01-01
One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
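The penalized estimator the GSVD framework analyzes has the familiar form beta = argmin ||y - Xb||^2 + lambda ||Pb||^2, which can be solved as a stacked least-squares problem; the second-difference penalty and synthetic data below are illustrative assumptions, and the GSVD analysis itself is not computed here:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 80, 200
t = np.linspace(0, 1, p)
beta_true = np.exp(-(t - 0.4)**2 / 0.01)   # smooth coefficient function
X = rng.standard_normal((n, p))            # predictor "functions" (rows)
y = X @ beta_true + 0.1*rng.standard_normal(n)

P = np.diff(np.eye(p), 2, axis=0)          # second-difference penalty operator
lam = 10.0
# argmin ||y - X b||^2 + lam ||P b||^2  via the augmented system [X; sqrt(lam) P].
Xs = np.vstack([X, np.sqrt(lam)*P])
ys = np.concatenate([y, np.zeros(P.shape[0])])
beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print("correlation with truth:", np.corrcoef(beta, beta_true)[0, 1])
```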
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems typically issued from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take into account mechanical constraints. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we consider updating an existing algebraic or application-based preconditioner, using specific available information that exploits the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
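A deflation-style update conveys the flavor of augmenting a first-level preconditioner with a few vectors spanning an approximate invariant subspace; the formula H + W (W^T A W)^{-1} W^T below is a generic coarse correction in the same spirit, not the paper's exact limited-memory quasi-Newton form, and the subspace is taken exact for the demo:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.linspace(0.001, 0.01, 5), np.linspace(1, 2, n - 5)])
A = (Q * eigs) @ Q.T                     # SPD with 5 tiny outlier eigenvalues

H = np.diag(1.0 / np.diag(A))            # cheap first-level preconditioner
W = Q[:, :5]                             # (approximate) invariant subspace
H_new = H + W @ np.linalg.inv(W.T @ A @ W) @ W.T   # limited-memory update

# The update lifts the troublesome small eigencomponents toward 1,
# sharply reducing the conditioning of the preconditioned operator.
for P in (H, H_new):
    print(np.linalg.cond(P @ A))
```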
Conditional random fields for pattern recognition applied to structured data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Skurikhin, Alexei
In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is "manmade" (such as a building) or "natural" (such as a tree). Suppose the label for a pixel patch is "manmade"; if the label for a nearby pixel patch is then more likely to be "manmade", there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
Conditional random fields for pattern recognition applied to structured data
Burr, Tom; Skurikhin, Alexei
2015-07-14
In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is "manmade" (such as a building) or "natural" (such as a tree). Suppose the label for a pixel patch is "manmade"; if the label for a nearby pixel patch is then more likely to be "manmade", there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
Gerassi, Lara
2015-01-01
In the last 15 years, terms such as prostitution, sex trafficking, sexual exploitation, modern-day slavery, and sex work have elicited much confusion and debate as to their definitions. Consequently, several challenges have emerged for both law enforcement in the prosecution of criminals and practitioners in service provision. This article reviews the state of the literature with regard to domestic, sexual exploitation among women and girls in the United States and seeks to (1) provide definitions and describe the complexity of all terms relating to domestic sexual exploitation of women and girls in the United States, (2) explore available national prevalence data according to the definitions provided, and (3) review the evidence of mental health, social, and structural risk factors at the micro-, mezzo-, and macrolevels. PMID:26726289
Gerassi, Lara
In the last 15 years, terms such as prostitution, sex trafficking, sexual exploitation, modern-day slavery, and sex work have elicited much confusion and debate as to their definitions. Consequently, several challenges have emerged for both law enforcement in the prosecution of criminals and practitioners in service provision. This article reviews the state of the literature with regard to domestic, sexual exploitation among women and girls in the United States and seeks to (1) provide definitions and describe the complexity of all terms relating to domestic sexual exploitation of women and girls in the United States, (2) explore available national prevalence data according to the definitions provided, and (3) review the evidence of mental health, social, and structural risk factors at the micro-, mezzo-, and macrolevels.
Problem-Based Learning: Exploiting Knowledge of How People Learn to Promote Effective Learning
ERIC Educational Resources Information Center
Wood, E. J.
2004-01-01
There is much information from educational psychology studies on how people learn. The thesis of this paper is that we should use this information to guide the ways in which we teach rather than blindly using our traditional methods. In this context, problem-based learning (PBL), as a method of teaching widely used in medical schools but…
An Overview of the Labor Market Problems of Indians and Native Americans. Research Report No. 89-02.
ERIC Educational Resources Information Center
Ainsworth, Robert G.
This booklet provides an overview of the labor market problems facing Indians and Native Americans, the most economically disadvantaged ethnic group in the United States. It summarizes Indian policy, particularly major policies and laws that relate to early trade restrictions and the exploitation of Indians through trade; their forced removal from…
Interactions of antiparasitic sterols with sterol 14α-demethylase (CYP51) of human pathogens.
Warfield, Jasmine; Setzer, William N; Ogungbe, Ifedayo Victor
2014-01-01
Sterol 14α-demethylase is a validated and attractive drug target in human protozoan parasites. Pharmacological inactivation of this important enzyme has proven very effective against fungal infections, and it is a target being exploited for new antitrypanosomal and antileishmanial chemotherapy. We have used in silico calculations to identify previously reported antiparasitic sterol-like compounds and their structural congeners that have preferential and high docking affinity for CYP51. The sterol 14α-demethylases from Trypanosoma cruzi and Leishmania infantum, in particular, preferentially dock to taraxerol, epi-oleanolic acid, and α/β-amyrin structural scaffolds. This structural information and the predicted interactions can be exploited for fragment/structure-based antiprotozoal drug design.
Spectrally-balanced chromatic approach-lighting system
NASA Technical Reports Server (NTRS)
Chase, W. D.
1977-01-01
Approach lighting system employing combinations of red and blue lights reduces the problem of color-based optical illusions. The system exploits the inherent chromatic aberration of the eye to create a three-dimensional effect, giving the pilot visual cues of position.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., transferred for sale, used or transferred for personal gain, or used or transferred for any commercial or fund... exploitation by the recipients, or create problems with respect to good taste; or that are large, bulky, or...
Code of Federal Regulations, 2010 CFR
2010-01-01
..., transferred for sale, used or transferred for personal gain, or used or transferred for any commercial or fund... exploitation by the recipients, or create problems with respect to good taste; or that are large, bulky, or...
Bashan, Anat; Yonath, Ada
2009-01-01
Crystallography of ribosomes, the universal cell nucleoprotein assemblies facilitating the translation of the genetic code into proteins, met with severe problems owing to their large size, complex structure, inherent flexibility and high conformational variability. For the case of the small ribosomal subunit, which caused extreme difficulties, post-crystallization treatment with minute amounts of a heteropolytungstate cluster allowed structure determination at atomic resolution. This cluster played a dual role in ribosomal crystallography: providing anomalous phasing power and dramatically increasing the resolution by stabilizing a selected functional conformation. Thus, four out of the fourteen clusters that bind to each of the crystallized small subunits are attached to a specific ribosomal protein in a fashion that may control a significant component of the subunit's internal flexibility, by "gluing" symmetry-related subunits. Here we highlight basic issues in the relationship between metal ions and macromolecules and present common traits governing the interactions between polymetalates and various macromolecules, which may be extended towards the exploitation of polymetalates for therapeutic treatment. PMID:19915655
PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
Zhao, Ji; Guo, Yue; He, Wenhao; Yuan, Kui
2018-01-01
To address the problem of estimating camera trajectory and to build a structural three-dimensional (3D) map based on inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system exploiting both point and line features. Compared with point features, lines provide significantly more geometrical structure information on the environment. To obtain both computation simplicity and representational compactness of a 3D spatial line, Plücker coordinates and orthonormal representation for the line are employed. To tightly and efficiently fuse the information from inertial measurement units (IMUs) and visual sensors, we optimize the states by minimizing a cost function which combines the pre-integrated IMU error term together with the point and line re-projection error terms in a sliding window optimization framework. The experiments evaluated on public datasets demonstrate that the PL-VIO method that combines point and line features outperforms several state-of-the-art VIO systems which use point features only. PMID:29642648
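The Plücker representation of a 3D spatial line used by such line-based front ends can be sketched in a few lines; the construction from two points and the point-to-line distance below are standard formulas (the orthonormal four-parameter representation the paper optimizes over is not shown):

```python
import numpy as np

# Plücker coordinates of the 3D line through points p and q:
# moment m = p x q and direction d = q - p (note m = p x d as well).
def plucker_from_points(p, q):
    return np.concatenate([np.cross(p, q), q - p])

def point_line_distance(line, x):
    m, d = line[:3], line[3:]
    # (x - p) x d = x x d - m, so the distance is ||x x d - m|| / ||d||.
    return np.linalg.norm(np.cross(x, d) - m) / np.linalg.norm(d)

p, q = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])
L = plucker_from_points(p, q)            # the line {y = 0, z = 1}
print(point_line_distance(L, np.array([0.5, 2.0, 1.0])))   # -> 2.0
```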
The Structural Invisibility of Outsiders: The Role of Migrant Labour in the Meat-Processing Industry
Lever, John; Milbourne, Paul
2016-01-01
This article examines the role of migrant workers in meat-processing factories in the UK. Drawing on materials from mixed methods research in a number of case study towns across Wales, we explore the structural and spatial processes that position migrant workers as outsiders. While state policy and immigration controls are often presented as a way of protecting migrant workers from work-based exploitation and ensuring jobs for British workers, our research highlights that the situation ‘on the ground’ is more complex. We argue that ‘self-exploitation’ among the migrant workforce is linked to the strategies of employers and the organisation of work, and that hyper-flexible work patterns have reinforced the spatial and social invisibilities of migrant workers in this sector. While this creates problems for migrant workers, we conclude that it is beneficial to supermarkets looking to supply consumers with the regular supply of cheap food to which they have become accustomed. PMID:28490818
Morphew, Daniel; Shaw, James; Avins, Christopher; Chakrabarti, Dwaipayan
2018-03-27
Colloidal self-assembly is a promising bottom-up route to a wide variety of three-dimensional structures, from clusters to crystals. Programming hierarchical self-assembly of colloidal building blocks, which can give rise to structures ordered at multiple levels to rival biological complexity, poses a multiscale design problem. Here we explore a generic design principle that exploits a hierarchy of interaction strengths and employ this design principle in computer simulations to demonstrate the hierarchical self-assembly of triblock patchy colloidal particles into two distinct colloidal crystals. We obtain cubic diamond and body-centered cubic crystals via distinct clusters of uniform size and shape, namely, tetrahedra and octahedra, respectively. Such a conceptual design framework has the potential to reliably encode hierarchical self-assembly of colloidal particles into a high level of sophistication. Moreover, the design framework underpins a bottom-up route to cubic diamond colloidal crystals, which have remained elusive despite being much sought after for their attractive photonic applications.
Energy Reduction Effect of the South-to-North Water Diversion Project in China.
Zhao, Yong; Zhu, Yongnan; Lin, Zhaohui; Wang, Jianhua; He, Guohua; Li, Haihong; Li, Lei; Wang, Hao; Jiang, Shan; He, Fan; Zhai, Jiaqi; Wang, Lizhen; Wang, Qingming
2017-11-21
The North China Plain, with a population of approximately 150 million, is facing severe water scarcity. The over-exploitation of groundwater in the region, with accumulated amounts reaching more than 150 billion m³, causes a series of hydrological and geological problems together with the consumption of a significant amount of energy. Here, we highlight the energy and greenhouse gas-related environmental co-benefits of the South-to-North Water Diversion Project (SNWDP). Moreover, we evaluate the energy-saving effect of the SNWDP on groundwater exploitation based on the groundwater-exploitation reduction program implemented by the Chinese government. Our results show that the transferred water will replace about 2.97 billion m³ of exploited groundwater in the water reception area by 2020 and hence reduce energy consumption by 931 million kWh. Further, by 2030, replacing 6.44 billion m³ of groundwater, which accounts for 27% of the current groundwater withdrawal, will save energy equivalent to approximately 7% of Beijing's current thermal power generation output.
Simultaneous gene finding in multiple genomes.
König, Stefanie; Romoth, Lars W; Gerischer, Lizzy; Stanke, Mario
2016-11-15
As the tree of life is populated with sequenced genomes ever more densely, the new challenge is the accurate and consistent annotation of entire clades of genomes. We address this problem with a new approach to comparative gene finding that takes a multiple genome alignment of closely related species and simultaneously predicts the location and structure of protein-coding genes in all input genomes, thereby exploiting negative selection and sequence conservation. The model prefers potential gene structures in the different genomes that are in agreement with each other, or, if not, where the exon gains and losses are plausible given the species tree. We formulate the multi-species gene finding problem as a binary labeling problem on a graph. The resulting optimization problem is NP-hard, but can be efficiently approximated using a subgradient-based dual decomposition approach. The proposed method was tested on whole-genome alignments of 12 vertebrate and 12 Drosophila species. The accuracy was evaluated for human, mouse and Drosophila melanogaster and compared to competing methods. Results suggest that our method is well suited for annotation of (a large number of) genomes of closely related species within a clade, in particular when RNA-Seq data are available for many of the genomes. The transfer of existing annotations from one genome to another via the genome alignment is more accurate than previous approaches that are based on protein-spliced alignments, when the genomes are at close to medium distances. The method is implemented in C++ as part of Augustus and available open source at http://bioinf.uni-greifswald.de/augustus/ Contact: stefaniekoenig@ymail.com or mario.stanke@uni-greifswald.de Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Graph Design via Convex Optimization: Online and Distributed Perspectives
NASA Astrophysics Data System (ADS)
Meng, De
Networks and graphs have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits, etc. As a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems, but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism for computation and operation as well as limiting the size of local problems. More interestingly, graphs can be defined and constructed in order to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, which establish a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. We introduce the Geodesic Distance Maximization Problem (GDMP): the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) by exploiting specific problem structures to solve large-scale GDMP, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination. Distributed optimization aims to optimize a global objective function formed by the summation of coupled local functions over a graph via only local communication and computation. We develop a weighted proximal ADMM for distributed optimization that uses the graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game, player one first chooses a candidate edge and reveals it to player two; then player two decides whether to accept it. Player two can only accept a limited number of edges and must make online decisions, with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game, and analyze the algorithm performance by the competitive ratio and regret.
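The GDMP described above can be written as a joint linear program over edge weights and node potentials, since the s-t shortest-path length equals max_u (u_t - u_s) subject to u_j - u_i <= w_ij; the tiny instance and the budget/box constraints below are our own example (the optimal value works out to 5 here), using cvxpy, which is assumed available:

```python
import cvxpy as cp

# Maximize the s-t geodesic length over edge weights w in a convex set.
# LP duality for shortest paths:  geodesic = max_u u[t] - u[s]
# subject to u[j] - u[i] <= w_e for every directed edge e = (i, j),
# so we jointly optimize over (w, u).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
s, t, m, n = 0, 3, len(edges), 4

w = cp.Variable(m, nonneg=True)          # edge weights (design variables)
u = cp.Variable(n)                       # node potentials
cons = [u[j] - u[i] <= w[e] for e, (i, j) in enumerate(edges)]
cons += [cp.sum(w) <= 10, w <= 4]        # convex constraints: budget and box
prob = cp.Problem(cp.Maximize(u[t] - u[s]), cons)
prob.solve()
print(prob.value, w.value.round(3))      # geodesic length ~ 5.0 for this instance
```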
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, Oliver S., E-mail: osmart@globalphasing.com; Womack, Thomas O.; Flensburg, Claus
2012-04-01
Local structural similarity restraints (LSSR) provide a novel method for exploiting NCS or structural similarity to an external target structure. Two examples are given where BUSTER re-refinement of PDB entries with LSSR produces marked improvements, enabling further structural features to be modelled. Maximum-likelihood X-ray macromolecular structure refinement in BUSTER has been extended with restraints facilitating the exploitation of structural similarity. The similarity can be between two or more chains within the structure being refined, thus favouring NCS, or to a distinct 'target' structure that remains fixed during refinement. The local structural similarity restraints (LSSR) approach considers all distances less than 5.5 Å between pairs of atoms in the chain to be restrained. For each, the difference from the distance between the corresponding atoms in the related chain is found. LSSR applies a restraint penalty on each difference. A functional form that reaches a plateau for large differences is used to avoid the restraints distorting parts of the structure that are not similar. Because LSSR are local, there is no need to separate out domains. Some restraint pruning is still necessary, but this has been automated. LSSR have been available to academic users of BUSTER since 2009 with the easy-to-use -autoncs and -target target.pdb options. The use of LSSR is illustrated in the re-refinement of PDB entries http://scripts.iucr.org/cgi-bin/cr.cgi?rm, where -target enables the correct ligand-binding structure to be found, and http://scripts.iucr.org/cgi-bin/cr.cgi?rm, where -autoncs contributes to the location of an additional copy of the cyclic peptide ligand.
Shields, Ryan T; Letourneau, Elizabeth J
2015-03-01
Commercial sexual exploitation of children is an enduring social problem that has recently become the focus of numerous legislative initiatives. In particular, recent federal- and state-level legislation has sought to reclassify youth involved in commercial sexual exploitation as victims rather than as offenders. So-called Safe Harbor laws have been developed, centered on the decriminalization of "juvenile prostitution." In addition to or instead of decriminalization, Safe Harbor policies also include diversion, law enforcement training, and increased penalties for adults seeking sexual contact with minors. The purpose of this paper is to review the underlying rationale of Safe Harbor laws, examine specific policy responses currently enacted by the states, and consider the effects of policy variations. Directions for future research and policy are addressed.
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang
2016-04-01
Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which lead to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although sparse representation theory has made much progress in the feature extraction of fault information, it also confronts inevitable performance degradation because relatively weak fault information does not have a sufficiently prominent and sparse representation. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem is formulated, which can be solved with the block coordinate descent (BCD) method. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms
Helms, Lucas; Clune, Jeff
2017-01-01
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding. PMID:28334002
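The Auto-Switch idea fits in a few lines. The sketch below is a hedged illustration in which the hypothetical evolve_indirect, evolve_direct and fitness callables stand in for a real evolutionary algorithm:

```python
def auto_switch_hybrid(pop, evolve_indirect, evolve_direct, fitness,
                       generations=500, patience=20):
    """Sketch of Auto-Switch-HybrID: evolve with the indirect encoding until
    the best fitness stagnates for `patience` generations, then switch to
    the direct encoding for the rest of the run."""
    best, stale, direct = max(map(fitness, pop)), 0, False
    for _ in range(generations):
        pop = evolve_direct(pop) if direct else evolve_indirect(pop)
        cur = max(map(fitness, pop))
        best, stale = (cur, 0) if cur > best else (best, stale + 1)
        if not direct and stale >= patience:
            direct, stale = True, 0    # fitness stagnated: switch encodings
    return pop, best
```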
Nanotechnology: From Science Fiction to Reality
NASA Technical Reports Server (NTRS)
Siochi, Mia
2016-01-01
Nanotechnology promises unconventional solutions to challenging problems because of expectations that matter can be manipulated at the atomic scale to yield properties that exceed those predicted for bulk materials. The excitement at this possibility has been fueled by significant investments in this technology area. This talk will focus on three examples of where advances are being made to exploit unique properties made possible by nanoscale features for aerospace applications. The first two topics will involve the development of carbon nanotubes for (a) lightweight structural applications and (b) net shape fabricated multifunctional components. The third topic will highlight lessons learned from the demonstration of the effect of nanoengineered surfaces on insect residue adhesion. In all three cases, the approaches used to mature these emerging technologies are based on the acceleration of technology development through multidisciplinary collaborations.
NASA Astrophysics Data System (ADS)
Adhikary, Ramkrishna; Bose, Sayantan; Casey, Thomas A.; Gapsch, Al; Rasmussen, Mark A.; Petrich, Jacob W.
2010-02-01
Applications of fluorescence spectroscopy that enable the real-time or rapid detection of fecal contamination on beef carcasses and the presence of central nervous system tissue in meat products are discussed. The former is achieved by employing spectroscopic signatures of chlorophyll metabolites; the latter, by exploiting the characteristic structure and intensity of lipofuscin in central nervous system tissue. The success of these techniques has led us to investigate the possibility of diagnosing scrapie in sheep by obtaining fluorescence spectra of the retina. Crucial to this diagnosis is the ability to obtain baseline correlations of lipofuscin fluorescence with age. A murine model was employed as a proof of principle of this correlation.
Exploiting Information Diffusion Feature for Link Prediction in Sina Weibo
NASA Astrophysics Data System (ADS)
Li, Dong; Zhang, Yongchao; Xu, Zhiming; Chu, Dianhui; Li, Sheng
2016-01-01
The rapid development of online social networks (e.g., Twitter and Facebook) has promoted research related to social networks in which link prediction is a key problem. Although numerous attempts have been made for link prediction based on network structure, node attribute and so on, few of the current studies have considered the impact of information diffusion on link creation and prediction. This paper mainly addresses Sina Weibo, which is the largest microblog platform with Chinese characteristics, and proposes the hypothesis that information diffusion influences link creation and verifies the hypothesis based on real data analysis. We also detect an important feature from the information diffusion process, which is used to promote link prediction performance. Finally, the experimental results on Sina Weibo dataset have demonstrated the effectiveness of our methods.
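To make the idea concrete, here is a toy scoring rule (a hedged illustration, not the paper's model) that augments a structural link-prediction feature with a diffusion feature counting how often two unlinked users took part in the same retweet cascade; the weight beta would normally be learned from data.

```python
from collections import defaultdict

def link_scores(adj, cascades, beta=1.0):
    """Score unlinked node pairs by common neighbours plus a diffusion term:
    the number of cascades (sets of participating users) shared by the pair."""
    co_diffuse = defaultdict(int)
    for cascade in cascades:
        users = sorted(cascade)
        for i, u in enumerate(users):
            for v in users[i + 1:]:
                co_diffuse[(u, v)] += 1
    scores, nodes = {}, sorted(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue                   # already linked
            cn = len(adj[u] & adj[v])      # common neighbours
            scores[(u, v)] = cn + beta * co_diffuse.get((u, v), 0)
    return scores

adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(link_scores(adj, [{"a", "c"}, {"a", "b", "c"}]))  # {('a', 'c'): 3.0}
```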
Study of genetic direct search algorithms for function optimization
NASA Technical Reports Server (NTRS)
Zeigler, B. P.
1974-01-01
Results are presented from a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from a lack of gradient exploitation facilities when gradient information could be used to guide the search; (2) for large populations, or low-dimensional function spaces, mutation is a sufficient operator, whereas for small populations or high-dimensional functions, crossover applied at about equal frequency with mutation is an optimal combination; (3) complexity, in terms of storage space and running time, increases significantly when the population size is increased or when the inversion operator or the second-level adaptation routine is added to the basic structure.
An Adaptive Immune Genetic Algorithm for Edge Detection
NASA Astrophysics Data System (ADS)
Li, Ying; Bai, Bendu; Zhang, Yanning
An adaptive immune genetic algorithm (AIGA) based on a cost minimization technique is proposed for edge detection. The proposed AIGA uses adaptive probabilities of crossover, mutation and immune operation, and a geometric annealing schedule in the immune operator, to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate when solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit prior knowledge of the local edge structure in the edge image to make vaccines, which gives AIGA much better local search ability than the canonical genetic algorithm. Experimental results on gray-scale images show that the proposed algorithm performs well in terms of the quality of the final edge image, rate of convergence and robustness to noise.
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
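For orientation, the matrix (two-way) special case of KL-divergence nonnegative factorization can be solved with the classical multiplicative updates of Lee and Seung; the paper's Newton and quasi-Newton subproblem solvers are designed to outperform baselines of this kind.

```python
import numpy as np

def kl_nmf(V, r, iters=200, eps=1e-9):
    """Classical multiplicative updates minimizing the Kullback-Leibler
    divergence D(V || WH) for a nonnegative matrix V (e.g. Poisson counts)."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

W, H = kl_nmf(np.random.poisson(3.0, size=(20, 15)).astype(float), r=4)
```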
NASA Astrophysics Data System (ADS)
Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan
2015-09-01
This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, in which the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three-field saddle point principle, whose Euler equations determine the evolution of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time-stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained, stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, yielding symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media, designed for the incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture, as demonstrated by a spectrum of model simulations.
The name of the game: a review of sexual exploitation of females in sport.
Bringer, J D; Brackenridge, C H; Johnston, L H
2001-12-01
Sexual harassment and abuse have been recognized problems in the workplace, schools, and residential homes for more than three decades. Many professional policies highlight the potential for abusing positions of trust, and therefore forbid intimate relationships between, for example, doctors and patients, and psychologists and clients. Yet abuse of power in the coach-athlete relationship has only recently been acknowledged. This paper discusses definitions of sexual exploitation, prevalence figures, methods used for entrapping athletes, potential risk factors, signs of abuse and harassment, and the consequences for survivors.
Construct exploit constraint in crash analysis by bypassing canary
NASA Astrophysics Data System (ADS)
Huang, Ning; Huang, Shuguang; Huang, Hui; Chang, Chao
2017-08-01
Selective symbolic execution is a common program-testing technique. Crash analysis systems built on it, such as CRAX, are often used to probe the fragility of a program by constructing exploit constraints. Studying crash analysis based on symbolic execution, this paper finds that the technique cannot bypass the canary stack-protection mechanism. We improve on this by using API hooking in Linux. Experimental results show that API hooking can effectively solve the problem that crash analysis otherwise cannot bypass canary protection.
Multi-Criteria Approach in Multifunctional Building Design Process
NASA Astrophysics Data System (ADS)
Gerigk, Mateusz
2017-10-01
The paper presents a new approach to the multifunctional building design process. It defines problems related to the design of complex multifunctional buildings. Contemporary urban areas are characterized by very intensive use of space: buildings are built bigger and contain more diverse functions to meet the needs of a large number of users within a single facility. These trends show the need to recognize design objects as an organized structure that must meet current design criteria. The design process, viewed as a complex system, is a theoretical model that forms the basis for optimizing solutions over the entire life cycle of the building. From the concept phase through the exploitation phase to the disposal phase, multipurpose spaces should guarantee aesthetics, functionality, system efficiency, system safety and environmental protection in the best possible way. The result of the analysis of the design process is presented as a theoretical model of the multifunctional structure. Recognizing the multi-criteria model in the form of a Cartesian product allows a holistic representation of the designed building to be created in the form of a graph model. The proposed network is a theoretical base that can be used in the design process of complex engineering systems. The systematic multi-criteria approach makes it possible to maintain control over the entire design process and to provide the best possible performance. With respect to current design requirements, there are no established design rules for multifunctional buildings in relation to their operating phase. Enriching the basic criteria with a functional flexibility criterion makes it possible to extend the exploitation phase, which brings advantages on many levels.
Martínez, Sergio; Sánchez, David; Valls, Aida
2013-04-01
Structured patient data like Electronic Health Records (EHRs) are a valuable source for clinical research. However, the sensitive nature of such information requires some anonymisation procedure to be applied before releasing the data to third parties. Several studies have shown that the removal of identifying attributes, like the Social Security Number, is not enough to obtain an anonymous data file, since unique combinations of other attributes, such as rare diagnoses and personalised treatments, may lead to disclosure of the patient's identity. To tackle this problem, Statistical Disclosure Control (SDC) methods have been proposed to mask sensitive attributes while preserving, up to a certain degree, the utility of anonymised data. Most of these methods focus on continuous-scale numerical data. Considering that part of the clinical data found in EHRs is expressed with non-numerical attributes, such as diagnoses, symptoms and procedures, their application to EHRs produces far from optimal results. In this paper, we propose a general framework to enable the accurate application of SDC methods to non-numerical clinical data, with a focus on the preservation of semantics. To do so, we exploit structured medical knowledge bases like SNOMED CT to propose semantically-grounded operators to compare, aggregate and sort non-numerical terms. Our framework has been applied to several well-known SDC methods and evaluated using a real clinical dataset with non-numerical attributes. Results show that the exploitation of medical semantics produces anonymised datasets that better preserve the utility of EHRs.
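A toy example of the kind of semantically-grounded comparison operator the framework needs, using shortest-path distance in a miniature is-a taxonomy as a stand-in for SNOMED CT (the codes below are hypothetical, not real SNOMED CT concepts):

```python
import networkx as nx

def semantic_distance(taxonomy_edges, a, b):
    """Distance between two clinical terms as their shortest-path length in
    an undirected view of an is-a taxonomy; smaller means more similar."""
    T = nx.Graph(taxonomy_edges)
    return nx.shortest_path_length(T, a, b)

edges = [("disorder", "infection"), ("infection", "viral_infection"),
         ("infection", "bacterial_infection")]
print(semantic_distance(edges, "viral_infection", "bacterial_infection"))  # 2
```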
Ng, Soon Hwee; Shankar, Shruti; Shikichi, Yasumasa; Akasaka, Kazuaki; Mori, Kenji; Yew, Joanne Y
2014-02-25
Animals exhibit a spectacular array of traits to attract mates. Understanding the evolutionary origins of sexual features and preferences is a fundamental problem in evolutionary biology, and the mechanisms remain highly controversial. In some species, females choose mates based on direct benefits conferred by the male to the female and her offspring. Thus, female preferences are thought to originate and coevolve with male traits. In contrast, sensory exploitation occurs when expression of a male trait takes advantage of preexisting sensory biases in females. Here, we document in Drosophila a previously unidentified example of sensory exploitation of males by other males through the use of the sex pheromone CH503. We use mass spectrometry, high-performance liquid chromatography, and behavioral analysis to demonstrate that an antiaphrodisiac produced by males of the melanogaster subgroup also is effective in distant Drosophila relatives that do not express the pheromone. We further show that species that produce the pheromone have become less sensitive to the compound, illustrating that sensory adaptation occurs after sensory exploitation. Our findings provide a mechanism for the origin of a sex pheromone and show that sensory exploitation changes male sexual behavior over evolutionary time.
The Problem of Multiple Criteria Selection of the Surface Mining Haul Trucks
NASA Astrophysics Data System (ADS)
Bodziony, Przemysław; Kasztelewicz, Zbigniew; Sawicki, Piotr
2016-06-01
Vehicle transport is a dominant type of technological process in rock mines, and its profitability depends strongly on the overall cost of exploitation, especially on diesel oil consumption. Thus, a rational design of a transportation system based on haul trucks should result from a thorough analysis of technical and economic issues, including both the cost of purchase and of further exploitation, which have a crucial impact on the cost of mineral extraction. Moreover, off-highway trucks should be selected with respect to all specific exploitation conditions and even the user's preferences and experience. In this paper, a universal family of evaluation criteria is developed and an evaluation method is applied to the haul truck selection process under specific exploitation conditions in surface mining. The methodology presented in the paper is based on the principles of multiple criteria decision aiding (MCDA), using one of the ranking methods, ELECTRE III. The applied methodology allows alternative solutions (variants) to be ranked on the considered set of haul trucks. The result of the research is a universal methodology that may consequently be applied in other surface mines with similar exploitation parameters.
Data mining tools for Sentinel 1 and Sentinel 2 data exploitation
NASA Astrophysics Data System (ADS)
Espinoza Molina, Daniela; Datcu, Mihai
2016-10-01
With the newly planned Sentinel missions, the availability of Earth Observation data is increasing every day, offering a larger number of applications that can be created using these data. Currently, three of the five missions have been launched and they are delivering a wealth of data and imagery of the Earth's surface; for example, Sentinel-1 carries an advanced radar instrument to provide an all-weather, day-and-night supply of Earth imagery, while the second mission, Sentinel-2, carries an optical instrument payload that samples 13 spectral bands at different resolutions. Even though we have tools for automated loading and visual exploration of the Sentinel data, we still face the problem of extracting relevant structures from the images, finding similar patterns in a scene, exploiting the data, and creating final user applications based on these processed data. In this paper, we present our approach for processing radar and multi-spectral Sentinel data. Our approach is mainly composed of three steps: 1) the generation of a data model that explains the information contained in a Sentinel product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; and 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback methods.
General health status of residents of the Selebi Phikwe Ni-Cu mine area, Botswana.
Ekosse, Georges
2005-10-01
Residents of the Selebi Phikwe area of Botswana, where nickel-copper (Ni-Cu) ore is exploited, often exhibit symptoms of ailments, sicknesses and diseases of varying degrees, so a need to investigate their general health status was evident. Primary data were obtained by means of a questionnaire and structured interviews conducted with individuals, health service providers, business enterprises and educational institutions. The generated data revealed common ailments, sicknesses and diseases in the area, the four most frequent health complaints being frequent coughing, headaches, influenza/common colds and rampant chest pains. Research findings indicated that residents had respiratory tract-related problems, suspected to be linked to the effects of air pollution caused by the emission of sulphur dioxide (SO2) from mining and smelting activities. Residents were frequently in contact with SO2 and related gases and fumes, and with mineral and silica dust generated by the mining processes. No clear differences were noticed between the health status of residents living at the control site and those in the main study area; however, the sites most affected were those close to where Ni-Cu is exploited. Environmental factors resulting from mining and smelting activities, among others, could contribute to the negative health effects occurring at Selebi Phikwe.
Electronic Band Structure of Helical Polyisocyanides.
Champagne, Benoît; Liégeois, Vincent; Fripiat, Joseph G; Harris, Frank E
2017-10-19
Restricted Hartree-Fock computations are reported for a methyl isocyanide polymer (repeating unit -C═N-CH₃), whose most stable conformation is expected to be a helical chain. The computations used a standard contracted Gaussian orbital set at the computational levels STO-3G, 3-21G, 6-31G, and 6-31G**, and studies were made for two line-group configurations motivated by earlier work and by studies of space-filling molecular models: (1) a structure of line-group symmetry L9₅, containing a 9-fold screw axis with atoms displaced in the axial direction by 5/9 times the lattice constant, and (2) a previously proposed structure of symmetry L4₁, containing a 4-fold screw axis with translation by 1/4 of the lattice constant. Full use of the line-group symmetry was employed so that most of the computational complexity depends only on the size of the asymmetric repeating unit. Data reported include computed bond properties, atomic charge distribution, longitudinal polarizability, band structure, and the convoluted density of states. Most features of the description were found to be insensitive to the level of computational approximation. The work also illustrates the importance of exploiting line-group symmetry to extend the range of polymer structural problems that can be treated computationally.
NASA Astrophysics Data System (ADS)
Royer, J. J.; Filippov, L. O.
2017-07-01
This work aims at improving the exploitation of the K-Mg salt ore of the Verkhnekamskoye deposit using advanced information technology (IT), such as 3D geostatistical modeling techniques, together with high-performance flotation. It is expected to provide more profitable exploitation of the deposit while avoiding the formation of dramatic sinkholes through better knowledge of the deposit. The GeoChron modelling method for sedimentary formations (Mallet, 2014) was used to improve the knowledge of the Verkhnekamskoye potash deposit, Perm region, Russia. After a short introduction to the modern theory of mathematical modelling applied to mineral resources exploitation and geology, new results are presented on the sedimentary architecture of the ore deposit. They clarify the structural geology and the fault orientations, a key point for avoiding catastrophic water inflows into the recharge zone during exploitation. These results are important for avoiding catastrophic sinkholes during exploitation.
ERIC Educational Resources Information Center
Fokides, Emmanuel
2016-01-01
Immigrant students face a multitude of problems, among which are poor social adaptation and school integration. On the other hand, although digital narrations are widely used in education, they are rarely used for aiding students or for the resolution of complex problems. This study exploits the potential of digital narrations towards this end, by…
NASA Astrophysics Data System (ADS)
Grünbaum, F. A.; Pacharoni, I.; Zurrián, I.
2017-02-01
The problem of recovering a signal of finite duration from a piece of its Fourier transform was solved at Bell Labs in the 1960s, by exploiting a 'miracle': a certain naturally appearing integral operator commutes with an explicit differential one. Here we show that this same miracle holds in a matrix-valued version of the same problem.
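For reference, the scalar prototype of this commuting-operator 'miracle' (due to Slepian, Landau and Pollak) can be written as follows; the paper establishes a matrix-valued analogue of this fact.

```latex
% Time- and band-limiting integral operator on [-1, 1]:
(Ef)(x) = \int_{-1}^{1} \frac{\sin c\,(x-y)}{\pi (x-y)}\, f(y)\, dy
% commutes with the prolate spheroidal differential operator
(Df)(x) = \frac{d}{dx}\!\left[(1-x^{2})\,\frac{df}{dx}\right] - c^{2}x^{2} f(x),
% i.e. ED = DE, so E and D share a common basis of eigenfunctions.
```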
Geothermal reservoir simulation
NASA Technical Reports Server (NTRS)
Mercer, J. W., Jr.; Faust, C.; Pinder, G. F.
1974-01-01
The prediction of long-term geothermal reservoir performance and the environmental impact of exploiting this resource are two important problems associated with the utilization of geothermal energy for power production. Our research effort addresses these problems through numerical simulation. Computer codes based on the solution of partial-differential equations using finite-element techniques are being prepared to simulate multiphase energy transport, energy transport in fractured porous reservoirs, well bore phenomena, and subsidence.
An ontology-driven tool for structured data acquisition using Web forms.
Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A
2017-08-01
Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL); the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structuring Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
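The querying step can be illustrated with rdflib. The sketch below (hypothetical namespace, property names and threshold, not the authors' ontology) stores two "semantically enriched" form responses as RDF and retrieves them with SPARQL:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/form#")   # hypothetical namespace
g = Graph()

# Two form responses captured as typed, queryable RDF data.
for pid, score in [("p1", 3), ("p2", 1)]:
    node = EX[pid]
    g.add((node, RDF.type, EX.MotorAssessment))
    g.add((node, EX.impairmentScore, Literal(score)))

# Query the acquired data, e.g. to flag impaired motor function.
q = """SELECT ?p WHERE {
         ?p a ex:MotorAssessment ; ex:impairmentScore ?s .
         FILTER(?s >= 2)
       }"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.p)   # prints http://example.org/form#p1
```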
Demographic threats to the sustainability of Brazil nut exploitation.
Peres, Carlos A; Baider, Claudia; Zuidema, Pieter A; Wadt, Lúcia H O; Kainer, Karen A; Gomes-Silva, Daisy A P; Salomão, Rafael P; Simões, Luciana L; Franciosi, Eduardo R N; Cornejo Valverde, Fernando; Gribel, Rogério; Shepard, Glenn H; Kanashiro, Milton; Coventry, Peter; Yu, Douglas W; Watkinson, Andrew R; Freckleton, Robert P
2003-12-19
A comparative analysis of 23 populations of the Brazil nut tree (Bertholletia excelsa) across the Brazilian, Peruvian, and Bolivian Amazon shows that the history and intensity of Brazil nut exploitation are major determinants of population size structure. Populations subjected to persistent levels of harvest lack juvenile trees less than 60 centimeters in diameter at breast height; only populations with a history of either light or recent exploitation contain large numbers of juvenile trees. A harvesting model confirms that intensive exploitation levels over the past century are such that juvenile recruitment is insufficient to maintain populations over the long term. Without management, intensively harvested populations will succumb to a process of senescence and demographic collapse, threatening this cornerstone of the Amazonian extractive economy.
Child sex trafficking and commercial sexual exploitation: health care needs of victims.
Greenbaum, Jordan; Crawford-Jakubiak, James E
2015-03-01
Child sex trafficking and commercial sexual exploitation of children (CSEC) are major public health problems in the United States and throughout the world. Despite large numbers of American and foreign youth affected and a plethora of serious physical and mental health problems associated with CSEC, there is limited information available to pediatricians regarding the nature and scope of human trafficking and how pediatricians and other health care providers may help protect children. Knowledge of risk factors, recruitment practices, possible indicators of CSEC, and common medical and behavioral health problems experienced by victims will help pediatricians recognize potential victims and respond appropriately. As health care providers, educators, and leaders in child advocacy, pediatricians play an essential role in addressing the public health issues faced by child victims of CSEC. Their roles can include working to increase recognition of CSEC, providing direct care and anticipatory guidance related to CSEC, engaging in collaborative efforts with medical and nonmedical colleagues to provide for the complex needs of youth, and educating child-serving professionals and the public. Copyright © 2015 by the American Academy of Pediatrics.
The Power of Implicit Social Relation in Rating Prediction of Social Recommender Systems
Reafee, Waleed; Salim, Naomie; Khan, Atif
2016-01-01
The explosive growth of social networks in recent times has presented a powerful source of information to be utilized as an extra source for assisting in the social recommendation problems. The social recommendation methods that are based on probabilistic matrix factorization have improved recommendation accuracy and partly solved the cold-start and data sparsity problems. However, these methods only exploit explicit social relations and almost completely ignore implicit social relations. In this article, we first propose an algorithm to extract implicit relations in the undirected graphs of social networks by exploiting link prediction techniques. Furthermore, we propose a new probabilistic matrix factorization method to alleviate the data sparsity problem by incorporating explicit friendship and implicit friendship. We evaluate our proposed approach on two real datasets, Last.Fm and Douban. The experimental results show that our method performs much better than state-of-the-art approaches, which indicates the importance of incorporating implicit social relations in the recommendation process to address poor prediction accuracy. PMID:27152663
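The core mechanism can be sketched in a few lines of SGD. The toy below (a hedged illustration, not the paper's probabilistic model) fits observed ratings with latent factors while a social term pulls each user's factors toward the average of their explicit or implicit friends:

```python
import numpy as np

def social_mf(R, friends, k=8, lam=0.1, beta=0.1, lr=0.01, epochs=50):
    """Matrix factorization with a social regularizer: R holds ratings
    (0 = missing), approximated by U @ V.T; `friends` maps each user to a
    list of friend indices (explicit plus extracted implicit relations)."""
    rng = np.random.default_rng(0)
    n_u, n_i = R.shape
    U, V = rng.normal(0, 0.1, (n_u, k)), rng.normal(0, 0.1, (n_i, k))
    obs = np.argwhere(R > 0)
    for _ in range(epochs):
        for u, i in obs:
            e = R[u, i] - U[u] @ V[i]
            gu = e * V[i] - lam * U[u]
            gv = e * U[u] - lam * V[i]
            U[u] += lr * gu
            V[i] += lr * gv
        for u, fs in friends.items():      # social smoothness term
            if fs:
                U[u] -= lr * beta * (U[u] - np.mean([U[f] for f in fs], axis=0))
    return U, V

R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 5.0]])
U, V = social_mf(R, friends={0: [1], 1: [0], 2: []})
```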
Indoor detection of passive targets recast as an inverse scattering problem
NASA Astrophysics Data System (ADS)
Gottardi, G.; Moriyama, T.
2017-10-01
Wireless local area networks represent an alternative to custom sensors and dedicated surveillance systems for indoor target detection. The availability of channel state information has opened up the exploitation of the spatial and frequency diversity given by orthogonal frequency-division multiplexing. Such fine-grained information can be used to solve the detection problem as an inverse scattering problem. The goal of the detection is to reconstruct the properties of the investigation domain, namely to estimate whether the domain is empty or occupied by targets, starting from measurements of the electromagnetic perturbation of the wireless channel. An innovative inversion strategy exploiting both the frequency and the spatial diversity of the channel state information is proposed. The target-dependent features are identified by combining the Kruskal-Wallis test and principal component analysis. The experimental validation points out the detection performance of the proposed method when applied to an existing wireless link of a WiFi architecture deployed in a real indoor scenario. False detection rates lower than 2% have been obtained.
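The feature-identification step can be mimicked with standard tools. The sketch below (synthetic data and illustrative parameters, not the authors' pipeline) keeps the channel features whose distributions differ between "empty" and "occupied" recordings according to a Kruskal-Wallis test, then compresses the survivors with PCA:

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.decomposition import PCA

def select_and_project(X_empty, X_occupied, alpha=0.05, n_components=3):
    """Keep features whose empty/occupied distributions differ
    (Kruskal-Wallis p-value < alpha), then project them with PCA."""
    keep = [j for j in range(X_empty.shape[1])
            if kruskal(X_empty[:, j], X_occupied[:, j]).pvalue < alpha]
    X = np.vstack([X_empty[:, keep], X_occupied[:, keep]])
    return PCA(n_components=min(n_components, len(keep))).fit_transform(X), keep

rng = np.random.default_rng(1)
empty = rng.normal(0, 1, (50, 10))
occupied = rng.normal(0, 1, (50, 10))
occupied[:, :4] += 1.5      # a target perturbs the first four features
Z, kept = select_and_project(empty, occupied)
```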
NASA Astrophysics Data System (ADS)
Zhou, Pengpeng; Li, Ming; Lu, Yaodong
2017-10-01
Assessing the sustainability of coastal groundwater is significant for groundwater management, as coastal groundwater is vulnerable to over-exploitation and contamination. To address the issues of serious groundwater level drawdown and potential seawater intrusion risk in a multi-layered coastal aquifer system in Zhanjiang, China, this paper presents a numerical modelling study of the groundwater sustainability of this aquifer system. The transient modelling results show that the groundwater budget was negative (-3826×10⁴ to -4502×10⁴ m³/a) during the years 2008-2011, revealing that this aquifer system was over-exploited. Meanwhile, the groundwater sustainability was assessed by evaluating the negative hydraulic pressure area (NHPA) of the unconfined aquifer and the groundwater level dynamics and flow velocity of the offshore boundaries of the confined aquifers. The results demonstrate that Nansan Island is most influenced by the NHPA and that the local groundwater should not be exploited. The results also suggest that, with the current groundwater exploitation scheme, the sustainable yield should be 1.784×10⁸ m³/a (i.e., decreased by 20% from the current exploitation amount). To satisfy public water demands, the 20% decrease in the exploitation amount can be offset by groundwater sourced from the Taiping groundwater resource field. These results provide valuable guidance for groundwater management in Zhanjiang.
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics and information retrieval. Among various multi-label learning methods, the matrix completion approach has been regarded as promising for transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. Under the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite this success, most matrix-completion-based approaches ignore the smoothness assumption on unlabeled data, i.e., that neighboring instances should share a similar set of labels, and thus may under-exploit the intrinsic structure of the data. In addition, solving the matrix completion problem can be computationally inefficient. To this end, we propose to solve the multi-label learning problem efficiently as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach.
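The ingredients of such a model can be shown with a simpler solver. The sketch below (proximal gradient with singular value thresholding, under illustrative parameters; the paper itself uses ADMM) minimizes a squared fit on observed entries plus a graph-Laplacian smoothness term and a nuclear-norm penalty:

```python
import numpy as np

def mc_manifold(M, mask, L, mu=0.1, tau=1.0, step=0.2, iters=100):
    """Proximal-gradient sketch of manifold-regularized matrix completion:
    minimize 0.5*||mask*(X - M)||^2 + (mu/2)*tr(X^T L X) + tau*||X||_*,
    where L is a graph Laplacian over the instances."""
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M) + mu * (L @ X)
        Y = X - step * grad
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - step * tau, 0.0)) @ Vt  # SVT prox
    return X

rng = np.random.default_rng(0)
M = rng.random((6, 4))
mask = (rng.random((6, 4)) < 0.5).astype(float)
W = np.ones((6, 6)) - np.eye(6)
L = np.diag(W.sum(1)) - W          # toy fully-connected graph Laplacian
X = mc_manifold(M * mask, mask, L)
```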
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm, based on imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the process of harmony creation in HS algorithm to improve the exploitation phase of the ICA algorithm. In addition, the proposed hybrid algorithm uses SA to make a balance between exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including genetic algorithm (GA), HS, and ICA on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising and can be used in several real-life engineering and management problems.
Millimetre-Wave Backhaul for 5G Networks: Challenges and Solutions.
Feng, Wei; Li, Yong; Jin, Depeng; Su, Li; Chen, Sheng
2016-06-16
The trend for dense deployment in future 5G mobile communication networks makes current wired backhaul infeasible owing to the high cost. Millimetre-wave (mm-wave) communication, a promising technique with the capability of providing a multi-gigabit transmission rate, offers a flexible and cost-effective candidate for 5G backhauling. By exploiting highly directional antennas, it becomes practical to cope with explosive traffic demands and to deal with interference problems. Several advancements in physical layer technology, such as hybrid beamforming and full duplexing, bring new challenges and opportunities for mm-wave backhaul. This article introduces a design framework for 5G mm-wave backhaul, including routing, spatial reuse scheduling and physical layer techniques. The associated optimization model, open problems and potential solutions are discussed to fully exploit the throughput gain of the backhaul network. Extensive simulations are conducted to verify the potential benefits of the proposed method for the 5G mm-wave backhaul design.
Protocol to Exploit Waiting Resources for UASNs.
Hung, Li-Ling; Luo, Yung-Jeng
2016-03-08
The transmission speed of acoustic waves in water is much slower than that of radio waves in terrestrial wireless sensor networks; thus, the propagation delay in underwater acoustic sensor networks (UASNs) is much greater. Longer propagation delay leads to complicated communication and collision problems. To solve collision problems, some studies have proposed waiting mechanisms; however, long waits result in low bandwidth utilization. To improve throughput, this study proposes a slotted medium access control protocol to enhance bandwidth utilization in UASNs. The proposed mechanism increases communication by exploiting temporal and spatial resources that are typically left idle to protect communication against interference. By reducing wait time, network performance and energy consumption can be improved. A performance evaluation demonstrates that when data packets are large or sensor deployment is dense, the energy consumption of the proposed protocol is lower, and its throughput higher, than that of existing protocols.
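The scale of the problem is easy to see with a back-of-the-envelope slot-sizing calculation (generic illustration, not the paper's protocol): underwater, worst-case propagation delay, not transmission time, dominates the slot length.

```python
SOUND_SPEED = 1500.0  # nominal speed of sound in water, m/s (vs ~3e8 m/s for RF)

def slot_length(packet_bits, bitrate_bps, max_range_m, guard_s=0.05):
    """A slot must cover the transmission time plus the worst-case
    propagation delay plus a small guard interval."""
    tx = packet_bits / bitrate_bps
    prop = max_range_m / SOUND_SPEED
    return tx + prop + guard_s

# A 1 kb packet at 10 kbps over 1.5 km: 0.1 s of transmission,
# but a full 1.0 s of propagation delay.
print(slot_length(1000, 10_000, 1500))  # ~1.15 s
```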
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block-sparse structure of the target image, sparse solutions for multiple measurement vectors (MMV) can be applied to ISAR imaging, and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, so its computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Blackmore, Lars; Wolf, Michael; Fathpour, Nanaz; Newman, Claire; Elfes, Alberto
2009-01-01
Hot air (Montgolfiere) balloons represent a promising vehicle system for possible future exploration of planets and moons with thick atmospheres, such as Venus and Titan. To go to a desired location, this vehicle primarily uses the horizontal wind that varies with altitude, with a small contribution from its own actuation. A main challenge is how to plan such a trajectory in a highly nonlinear and time-varying wind field. This paper poses this trajectory planning as a graph search on a space-time grid and addresses its computational aspects. When capturing the various time scales involved in the wind field over the duration of a long exploration mission, the size of the graph becomes excessively large. We show that the adjacency matrix of the graph is block-triangular, and by exploiting this structure, we decompose the large planning problem into several smaller subproblems, whose memory requirement stays almost constant as the problem size grows. The approach is demonstrated on a global reachability analysis of a possible Titan mission scenario.
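The structural point is that edges in a time-expanded graph only go from time t to t+1, so the adjacency matrix is block-triangular and a forward sweep suffices. A minimal reachability sketch (toy data, not the mission planner):

```python
def reachability(wind, start, T):
    """Forward dynamic programming on a time-expanded grid. `wind[t][cell]`
    lists the cells reachable from `cell` during step t (wind drift plus a
    small amount of actuation). Because edges only connect layer t to layer
    t+1, only one layer of the graph is needed in memory at a time."""
    reachable = {start}
    for t in range(T):
        reachable = {nxt for cell in reachable for nxt in wind[t][cell]}
    return reachable

# Toy 1-D example: 3 cells, wind pushing right at step 0, calmer at step 1.
wind = [{0: [1], 1: [2], 2: [2]},
        {0: [0, 1], 1: [1, 2], 2: [2]}]
print(reachability(wind, start=0, T=2))  # {1, 2}
```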
Novelty and Inductive Generalization in Human Reinforcement Learning.
Gershman, Samuel J; Niv, Yael
2015-07-01
In reinforcement learning (RL), a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the Bayesian model and temporal difference learning algorithms that have been proposed as models of RL in humans and animals. According to our view, the search for the best option is guided by abstract knowledge about the relationships between different options in an environment, resulting in greater search efficiency compared to traditional RL algorithms previously applied to human cognition. In two behavioral experiments, we test several predictions of our model, providing evidence that humans learn and exploit structured inductive knowledge to make predictions about novel options. In light of this model, we suggest a new interpretation of dopaminergic responses to novelty.
High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling
Sung, Kyunghyun; Hargreaves, Brian A
2013-01-01
Purpose: To present and validate a new method that formalizes a direct link between the k-space and wavelet domains in order to apply separate undersampling and reconstruction to high- and low-spatial-frequency k-space data. Theory and Methods: High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results: Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion: The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540
Offspring Generation Method for interactive Genetic Algorithm considering Multimodal Preference
NASA Astrophysics Data System (ADS)
Ito, Fuyuko; Hiroyasu, Tomoyuki; Miki, Mitsunori; Yokouchi, Hisatake
In interactive genetic algorithms (iGAs), computer simulations prepare design candidates that are then evaluated by the user; an iGA can therefore predict a user's preferences. Conventional iGA problems involve a search for a single optimum solution, and iGAs were developed to find this single optimum. Our target problems, by contrast, have several peaks in a function with small differences among the peaks, and for such problems it is better to show all the peaks to the user. Product recommendation in web shopping sites is one example: several types of preference trend should be prepared for the users of a shopping site. Exploitation and exploration are important mechanisms in GA search, and for effective exploitation the offspring generation method (crossover) is very important. Here, we introduce a new offspring generation method for iGAs in multimodal problems: individuals are clustered into subgroups and offspring are generated within each group. The proposed method was applied to an experimental iGA system, in which users decide on preferable t-shirts to buy, to examine its effectiveness. The results of the subjective experiment confirmed that the proposed method enables offspring generation that takes multimodal preferences into consideration, without adversely affecting the performance of preference prediction.
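The operator itself is compact. The sketch below (a hedged illustration with k-means and blend crossover standing in for the paper's clustering and crossover choices) generates offspring only between parents from the same preference subgroup:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_offspring(pop, n_clusters=3, seed=0):
    """Cluster a real-coded population into preference subgroups, then apply
    blend crossover only within each cluster so that every preference peak
    keeps being exploited."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pop)
    children = []
    for c in range(n_clusters):
        group = pop[labels == c]
        for _ in range(len(group)):
            p1, p2 = group[rng.integers(len(group), size=2)]
            w = rng.uniform(0, 1, size=p1.shape)
            children.append(w * p1 + (1 - w) * p2)  # blend crossover
    return np.array(children)

rng = np.random.default_rng(1)
pop = np.vstack([rng.normal(m, 0.1, (10, 4)) for m in (0, 5, 10)])  # 3 peaks
kids = clustered_offspring(pop)
```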
Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju
2017-01-01
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, but both also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but a low convergence speed, whereas TLBO converges much faster but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing global exploration and exploitation, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224
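One plausible reading of a self-adaptive selection strategy is sketched below (illustrative only; the paper's actual strategy may differ, and hs_step/tlbo_step are placeholders for the modified HS and TLBO updates): the probability of choosing the exploratory operator is nudged toward whichever operator last improved the incumbent.

```python
import numpy as np

def hstlbo_step(pop, f, p_hs, hs_step, tlbo_step):
    """One hybrid iteration: with probability p_hs apply an HS-style
    exploration move, otherwise a TLBO-style exploitation move, then
    adapt p_hs based on whether the best objective value improved."""
    best_before = min(f(x) for x in pop)
    use_hs = np.random.random() < p_hs
    pop = hs_step(pop) if use_hs else tlbo_step(pop)
    improved = min(f(x) for x in pop) < best_before
    delta = 0.02 if improved else -0.02   # reward the operator just used
    p_hs = float(np.clip(p_hs + (delta if use_hs else -delta), 0.1, 0.9))
    return pop, p_hs
```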
NASA Astrophysics Data System (ADS)
Reed, P. M.
2012-12-01
Climate change, population demands, and evolving land-use represent strong risks to the sustainable development and stability of world-wide urban water supplies. There is a growing consensus that non-structural supply management instruments such as water markets have significant potential to reduce the risks and vulnerabilities in complex urban water systems. This paper asks a common question: what are the tradeoffs for a city using water market supply instruments? This question emerges quickly in policy and management, but its answer is deceptively difficult to attain using traditional planning tools and management frameworks. This research demonstrates new frameworks that facilitate rapid evaluation of hypotheses on the reliability, resiliency, adaptability, and cost-effectiveness of urban water supply systems. This study considers a broader exploration of the issues of "nonstationarity" and "uncertainty" in urban water planning. As we invest in new information and prediction frameworks for the coupled human-natural systems that define our water, our problem definitions (i.e., objectives, constraints, preferences, and hypotheses) themselves evolve. From a formal mathematical perspective, this means that our management problems are structurally uncertain and nonstationary (i.e., the definition of optimality changes across regions, times, and stakeholders). This uncertainty and nonstationarity in our problem definitions needs to be acknowledged more explicitly in adaptive management and integrated water resources management. This study demonstrates the potential benefits of exploring these issues in the context of a city in the Lower Rio Grande Valley (LRGV) of Texas, USA, determining how to use its regional water market to manage population and drought risks.
Optimal perturbations for nonlinear systems using graph-based optimal transport
NASA Astrophysics Data System (ADS)
Grover, Piyush; Elamvazhuthi, Karthik
2018-06-01
We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
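A minimal discrete analogue of the Monge-Kantorovich step with quadratic cost, posed as a linear program over a transport plan between two small point-supported measures; the paper's graph-based pseudo-time flow and transfer-operator propagation are not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in: optimal transport between an initial measure mu on points xs
# and a final measure nu on points ys, with quadratic cost c(x,y) = |x - y|^2.
rng = np.random.default_rng(0)
xs, ys = rng.random((4, 2)), rng.random((5, 2))      # support points
mu = np.full(4, 1 / 4)                                # initial measure
nu = np.full(5, 1 / 5)                                # final measure
C = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(-1)  # quadratic cost matrix

# LP: minimize <C, P> subject to P 1 = mu, P^T 1 = nu, P >= 0
m, n = C.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1        # row-sum (marginal mu) constraints
for j in range(n):
    A_eq[m + j, j::n] = 1                 # column-sum (marginal nu) constraints
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
              bounds=(0, None), method="highs")
P = res.x.reshape(m, n)                   # optimal transport plan
```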
Biotemplated materials for sustainable energy and environment: current status and challenges.
Zhou, Han; Fan, Tongxiang; Zhang, Di
2011-10-17
Materials science will play a key role in the further development of emerging solutions for the increasing problems of energy and environment. Materials found in nature have many inspiring structures, such as hierarchical organizations, periodic architectures, or nanostructures, that endow them with amazing functions, such as energy harvesting and conversion, antireflection, structural coloration, superhydrophobicity, and biological self-assembly. Biotemplating is an effective strategy to obtain morphology-controllable materials with structural specificity, complexity, and related unique functions. Herein, we highlight the synthesis and application of biotemplated materials for six key areas of energy and environment technologies, namely, photocatalytic hydrogen evolution, CO2 reduction, solar cells, lithium-ion batteries, photocatalytic degradation, and gas/vapor sensing. Although the applications differ from each other, a common fundamental challenge is to realize optimum structures for improved performances. We highlight the role of four typical structures derived from biological systems exploited to optimize properties: hierarchical (porous) structures, periodic (porous) structures, hollow structures, and nanostructures. We also provide examples of using biogenic elements (e.g., C, Si, N, I, P, S) for the creation of active materials. Finally, we discuss the challenges of achieving the desired performance for large-scale commercial applications and provide some useful prototypes from nature for the biomimetic design of new materials or systems. The emphasis is mainly focused on the structural effects and compositional utilization of biotemplated materials. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reliability and safety, and the risk of construction damage in mining areas
NASA Astrophysics Data System (ADS)
Skrzypczak, Izabela; Kogut, Janusz P.; Kokoszka, Wanda; Oleniacz, Grzegorz
2018-04-01
This article concerns the reliability and safety of building structures in mining areas, with a particular emphasis on the quantitative risk analysis of buildings. The issues of threat assessment and risk estimation, in the design of facilities in mining exploitation areas, are presented here, indicating the difficulties and ambiguities associated with their quantification and quantitative analysis. This article presents the concept of quantitative risk assessment of the impact of mining exploitation, in accordance with ISO 13824 [1]. The risk analysis is illustrated through an example of a construction located within an area affected by mining exploitation.
"Soft Technology" and Criticism of the Western Model of Development
ERIC Educational Resources Information Center
Harper, Peter
1973-01-01
Alternatives to the capitalistic Western model of development are suggested. Three problems afflicting Western society--alienation, resource exploitation, and environmental stability--are discussed, and a model which advocates both political and technological change is proposed. (SM)
Rethinking Intelligence to Integrate Counterterrorism into the Local Law Enforcement Mission
2007-03-01
a needle-in-the-haystack problem. Also referred to as the wheat-versus-the-chaff problem, valuable information must be separated from unimportant… information and processed before analysts can yield any useful intelligence. 3. Processing and Exploitation: To address the wheat-versus-chaff… Despite the perception that Chicago is an aging Rust Belt city, some experts report that it has the largest high technology and information
ERIC Educational Resources Information Center
Sternberg, Kathleen J.; Baradaran, Laila P.; Abbott, Craig B.; Lamb, Michael E.; Guterman, Eva
2006-01-01
A mega-analytic study was designed to exploit the power of a large data set combining raw data from multiple studies (n=1870) to examine the effects of type of family violence, age, and gender on children's behavior problems assessed using the Child Behavior Checklist (CBCL). Our findings confirmed that children who experienced multiple forms of…
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
Measuring flows in the solar interior: current developments, results, and outstanding problems
NASA Astrophysics Data System (ADS)
Schad, Ariane
2016-10-01
I will present an overview of current developments in determining flows in the solar interior and recent results from helioseismology. I will place special focus on the inference of the deep structure of the meridional flow, which is one of the most challenging problems in helioseismology. In recent times, promising approaches have been developed for solving this problem. Time-distance analysis made large improvements here after becoming aware of, and compensating for, a systematic effect in the analysis whose origin is not yet clear. In addition, a different approach is now available, which directly exploits the distortion of mode eigenfunctions by the meridional flow as well as by rotation. These methods have presented us with partly surprising, complex meridional flow patterns, which, however, do not provide a consistent picture of the flow. Resolving this puzzle is part of current research, since it has important consequences for our understanding of the solar dynamo. Another interesting discrepancy was found in recent studies between the amplitudes of the large- and small-scale dynamics in the convection zone estimated from helioseismology and those predicted by theoretical models. This raises fundamental questions about how the Sun, and in general a star, maintains its heat transport and redistributes the angular momentum that leads, e.g., to the observed differential rotation.
Convex Formulations of Learning from Crowds
NASA Astrophysics Data System (ADS)
Kajino, Hiroshi; Kashima, Hisashi
The use of crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since such services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches: (i) non-convexity and (ii) task homogeneity. Most existing methods treat true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.
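One plausible convex formulation in the spirit described, assuming each worker's personal model is a base model plus a worker-specific deviation, with logistic loss and quadratic penalties; the names and the exact objective are illustrative, not necessarily the paper's:

```python
import numpy as np

def crowd_logistic(X, y_workers, lam=1.0, eta=0.1, iters=500):
    """Convex sketch of 'personal models': worker j classifies with
    w_j = w0 + v_j; logistic loss on that worker's labels plus ridge
    penalties on w0 and each v_j keeps the joint objective convex.
    y_workers[j] holds worker j's {0,1} labels (NaN where unlabeled)."""
    n, d = X.shape
    J = len(y_workers)
    w0, V = np.zeros(d), np.zeros((J, d))
    for _ in range(iters):
        g0, gV = lam * w0, lam * V          # ridge gradients
        for j, yj in enumerate(y_workers):
            mask = ~np.isnan(yj)
            Xj, tj = X[mask], yj[mask]
            p = 1 / (1 + np.exp(-Xj @ (w0 + V[j])))
            g = Xj.T @ (p - tj) / max(mask.sum(), 1)
            g0 += g                         # base model sees every worker
            gV[j] += g                      # deviation sees only worker j
        w0 -= eta * g0 / J
        V -= eta * gV
    return w0, V                            # w0 acts as the aggregate model

# Toy usage: 3 workers, each flipping ~20% of the true labels
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y_true = (X @ rng.standard_normal(5) > 0).astype(float)
noisy = [np.where(rng.random(100) < 0.8, y_true, 1 - y_true) for _ in range(3)]
w0, V = crowd_logistic(X, noisy)
```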
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
Topological Structures in Multiferroics - Domain Walls, Skyrmions and Vortices
Seidel, Jan; Vasudevan, Rama K.; Valanoor, Nagarajan
2015-12-15
Topological structures in multiferroic materials have recently received considerable attention because of their potential use as nanoscale functional elements. Their reduced size in conjunction with exotic arrangement of the ferroic order parameter and potential order parameter coupling allows for emergent and unexplored phenomena in condensed matter and functional materials systems. This will lead to exciting new fundamental discoveries as well as application concepts that exploit their response to external stimuli such as mechanical strain, electric and magnetic fields. In this review we capture the current development of this rapidly moving field with specific emphasis on key achievements that have cast light on how such topological structures in multiferroic materials systems can be exploited for use in complex oxide nanoelectronics and spintronics.
Montanino, A; Fortunato, A; Angelillo, M
2016-07-01
In this paper, we study the fluid-structure interaction in a weakened basilar artery. The aim is to study how the wall shear stress changes in space and time because of the weakening, since spatial and temporal changes are thought to be possible causes of aneurysm and vascular diseases. The arterial wall, in its natural configuration, is modeled as a hyperelastic cylinder, inhomogeneous along its axis, in order to simulate the axisymmetric weakening. The fluid is studied by exploiting a recent approach for quasi-one-dimensional flows in slowly varying ducts, which allows the averaged equations of mass and energy balance to be written on the basis of the velocity profile in a straight duct. The unknowns are the wall pressure, the average velocity, and the wall radial displacement. The problem is solved in two parts: first, the stationary non-linear coupled problem is solved, and an intermediate configuration is obtained. Then, we study the variation of the basic unknowns about the intermediate configuration, considering time dependence over the cardiac cycles. The results suggest that, with a 10% reduction of the main elastic modulus, the shear stress in the weakened zone changes its sign and doubles the maximum stress value detected in the healthy zone. Copyright © 2015 John Wiley & Sons, Ltd.
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of a novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small amount of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude less samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
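A minimal sketch of the HMC core (leapfrog integration plus a Metropolis correction), applied to a toy posterior standing in for the source parameters; the receiver-side strain data base and the spectral-element forward solver are outside the scope of this sketch:

```python
import numpy as np

def hmc(logp_grad, x0, eps=0.1, L=20, n_samples=1000, seed=0):
    """Minimal Hamiltonian Monte Carlo. logp_grad(x) must return
    (log p(x), grad log p(x)); derivatives drive the leapfrog trajectory,
    which is why HMC needs far fewer samples than derivative-free MC."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp, g = logp_grad(x)
    out = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)           # sample momentum
        x_new, p_new = x.copy(), p + 0.5 * eps * g # initial half kick
        lp_new, g_new = lp, g
        for i in range(L):                         # leapfrog trajectory
            x_new = x_new + eps * p_new
            lp_new, g_new = logp_grad(x_new)
            if i != L - 1:
                p_new = p_new + eps * g_new
        p_new = p_new + 0.5 * eps * g_new          # final half kick
        # Metropolis correction for the discretization error
        dH = (lp_new - lp) - 0.5 * (p_new @ p_new - p @ p)
        if np.log(rng.random()) < dH:
            x, lp, g = x_new, lp_new, g_new
        out.append(x.copy())
    return np.array(out)

# Toy 'source inversion': standard normal posterior over 3 parameters
samples = hmc(lambda x: (-0.5 * x @ x, -x), np.zeros(3))
```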
Labour exploitation and health: a case series of men and women seeking post-trafficking services.
Turner-Moss, Eleanor; Zimmerman, Cathy; Howard, Louise M; Oram, Siân
2014-06-01
Research on the health of trafficked men, and on the health problems associated with trafficking for labor exploitation, is extremely limited. This study analysed data from a case series of anonymised case records of a consecutive sample of 35 men and women who had been trafficked for labor exploitation in the UK and who were receiving support from a non-governmental service between June 2009 and July 2010. Over three-quarters of our sample was male (77%) and two-thirds were aged between 18 and 35 years (mean 32.9 years, SD 10.2). Forty percent reported experiencing physical violence while they were trafficked. Eighty-one percent (25/31) reported one or more physical health symptoms. Fifty-seven percent (17/30) reported one or more post-traumatic stress symptoms. A substantial proportion of men and women who are trafficked for labor exploitation may experience violence and abuse, and have physical and mental health symptoms. People who have been trafficked for forced labor need access to medical assessment and treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimsiak, Tomasz, E-mail: tomas@mat.umk.pl; Rozkosz, Andrzej, E-mail: rozkosz@mat.umk.pl
In this paper we consider the problem of valuation of American options written on dividend-paying assets whose price dynamics follow the classical multidimensional Black and Scholes model. We provide a general early exercise premium representation formula for options with payoff functions which are convex or satisfy mild regularity assumptions. Examples include index options, spread options, call-on-max options, put-on-min options, multiple-strike options and power-product options. In the proof of the formula we exploit close connections between the optimal stopping problems associated with valuation of American options, obstacle problems and reflected backward stochastic differential equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.
“Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits includes the creation of a problem solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.
An exploratory study of adolescent pimping relationships.
Anderson, Pamela M; Coyle, Karin K; Johnson, Anisha; Denner, Jill
2014-04-01
In the last decade, public attention to the problem of commercially sexually exploited children (CSEC) has grown. This exploratory qualitative study examines adolescent pimping relationships, including how urban youth perceive these types of relationships. Study data stem from interviews with three young adult informants with first-hand knowledge of adolescent pimping, as well as three gender-specific focus group discussions with a convenience sample of 26 urban high school students who have first- or second-hand knowledge of adolescent pimping. Findings indicate that respondents believe teen pimping exists in their schools and communities, and that those exploited typically do not self-identify as victims. Respondents also believed that younger pimps are more likely to use violence to induce compliance among the girls they exploit, whereas older pimps are more likely to emotionally manipulate young women into exploitation. Further, respondents indicated that some young people agreed to exchange or sell sex for money as a favor to their boyfriends or girlfriends, and some young people believed that selling sex is acceptable under certain circumstances. The growing attention to CSEC provides an important opportunity to expand prevention efforts to reach those most affected and at risk for exploitation. The findings highlight critical areas for augmenting traditional content in school-based HIV/STI and sexuality education classes.
Levinson, Nicholas M.; Boxer, Steven G.
2012-01-01
Chronic myeloid leukemia (CML) is caused by the kinase activity of the BCR-Abl fusion protein. The Abl inhibitors imatinib, nilotinib and dasatinib are currently used to treat CML, but resistance to these inhibitors is a significant clinical problem. The kinase inhibitor bosutinib has shown efficacy in clinical trials for imatinib-resistant CML, but its binding mode is unknown. We present the 2.4 Å structure of bosutinib bound to the kinase domain of Abl, which explains the inhibitor's activity against several imatinib-resistant mutants, and reveals that similar inhibitors that lack a nitrile moiety could be effective against the common T315I mutant. We also report that two distinct chemical compounds are currently being sold under the name “bosutinib”, and report spectroscopic and structural characterizations of both. We show that the fluorescence properties of these compounds allow inhibitor binding to be measured quantitatively, and that the infrared absorption of the nitrile group reveals a different electrostatic environment in the conserved ATP-binding sites of Abl and Src kinases. Exploiting such differences could lead to inhibitors with improved selectivity. PMID:22493660
Structural insights into drug development strategy targeting EGFR T790M/C797S.
Zhu, Su-Jie; Zhao, Peng; Yang, Jiao; Ma, Rui; Yan, Xiao-E; Yang, Sheng-Yong; Yang, Jing-Wen; Yun, Cai-Hong
2018-03-02
Treatment of non-small-cell lung cancers (NSCLCs) harboring primary EGFR oncogenic mutations such as L858R and the exon 19 deletion delE746_A750 (Del-19) using gefitinib/erlotinib ultimately fails due to the emergence of the T790M mutation. Though WZ4002/CO-1686/AZD9291 are effective in overcoming EGFR T790M by targeting Cys797 via covalent bonding, their efficacy is again limited due to the emergence of the C797S mutation. New agents that effectively inhibit EGFR T790M without covalent linkage through Cys797 may solve this problem. We present here crystal structures of EGFR activating/drug-resistant mutants in complex with a panel of reversible inhibitors, along with mutagenesis and enzyme kinetic data. These data reveal a previously undescribed hydrophobic clamp structure in the EGFR kinase which may be exploited to facilitate development of next-generation drugs targeting EGFR T790M with or without concomitant C797S. Interestingly, mutations in the hydrophobic clamp that hinder drug binding often also weaken ATP binding and/or abolish kinase activity, and thus do not readily result in resistance to the drugs.
NASA Astrophysics Data System (ADS)
Maruccio, Claudio; Quaranta, Giuseppe; De Lorenzis, Laura; Monti, Giorgio
2016-08-01
Wireless monitoring could greatly impact the fields of structural health assessment and infrastructure asset management. A common problem to be tackled in wireless networks is the electric power supply, which is typically provided by batteries that are replaced periodically. A promising remedy for this issue is to harvest ambient energy. Within this framework, the present paper proposes to harvest ambient-induced vibrations of bridge structures using a new class of piezoelectric textiles. The considered case study is an existing cable-stayed bridge located in Italy along a high-speed road connecting Rome and Naples, for which a recent monitoring campaign allowed the dynamic responses of deck and cables to be recorded. Vibration measurements were first elaborated to provide a comprehensive dynamic assessment of this infrastructure. In order to enhance the electric energy that can be converted from ambient vibrations, the considered energy harvester exploits a power generator built using arrays of electrospun piezoelectric nanofibers. A finite element analysis is performed to demonstrate that such a power generator is able to provide higher energy levels from the recorded dynamic loading time histories than a standard piezoelectric energy harvester. Its feasibility for bridge health monitoring applications is finally discussed.
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution; such is the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. We propose a gradient-based CSI reconstruction algorithm that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
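A hedged sketch of the filtered-gradient idea: a plain ISTA iteration with a smoothing filter applied to the iterate at each step. The uniform filter and the dense random Φ below are illustrative stand-ins for the paper's filter design and structured CSI matrix:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def filtered_gradient_recon(Phi, y, shape, lam=0.01, step=0.1, iters=200):
    """ISTA-style loop for min ||y - Phi x||^2 + lam ||x||_1 with an extra
    smoothing step on the iterate, driving the solution toward a filtered
    version of the back-projected residual."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x - step * (Phi.T @ (Phi @ x - y))                  # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold
        x = uniform_filter(x.reshape(shape), size=3).ravel()    # filtering step
    return x.reshape(shape)

# Toy problem: a dense random Phi standing in for the structured CSI matrix
rng = np.random.default_rng(0)
shape = (8, 8)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64)
x_true[rng.choice(64, 5, replace=False)] = 1.0
recon = filtered_gradient_recon(Phi, Phi @ x_true, shape)
```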
Wong, Dillon; Velasco, Jairo; Ju, Long; Lee, Juwon; Kahn, Salman; Tsai, Hsin-Zon; Germany, Chad; Taniguchi, Takashi; Watanabe, Kenji; Zettl, Alex; Wang, Feng; Crommie, Michael F
2015-11-01
Defects play a key role in determining the properties and technological applications of nanoscale materials and, because they tend to be highly localized, characterizing them at the single-defect level is of particular importance. Scanning tunnelling microscopy has long been used to image the electronic structure of individual point defects in conductors, semiconductors and ultrathin films, but such single-defect electronic characterization remains an elusive goal for intrinsic bulk insulators. Here, we show that individual native defects in an intrinsic bulk hexagonal boron nitride insulator can be characterized and manipulated using a scanning tunnelling microscope. This would typically be impossible due to the lack of a conducting drain path for electrical current. We overcome this problem by using a graphene/boron nitride heterostructure, which exploits the atomically thin nature of graphene to allow the visualization of defect phenomena in the underlying bulk boron nitride. We observe three different defect structures that we attribute to defects within the bulk insulating boron nitride. Using scanning tunnelling spectroscopy we obtain charge and energy-level information for these boron nitride defect structures. We also show that it is possible to manipulate the defects through voltage pulses applied to the scanning tunnelling microscope tip.
Poverty crisis in the Third World: the contradictions of World Bank policy.
Burkett, P
1991-01-01
Politicians, the mainstream media, and orthodox social science have all been telling us of a final victory of capitalism over socialism, suggesting that capitalism is the only viable option for solving the world's problems. Yet, the global capitalist system is itself entering the third decade of a profound structural crisis, the costs of which have been borne largely by the exploited and oppressed peoples of the underdeveloped periphery. While the World Bank's latest World Development Report recognizes the current poverty crisis in the third world, its "two-part strategy" for alleviating poverty is based on an inadequate analysis of how peripheral capitalist development marginalizes the basic needs of the third world poor. Hence, the World Bank's assertion that free-market policies are consistent with effective antipoverty programs does not confront the class structures and global capitalist interests bound up with the reproduction of mass poverty in the third world. The World Bank's subordination of the basic needs of the poor to free-market adjustments and reforms in fact suggests that the real purpose of its "two-part strategy" is to ensure continued extraction of surplus from third world countries by maintaining the basic structure of imperialist underdevelopment.
Goldenberg, Shira M.; Engstrom, David; Rolon, Maria Luisa; Silverman, Jay G.; Strathdee, Steffanie A.
2013-01-01
Globally, female sex workers are a population at greatly elevated risk of HIV infection, and the reasons for and context of sex industry involvement have key implications for HIV risk and prevention. Evidence suggests that experiences of sexual exploitation (i.e., forced/coerced sex exchange) contribute to health-related harms. However, public health interventions that address HIV vulnerability and sexual exploitation are lacking. Therefore, the objective of this study was to elicit recommendations for interventions to prevent sexual exploitation and reduce HIV risk from current female sex workers with a history of sexual exploitation or youth sex work. From 2010–2011, we conducted in-depth interviews with sex workers (n = 31) in Tijuana, Mexico who reported having previously experienced sexual exploitation or youth sex work. Participants recommended that interventions aim to (1) reduce susceptibility to sexual exploitation by providing social support and peer-based education; (2) mitigate harms by improving access to HIV prevention resources and psychological support, and reducing gender-based violence; and (3) provide opportunities to exit the sex industry via vocational supports and improved access to effective drug treatment. Structural interventions incorporating these strategies are recommended to reduce susceptibility to sexual exploitation and enhance capacities to prevent HIV infection among marginalized women and girls in Mexico and across international settings. PMID:24023661
Structured background grids for generation of unstructured grids by advancing front method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
1991-01-01
A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
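A minimal sketch of the heat-conduction analogy, assuming a uniform Cartesian background grid, a few fixed spacing sources, and Jacobi sweeps of the discrete Laplace equation (the paper's directional clustering is omitted):

```python
import numpy as np

def spacing_field(nx, ny, sources, n_iter=2000):
    """Diffuse prescribed grid-spacing values from a few source nodes over a
    uniform Cartesian background grid by iterating the discrete Laplace
    equation, mimicking steady-state heat conduction with discrete sources.
    sources: list of ((i, j), spacing_value) pairs held fixed."""
    s = np.full((nx, ny), np.mean([v for _, v in sources]))
    for _ in range(n_iter):
        # Jacobi sweep on interior nodes
        s[1:-1, 1:-1] = 0.25 * (s[:-2, 1:-1] + s[2:, 1:-1] +
                                s[1:-1, :-2] + s[1:-1, 2:])
        s[0, :], s[-1, :] = s[1, :], s[-2, :]      # zero-flux boundaries
        s[:, 0], s[:, -1] = s[:, 1], s[:, -2]
        for (i, j), v in sources:                  # re-impose the sources
            s[i, j] = v
    return s   # interpolate from this field to size elements on the front

field = spacing_field(41, 41, [((10, 10), 0.05), ((30, 30), 0.5)])
```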
NASA Astrophysics Data System (ADS)
Videnova-Adrabinska, V.; Etter, M. C.; Ward, M. D.
1993-04-01
The crystal structure and properties of a number of urea cocrystals are studied with regard to the symmetry of the hydrogen-bonded molecular assemblies. The logical consequences of hydrogen bonding interactions are followed step by step. The problems of aggregate formation, nucleation, and crystal growth are also elucidated. An endeavor is made to envisage the 2-D and 3-D hydrogen bond networks in a manageable way by exploiting graph-set shorthand. Strategies for controlling the symmetry of molecular packing are still to be elaborated. In our strategy, the programmed self-assembly has been based on the principle of molecular recognition of self- and hetero-complementary functional groups. However, the main focus for pre-organizational control has been put on the two-fold axis of the urea molecule.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
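A rough sketch of the estimator's two ingredients, assuming principal components as factor estimates and a constant hard threshold in place of the adaptive entry-wise thresholds of Cai and Liu (a deliberate simplification):

```python
import numpy as np

def factor_threshold_cov(R, k=3, tau=0.1):
    """Covariance estimation under an approximate factor model: take the
    top-k principal components as common factors, then threshold the
    residual (idiosyncratic) covariance instead of forcing it diagonal.
    R: T x p data matrix; tau: constant threshold (stand-in for adaptive
    entry-wise thresholds)."""
    Rc = R - R.mean(0)
    S = Rc.T @ Rc / len(R)                       # sample covariance
    vals, vecs = np.linalg.eigh(S)               # ascending eigenvalues
    V = vecs[:, -k:] * np.sqrt(vals[-k:])        # loadings of top-k factors
    low_rank = V @ V.T                           # common-factor part
    resid = S - low_rank                         # idiosyncratic part
    off = resid - np.diag(np.diag(resid))
    resid_thr = np.diag(np.diag(resid)) + off * (np.abs(off) > tau)
    return low_rank + resid_thr                  # factor part + sparse part

rng = np.random.default_rng(0)
Sigma_hat = factor_threshold_cov(rng.standard_normal((200, 50)))
```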
High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models
NASA Technical Reports Server (NTRS)
Manning, Valerie Michelle
1999-01-01
The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses, and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.
Seebeck, Thomas; Sterk, Geert Jan; Ke, Hengming
2011-01-01
Protozoan infections remain a major unsolved medical problem in many parts of our world. A major obstacle to their treatment is the blatant lack of medication that is affordable, effective, safe and easy to administer. For some of these diseases, including human sleeping sickness, very few compounds are available, many of them old and all of them fraught with toxic side effects. We explore a new concept for developing new-generation antiprotozoan drugs that are based on phosphodiesterase (PDE) inhibitors. Such inhibitors are already used extensively in human pharmacology. Given the high degree of structural similarity between the human and the protozoan PDEs, the vast expertise available in the human field can now be applied to developing disease-specific PDE inhibitors as new antiprotozoan drugs. PMID:21859303
Production of membrane proteins without cells or detergents.
Rajesh, Sundaresan; Knowles, Timothy; Overduin, Michael
2011-04-30
The production of membrane proteins in cellular systems is beset by several problems due to their hydrophobic nature, which often causes misfolding, protein aggregation and cytotoxicity, resulting in poor yields of stable proteins. Cell-free expression has emerged as one of the most versatile alternatives for circumventing these obstacles by producing membrane proteins directly into designed hydrophobic environments. Efficient optimisation of expression and solubilisation conditions using a variety of detergents, membrane mimetics and lipids has yielded structurally and functionally intact membrane proteins, with yields several-fold above the levels possible from cell-based systems. Here we review recently developed techniques available to produce functional membrane proteins, and discuss amphipol, nanodisc and styrene maleic acid lipid particle (SMALP) technologies that can be exploited alongside cell-free expression of membrane proteins. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Wei; Jiang, Changzhong; Roy, Vellaisamy A. L.
2014-11-01
Photocatalytic degradation of toxic organic pollutants is a challenging task in ecological and environmental protection. Recent research shows that magnetic iron oxide-semiconductor composite photocatalytic systems can effectively break through the bottlenecks of single-component semiconductor oxides: low activity under visible light and the challenging recycling of the photocatalyst from the final products. With high reactivity in visible light, magnetic iron oxide-semiconductors can be exploited as important magnetic recovery photocatalysts (MRPs) with a bright future. In this regard, the various composite structures, the charge-transfer mechanism and the outstanding properties of magnetic iron oxide-semiconductor composite nanomaterials are sketched. The latest synthesis methods and recent progress in the photocatalytic applications of magnetic iron oxide-semiconductor composite nanomaterials are reviewed. The problems and challenges that still need to be resolved and future development strategies are discussed.
Flow Navigation by Smart Microswimmers via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Colabrese, Simona; Biferale, Luca; Celani, Antonio; Gustavsson, Kristian
2017-11-01
We have numerically modeled active particles which are able to acquire some limited knowledge of the fluid environment from simple mechanical cues and exert a control on their preferred steering direction. We show that those swimmers can learn effective strategies just by experience, using a reinforcement learning algorithm. As an example, we focus on smart gravitactic swimmers. These are active particles whose task is to reach the highest altitude within some time horizon, exploiting the underlying flow whenever possible. The reinforcement learning algorithm allows particles to learn effective strategies even in difficult situations when, in the absence of control, they would end up being trapped by flow structures. These strategies are highly nontrivial and cannot be easily guessed in advance. This work paves the way towards the engineering of smart microswimmers that solve difficult navigation problems. ERC AdG NewTURB 339032.
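A toy tabular Q-learning version of a gravitactic swimmer, with invented states (discretized flow cues), actions (preferred headings), and reward (altitude gain); the actual flow model and learning setup of the paper are not reproduced here:

```python
import numpy as np

def train_swimmer(n_episodes=500, horizon=50, seed=0):
    """Toy Q-learning gravitactic swimmer: states are coarse local-flow cues
    (4 discretized vorticity levels), actions are preferred headings, and
    reward is the altitude gained per step. All dynamics below are invented
    stand-ins for the paper's flow environment."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((4, 3))                    # Q[state, action]
    alpha, gamma, eps = 0.1, 0.95, 0.1      # learning rate, discount, epsilon
    for _ in range(n_episodes):
        s = rng.integers(4)
        for _ in range(horizon):
            a = rng.integers(3) if rng.random() < eps else int(np.argmax(Q[s]))
            # invented environment: heading 'up' (a=0) gains altitude unless
            # the local vorticity cue is strong (s=3, a trapping region)
            r = (1.0 if a == 0 else 0.2) * (0.0 if (s == 3 and a == 0) else 1.0)
            s2 = rng.integers(4)            # next flow cue (random surrogate)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = train_swimmer()        # greedy policy: np.argmax(Q, axis=1)
```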
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
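A heavily simplified MAP sketch of the hierarchical prior, alternating Gaussian conditioning of each column with a Wishart-regularized precision update; this replaces the paper's variational-Bayes/GAMP machinery with plain coordinate updates, and all parameter choices are illustrative:

```python
import numpy as np

def bayes_complete(Y, mask, sigma2=0.01, nu0=None, n_iter=20):
    """Columns of M ~ N(0, Lambda^{-1}) with a shared precision Lambda drawn
    from a Wishart(I, nu0) hyperprior. Alternate: (1) complete each column by
    Gaussian conditioning on its observed entries; (2) MAP-update Lambda.
    Y: m x n data with zeros at unobserved entries; mask: boolean m x n."""
    m, n = Y.shape
    nu0 = nu0 or m + 2
    Lam = np.eye(m)
    M = Y.copy()
    for _ in range(n_iter):
        for j in range(n):                       # (1) complete each column
            D = np.diag(mask[:, j] / sigma2)     # observation precision
            M[:, j] = np.linalg.solve(Lam + D, Y[:, j] * mask[:, j] / sigma2)
        # (2) MAP of the shared precision under the Wishart(I, nu0) prior
        S = np.eye(m) + M @ M.T
        Lam = (nu0 + n - m - 1) * np.linalg.inv(S)
    return M

rng = np.random.default_rng(0)
M_true = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 30))  # rank 3
mask = rng.random((20, 30)) < 0.5
M_hat = bayes_complete(M_true * mask, mask)
```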
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution, under the assumption that pixels with similar color likely belong to similar depths. This assumption can induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which the estimated depth map itself is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
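A hedged sketch of the fixed-point idea, assuming Gaussian affinities computed from the current depth estimate and a 4-neighbour Jacobi-style update; the paper's exact GMM prior and optimization are not reproduced:

```python
import numpy as np

def depth_sr_fixed_point(d_low, scale=2, beta=5.0, sigma=0.1, n_iter=10):
    """Upsample, then repeatedly solve a weighted smoothing step whose
    affinities are Gaussian functions of the *current* depth estimate
    (depth-as-feature), rather than of a guide color image. Each outer
    iteration recomputes the weights, giving a fixed-point scheme for
    min sum (d - d0)^2 + beta * sum w(d) (d_i - d_j)^2."""
    d0 = np.kron(d_low, np.ones((scale, scale)))   # naive upsampled init
    d = d0.copy()
    for _ in range(n_iter):
        num, den = d0.copy(), np.ones_like(d)
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nb = np.roll(d, (dy, dx), axis=(0, 1))   # wrap-around borders (toy)
            w = np.exp(-(d - nb) ** 2 / (2 * sigma ** 2))  # Gaussian affinity
            num += beta * w * nb
            den += beta * w
        d = num / den
    return d

d_low = np.tile(np.array([[0.0, 1.0], [0.0, 1.0]]), (4, 4))  # toy depth edge
d_high = depth_sr_fixed_point(d_low)
```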
Moriarty, John; McVicar, Duncan; Higgins, Kathryn
2016-08-01
Peer effects in adolescent cannabis use are difficult to estimate, due in part to the lack of appropriate data on behaviour and social ties. This paper exploits survey data that have many desirable properties and have not previously been used for this purpose. The data set, collected from teenagers in three annual waves from 2002 to 2004, contains longitudinal information about friendship networks within schools (N = 5020). We exploit these data on network structure to estimate peer effects on adolescents from their nominated friends within school, using two alternative approaches to identification. First, we present a cross-sectional instrumental variable (IV) estimate of peer effects that exploits network structure at the second degree, i.e. using information on friends of friends who are not themselves ego's friends to instrument for the cannabis use of friends. Second, we present an individual fixed-effects estimate of peer effects using the full longitudinal structure of the data. Both innovations allow a greater degree of control for correlated effects than is commonly the case in the substance-use peer effects literature, improving our chances of obtaining estimates of peer effects that can plausibly be interpreted as causal. Both estimates suggest positive peer effects of non-trivial magnitude, although the IV estimate is imprecise. Furthermore, when we specify identical models with the behaviour and characteristics of randomly selected school peers in place of friends', we find effectively zero effect from these 'placebo' peers, lending credence to our main estimates. We conclude that cross-sectional data can be used to estimate plausible positive peer effects on cannabis use where network structure information is available and appropriately exploited. Copyright © 2016 Elsevier Ltd. All rights reserved.
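A minimal two-stage least-squares sketch of the first identification strategy, instrumenting the mean behaviour of friends with that of friends-of-friends; the data-generating process and all names are hypothetical:

```python
import numpy as np

def iv_peer_effect(y, x_friends, z_fof):
    """Two-stage least squares: instrument the (endogenous) mean cannabis
    use of ego's friends with the mean use of friends-of-friends who are
    not ego's own friends. y: ego outcome; x_friends: mean friend
    behaviour; z_fof: mean friend-of-friend behaviour (the instrument)."""
    Z = np.column_stack([np.ones_like(z_fof), z_fof])
    X = np.column_stack([np.ones_like(x_friends), x_friends])
    # first stage: project the endogenous regressor on the instrument
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # second stage: regress the outcome on the fitted values
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return beta[1]                       # peer-effect coefficient

rng = np.random.default_rng(0)
z = rng.standard_normal(500)                     # friends-of-friends' use
x = 0.6 * z + 0.5 * rng.standard_normal(500)     # friends' use, toy DGP
y = 0.3 * x + 0.5 * rng.standard_normal(500)     # ego's use
print(iv_peer_effect(y, x, z))                   # ~0.3 in this toy setup
```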
NASA Astrophysics Data System (ADS)
Pyak, P. E.; Usachenko, V. I.
2018-03-01
The phenomenon of pronounced peak structures in longitudinal momentum distributions, as well as a spike-like structure in the low-energy spectra of photoelectrons emitted from laser-irradiated Ar and Ne atoms in a single-ionization process, is theoretically studied in the tunneling and multiphoton regimes of ionization. The problem is addressed assuming only direct above-threshold ionization (ATI) as the physical mechanism underlying the phenomenon under consideration (viz. solely contributing to the observed photoelectron momentum distributions (PMDs)) and using the Coulomb-Volkov (CV) ansatz within the frame of the conventional strong-field approximation (SFA) applied in the length-gauge formulation. The developed CV-SFA approach also incorporates density functional theory, which is exploited for the numerical composition of the initial (laser-free) atomic state(s) constructed from atomic orbitals of Gaussian type. Our CV-SFA-based (and laser focal-volume averaged) calculation results reproduce well both the pronounced double-peak and/or ATI-like multi-peak structures experimentally observed in longitudinal PMDs under conditions of the tunneling and/or multiphoton regime, respectively. In addition, our CV-SFA results for the tunneling regime also suggest, and remarkably reproduce, a pronounced structure observed in relevant experiments as a 'spike-like' enhanced maximum arising in the low-energy region (around 1 eV) of photoelectron spectra. The latter consistency allows us to identify and interpret these results as the so-called low-energy structure (LES), since the phenomenon appears most prominent when the influence of the Coulomb potential on photoelectron continuum states is maximally taken into account in the calculations (viz. when the parameter Z in the CV functions is set equal to 1). Moreover, the calculated LES corresponds (viz. is established as closely related) to the mentioned double-peak structure arising in the low-momentum region (|p∥| ≤ 0.2 a.u.) of longitudinal PMDs calculated under the conditions of the tunneling regime. Thus, the phenomena under consideration can be well understood and adequately interpreted without the terms and/or concepts of various alternative strong-field approaches and models (such as the more sophisticated SFA-based 'rescattering' mechanism, extensively invoked and exploited nowadays), compared to which the currently applied CV-SFA model, through the same underlying physical mechanism of solely direct ATI, is additionally able to provide and reveal an intimate and transparent interrelation between the phenomena of the LES and the double-peak structure arising in PMDs observed in the tunneling regime.
Vantage Theory and Diachronic Semantics.
ERIC Educational Resources Information Center
Winters, Margaret E.
2002-01-01
Exploits the diachronic potential of vantage theory with psychologists' notion of framing. Compares vantage theory and cognitive grammar, based on the analysis of a particular problem in the history of French, the development of the negator "pas" (not) from a full noun. (Author/VWL)
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2011-01-01
One bottleneck in NMR structure determination lies in the laborious and time-consuming process of side-chain resonance and NOE assignments. Compared to the well-studied backbone resonance assignment problem, automated side-chain resonance and NOE assignments are relatively less explored. Most NOE assignment algorithms require nearly complete side-chain resonance assignments from a series of through-bond experiments such as HCCH-TOCSY or HCCCONH. Unfortunately, these TOCSY experiments perform poorly on large proteins. To overcome this deficiency, we present a novel algorithm, called NASCA (NOE Assignment and Side-Chain Assignment), to automate both side-chain resonance and NOE assignments and to perform high-resolution protein structure determination in the absence of any explicit through-bond experiment to facilitate side-chain resonance assignment, such as HCCH-TOCSY. After casting the assignment problem into a Markov Random Field (MRF), NASCA extends and applies combinatorial protein design algorithms to compute optimal assignments that best interpret the NMR data. The MRF captures the contact map information of the protein derived from NOESY spectra, exploits the backbone structural information determined by RDCs, and considers all possible side-chain rotamers. The complexity of the combinatorial search is reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is employed to find a set of optimal side-chain resonance assignments that best fit the NMR data. These side-chain resonance assignments are then used to resolve the NOE assignment ambiguity and compute high-resolution protein structures. Tests on five proteins show that NASCA assigns resonances for more than 90% of side-chain protons, and achieves about 80% correct assignments. The final structures computed using the NOE distance restraints assigned by NASCA have backbone RMSD 0.8 – 1.5 Å from the reference structures determined by traditional NMR approaches. PMID:21706248
Assessing exploitation experiences of girls and boys seen at a Child Advocacy Center
Edinburgh, Laurel; Pape-Blabolil, Julie; Harpin, Scott B.; Saewyc, Elizabeth
2015-01-01
The primary aim of this study was to describe the abuse experiences of sexually exploited runaway adolescents seen at a Child Advocacy Center (N = 62). We also sought to identify risk behaviors, attributes of resiliency, laboratory results for sexually transmitted infection (STI) screens, and genital injuries from colposcopic exams. We used retrospective mixed methods, with in-depth forensic interviews together with self-report survey responses, physical exams and chart data. Forensic interviews were analyzed using interpretive description analytical methods along domains of experience and meaning of sexual exploitation events. Univariate descriptive statistics characterized trauma responses and health risks. The first sexual exploitation events for many victims occurred as part of seemingly random encounters with procurers. Older adolescent or adult women recruited some youth working for a pimp. However, half the youth did not report a trafficker involved in setting up their exchange of sex for money, substances, or other types of consideration. 78% scored positive on the UCLA PTSD tool; 57% met DSM-IV criteria for problem substance use; 71% reported cutting behaviors, 75% suicidal ideation, and 50% had attempted suicide. Contrary to common depictions, youth may be solicited relatively quickly as runaways, yet exploitation is not always linked to having a pimp. Avoidant coping does not appear effective, as most patients exhibited significant symptoms of trauma. Awareness of variations in youth's sexual exploitation experiences may help researchers and clinicians understand potential differences in sequelae, design effective treatment plans, and develop community prevention programs. PMID:25982287
ERIC Educational Resources Information Center
Ahart, Gregory J.
This report contains the results of an extensive literature search by the General Accounting Office (GAO) on the subject of teenage prostitution and child pornography and federal, state and local efforts to deal with the problem. Also included are results of a survey of police departments and mayors' offices of the 22 largest U.S. cities and all…
Reasoning by analogy as an aid to heuristic theorem proving.
NASA Technical Reports Server (NTRS)
Kling, R. E.
1972-01-01
When heuristic problem-solving programs are faced with large data bases that contain numbers of facts far in excess of those needed to solve any particular problem, their performance rapidly deteriorates. In this paper, the correspondence between a new unsolved problem and a previously solved analogous problem is computed and invoked to tailor large data bases to manageable sizes. This paper outlines the design of an algorithm for generating and exploiting analogies between theorems posed to a resolution-logic system. These algorithms are believed to be the first computationally feasible development of reasoning by analogy to be applied to heuristic theorem proving.
NASA Astrophysics Data System (ADS)
Zeng, Shengda; Migórski, Stanisław
2018-03-01
In this paper a class of elliptic hemivariational inequalities involving a time-fractional integral operator is investigated. Exploiting the Rothe method and the surjectivity of multivalued pseudomonotone operators, a result on the existence of a solution to the problem is established. This abstract result is then applied to provide a theorem on the weak solvability of a fractional viscoelastic contact problem. The process is quasistatic and the constitutive relation is modeled with the fractional Kelvin-Voigt law. The friction and contact conditions are described by the Clarke generalized gradient of nonconvex and nonsmooth functionals. The variational formulation of this problem leads to a fractional hemivariational inequality.
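For concreteness, a standard form of the fractional Kelvin-Voigt constitutive law with a Caputo-type fractional derivative is (our notation, not necessarily the paper's operators):

```latex
% Fractional Kelvin--Voigt law: elastic response plus a fractional
% (Caputo-type) viscous response; A and B are elasticity/viscosity operators.
\sigma(t) = \mathcal{A}\,\varepsilon\bigl(u(t)\bigr)
          + \mathcal{B}\,D_t^{\alpha}\varepsilon\bigl(u(t)\bigr),
\qquad
D_t^{\alpha}f(t) = \frac{1}{\Gamma(1-\alpha)}
                   \int_0^t (t-s)^{-\alpha}\,f'(s)\,\mathrm{d}s,
\quad 0 < \alpha < 1 .
```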
A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem
NASA Astrophysics Data System (ADS)
Jäger, Gerold; Zhang, Weixiong
The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made of the HCP in undirected graphs, little is known about the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms, including an algorithm based on the award-winning Concorde TSP algorithm.
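The abstract does not spell out the algorithm, but the AP-based strategy it builds on is standard: the assignment problem is a relaxation of the DHCP whose optimal solution is a set of vertex-disjoint cycles, and if that solution happens to be a single cycle, it is a Hamiltonian cycle. A minimal sketch of that relaxation check, using SciPy's Hungarian-algorithm solver rather than the paper's SAT machinery:

    # Sketch: test the assignment-problem relaxation of the directed HCP.
    # The paper combines this idea with SAT; here we only check whether
    # the AP solution already forms one Hamiltonian cycle. Illustrative.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    INF = 1e9  # large cost for missing arcs and self-loops

    def ap_relaxation_cycle(adj):
        # adj[i][j] = True if arc i->j exists; build a 0/INF cost matrix.
        n = len(adj)
        cost = np.full((n, n), INF)
        for i in range(n):
            for j in range(n):
                if i != j and adj[i][j]:
                    cost[i, j] = 0.0
        rows, cols = linear_sum_assignment(cost)   # optimal assignment
        if cost[rows, cols].sum() >= INF:          # used a missing arc
            return False
        succ = dict(zip(rows, cols))               # successor of each vertex
        # Walk from vertex 0; a Hamiltonian cycle visits all n vertices.
        seen, v = set(), 0
        while v not in seen:
            seen.add(v)
            v = succ[v]
        return len(seen) == n                      # one cycle covering all?

    adj = [[False, True, False], [False, False, True], [True, False, False]]
    print(ap_relaxation_cycle(adj))  # True: 0 -> 1 -> 2 -> 0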
NASA Astrophysics Data System (ADS)
Kchikach, Azzouz; Andrieux, Pierre; Jaffal, Mohammed; Amrhar, Mostafa; Mchichi, Mohammed; Boya, Baadi; Amaghzaz, Mbarek; Veyrieras, Thierry; Iqizou, Khadija
2006-05-01
Exploitation of the phosphate layers of the Sidi Chennane deposit (Morocco) frequently runs into problems tied to the presence, within the phosphatic series, of sterile bodies known as derangements. Our study shows that these bodies, masked by the Quaternary cover, can be mapped using the Time-Domain ElectroMagnetic (TDEM) sounding method. The study is based on the acquisition and interpretation of a series of tests carried out above a derangement visible in an old exploitation trench, and on 2500 TDEM soundings carried out in an unexploited area of the deposit. The article analyzes these results and outlines the procedure for a possible large-scale geophysical survey. To cite this article: A. Kchikach et al., C. R. Geoscience 338 (2006).
Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani
2015-01-01
The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement. PMID:25879054
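For context, the core ABC mutation step perturbs one dimension of a food source relative to a random neighbor, and gbest-guided variants add a term pulling toward the best solution found so far to strengthen exploitation. The sketch below shows these two standard update forms only; JA-ABC5's exact modified equations are given in the paper.

    # Sketch of ABC-style candidate generation. The standard update
    # explores by moving relative to a random neighbor; the gbest term
    # (as in gbest-guided ABC variants) biases the move toward the
    # best-known solution. Illustrative, not JA-ABC5's exact equations.
    import random

    def abc_candidate(x_i, x_k, gbest, use_gbest=True):
        j = random.randrange(len(x_i))            # perturb one dimension
        phi = random.uniform(-1.0, 1.0)           # exploration coefficient
        v = list(x_i)
        v[j] = x_i[j] + phi * (x_i[j] - x_k[j])   # standard ABC move
        if use_gbest:
            psi = random.uniform(0.0, 1.5)        # exploitation coefficient
            v[j] += psi * (gbest[j] - x_i[j])     # pull toward best-so-far
        return v

    x_i, x_k, gbest = [0.5, -1.2], [0.1, 0.4], [0.0, 0.0]
    print(abc_candidate(x_i, x_k, gbest))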
Applied technology for mine waste water decontamination in the uranium ores extraction from Romania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bejenaru, C.; Filip, G.; Vacariu, V.T.
1996-12-31
The exploitation of uranium ores in Romania is carried out in underground mines. In all exploited uranium deposits, mine waste waters result and will continue to result after the closure of uranium ore extraction activity. These mine waters are radioactively contaminated with uranium and its decay products, posing a hazard both to groundwater and to the wider environment. This paper presents the results of the authors' research on eliminating uranium from waste waters, as well as the problems involved in operating the existing equipment and maintaining it in good working condition. The main waste water characteristics are discussed: suspended solids, uranium, radium, mineral salts, pH, etc. The most suitable way to eliminate uranium from mine waste waters is an ion exchange process based on ion exchangers in a fluidized bed. A flowsheet is given with the main resulting advantages.
Kalogerakis, Nicolas; Arff, Johanne; Banat, Ibrahim M; Broch, Ole Jacob; Daffonchio, Daniele; Edvardsen, Torgeir; Eguiraun, Harkaitz; Giuliano, Laura; Handå, Aleksander; López-de-Ipiña, Karmele; Marigomez, Ionan; Martinez, Iciar; Øie, Gunvor; Rojo, Fernando; Skjermo, Jorunn; Zanaroli, Giulio; Fava, Fabio
2015-01-25
In light of the Marine Strategy Framework Directive (MSFD) and the EU Thematic Strategy on the Sustainable Use of Natural Resources, environmental biotechnology could make significant contributions to the exploitation of marine resources and to addressing key marine environmental problems. In this paper 14 propositions are presented, focusing on (i) the contamination of the marine environment, and more particularly how to optimize the use of biotechnology-related tools and strategies for predicting and monitoring contamination and developing mitigation measures; (ii) the exploitation of marine biological and genetic resources to progress toward a sustainable, eco-compatible use of the maritime space (the issues are highly diversified and include, for example, waste treatment and recycling, anti-biofouling agents, and bio-plastics); and (iii) environmental/marine biotechnology as a driver for sustainable economic growth.
Exploration and Exploitation During Sequential Search
Dam, Gregory; Körding, Konrad
2012-01-01
When we learn how to throw darts we adjust how we throw based on where the darts stick. Much of skill learning is computationally similar in that we learn using feedback obtained after the completion of individual actions. We can formalize such tasks as a search problem; among the set of all possible actions, find the action that leads to the highest reward. In such cases our actions have two objectives: we want to best utilize what we already know (exploitation), but we also want to learn to be more successful in the future (exploration). Here we tested how participants learn movement trajectories where feedback is provided as a monetary reward that depends on the chosen trajectory. We mathematically derived the optimal search policy for our experiment using decision theory. The search behavior of participants is well predicted by an ideal searcher model that optimally combines exploration and exploitation. PMID:21585479
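The authors derive the optimal policy for their specific task; as a generic illustration of the exploration/exploitation trade-off in sequential search with scalar rewards, the sketch below uses an upper-confidence-bound rule over a discrete set of candidate actions. This is a standard bandit heuristic, not the paper's ideal-searcher model.

    # Generic exploration/exploitation sketch: UCB1 over discrete actions.
    # Each round picks the action maximizing mean reward plus an
    # exploration bonus. Not the paper's decision-theoretic policy.
    import math, random

    def ucb_search(reward_fn, n_actions, rounds):
        counts = [0] * n_actions
        means = [0.0] * n_actions
        for t in range(1, rounds + 1):
            if t <= n_actions:
                a = t - 1                     # try each action once first
            else:
                a = max(range(n_actions),
                        key=lambda i: means[i]
                        + math.sqrt(2 * math.log(t) / counts[i]))
            r = reward_fn(a)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]  # running mean
        return max(range(n_actions), key=lambda i: means[i])

    # Toy task: noisy rewards, action 2 is best on average.
    best = ucb_search(lambda a: random.gauss([0.1, 0.3, 0.8][a], 0.2), 3, 500)
    print("best action:", best)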
NASA Astrophysics Data System (ADS)
Handy, C. R.
2006-03-01
There has been renewed interest in the exploitation of Barta's configuration space theorem (BCST) (Barta 1937 C. R. Acad. Sci. Paris 204 472), which bounds the ground-state energy by $\inf_x \left( \frac{H\Phi(x)}{\Phi(x)} \right) \leq E_{\mathrm{gr}} \leq \sup_x \left( \frac{H\Phi(x)}{\Phi(x)} \right)$ for any $\Phi$ lying within the space $\mathcal{C}$ of positive, bounded, and sufficiently smooth functions. Mouchet's (Mouchet 2005 J. Phys. A: Math. Gen. 38 1039) BCST analysis is based on gradient optimization (GO). However, it overlooks significant difficulties: (i) the appearance of multiple extrema; (ii) the inefficiency of GO for stiff (singular perturbation/strong coupling) problems; (iii) the nonexistence of a systematic procedure for arbitrarily improving the bounds within $\mathcal{C}$. These deficiencies can be corrected by transforming BCST into an equivalent moments' representation and exploiting a generalization of the eigenvalue moment method (EMM) within the context of the well-known generalized eigenvalue problem (GEP), as developed here. EMM is an alternative eigenenergy bounding, variational procedure, overlooked by Mouchet, which also exploits the positivity of the desired physical solution. Furthermore, it is applicable to Hermitian and non-Hermitian systems with complex-number quantization parameters (Handy and Bessis 1985 Phys. Rev. Lett. 55 931, Handy et al 1988 Phys. Rev. Lett. 60 253, Handy 2001 J. Phys. A: Math. Gen. 34 5065, Handy et al 2002 J. Phys. A: Math. Gen. 35 6359). Our analysis exploits various quasi-convexity/concavity theorems common to the GEP representation. We outline the general theory and present some illustrative examples.
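As a quick worked illustration of the BCST bound (our example, not taken from the paper): for the harmonic oscillator $H = -\frac{d^2}{dx^2} + x^2$ with the trial function $\Phi(x) = e^{-x^2/2} \in \mathcal{C}$, one has $\Phi''(x) = (x^2 - 1)e^{-x^2/2}$, so

\[
\frac{H\Phi(x)}{\Phi(x)} = \frac{-\Phi''(x) + x^2\,\Phi(x)}{\Phi(x)}
= \frac{(1 - x^2)\Phi(x) + x^2\,\Phi(x)}{\Phi(x)} = 1 ,
\]

and the infimum and supremum coincide, pinning down the exact ground-state energy $E_{\mathrm{gr}} = 1$. A less fortunate choice of $\Phi$ would leave a nontrivial gap between the two bounds, which is what the systematic improvement procedures discussed above address.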
An effective PSO-based memetic algorithm for flow shop scheduling.
Liu, Bo; Wang, Ling; Jin, Yi-Hui
2007-02-01
This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective of minimizing the maximum completion time, a typical NP-hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, characterized by individual improvement, population cooperation, and competition, to effectively perform exploration, while utilizing several adaptive local searches to perform exploitation. First, to make PSO suitable for solving the PFSSP, a ranked-order-value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with a certain quality and diversity, the well-known Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the population initialization. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is applied, with a specified probability, to good particles selected by a roulette wheel mechanism. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA, and an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to use in the SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performance are also discussed.
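The ranked-order-value (ROV) idea mentioned above is simple to illustrate: a particle's continuous position vector is mapped to a job permutation by ranking its components. A minimal sketch of that decoding step, in our simplified reading (the paper's rule involves random-key and tie-handling details we omit):

    # Sketch of ranked-order-value (ROV) style decoding: map a particle's
    # continuous position vector to a job permutation by ranking its
    # components. Simplified illustration of the rule described above.

    def rov_decode(position):
        # The job with the smallest position value is scheduled first,
        # the next smallest second, and so on.
        return sorted(range(len(position)), key=lambda j: position[j])

    print(rov_decode([0.8, -1.3, 0.2, 2.5]))  # -> [1, 2, 0, 3]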
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that captures spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
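As a rough illustration of the decorrelating step only (not ICER-3D itself, whose decomposition structure, error containment, and context modeling are specific to that compressor), a three-dimensional wavelet decomposition of a hyperspectral cube can be computed with PyWavelets; the wavelet and level chosen here are arbitrary:

    # Rough illustration of 3D wavelet decorrelation of a hyperspectral
    # cube (bands x rows x cols). Not ICER-3D itself.
    import numpy as np
    import pywt

    cube = np.random.rand(16, 64, 64)        # stand-in hyperspectral data
    coeffs = pywt.wavedecn(cube, wavelet="haar", level=2)  # 3D DWT

    # For smooth, correlated data most energy sits in the coarse
    # approximation subband, which is what a coder exploits; the detail
    # coefficients are then near zero and cheap to encode.
    approx = coeffs[0]
    print(approx.shape)                      # coarse subband shape
    recon = pywt.waverecn(coeffs, wavelet="haar")
    print(np.allclose(recon, cube))          # transform is invertible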
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.
1993-01-01
The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.
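A simple way to see what "exploiting two-dimensional structural links" means in lossless coding: predict each pixel from already-coded neighbors and entropy-code the residuals, which are small wherever the image is locally smooth. The sketch below uses an average-of-left-and-upper predictor as a toy stand-in; the scheme described above handles structural links of any length and is considerably more general.

    # Toy 2D predictive decorrelation for lossless coding: predict each
    # pixel from its left and upper neighbors; the residuals would then
    # be entropy-coded. The prediction is exactly invertible in raster
    # order, so reconstruction is lossless. Simplified stand-in only.
    import numpy as np

    def encode(img):
        img = img.astype(np.int64)
        res = np.zeros_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                up = img[i - 1, j] if i > 0 else 0
                left = img[i, j - 1] if j > 0 else 0
                res[i, j] = img[i, j] - (up + left) // 2  # residual
        return res

    def decode(res):
        out = np.zeros_like(res)
        h, w = res.shape
        for i in range(h):
            for j in range(w):
                up = out[i - 1, j] if i > 0 else 0
                left = out[i, j - 1] if j > 0 else 0
                out[i, j] = res[i, j] + (up + left) // 2  # invert prediction
        return out

    img = np.arange(16).reshape(4, 4)
    assert np.array_equal(decode(encode(img)), img)  # lossless round trip
    print(encode(img))  # residuals are small on smooth images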
Exploiting Structured Dependencies in the Design of Adaptive Algorithms for Underwater Communication (Award #3)
2015-09-30
[Fragment of a project report; the recoverable citations are: J. Preisig, "Underwater Acoustic Communications: Enabling the Next Generation...," WUWNet'14, Rome, Italy, Nov. 12-14, 2014; and M. Pajovic and J. Preisig, "Performance Analytics and Optimal Design of Multichannel Equalizers for Underwater Acoustic Communications," to appear in IEEE Journal of Oceanic Engineering.]
DOT National Transportation Integrated Search
2015-11-01
One of the most efficient ways to solve the damage detection problem using the statistical pattern recognition approach is that of exploiting the methods of outlier analysis. Cast within the pattern recognition framework, damage detection assesse...
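Although the abstract is truncated, the outlier-analysis approach it names is standard: features measured on the test structure are compared against a baseline (undamaged) distribution, and damage is flagged when the Mahalanobis squared distance exceeds a threshold. A minimal sketch under those standard assumptions, not the report's specific method:

    # Sketch of outlier analysis for damage detection: flag a feature
    # vector as damage if its Mahalanobis squared distance from the
    # undamaged baseline distribution exceeds a threshold. Standard
    # approach; the truncated abstract does not give the report's details.
    import numpy as np

    def mahalanobis_sq(x, mean, cov_inv):
        d = x - mean
        return float(d @ cov_inv @ d)

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, size=(500, 3))  # undamaged features
    mean = baseline.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

    # Threshold taken from the baseline itself, e.g. its 99th percentile.
    d2 = [mahalanobis_sq(x, mean, cov_inv) for x in baseline]
    threshold = np.percentile(d2, 99)

    candidate = np.array([4.0, -3.5, 5.0])          # suspiciously extreme
    print(mahalanobis_sq(candidate, mean, cov_inv) > threshold)  # damage?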