Contemporary Religious Conflicts and Religious Education in the Republic of Korea
ERIC Educational Resources Information Center
Kim, Chongsuh
2007-01-01
The Republic of (South) Korea is a multi-religious society. Naturally, large- or small-scale conflicts arise between religious groups. Moreover, inter-religious troubles related to the educational system, such as educational ideologies, textbook content and forced chapel attendance, have often caused social conflicts. Most of the problems derive…
Poincaré-Treshchev Mechanism in Multi-scale, Nearly Integrable Hamiltonian Systems
NASA Astrophysics Data System (ADS)
Xu, Lu; Li, Yong; Yi, Yingfei
2018-02-01
This paper is a continuation of our work (Xu et al. in Ann Henri Poincaré 18(1):53-83, 2017) concerning the persistence of lower-dimensional tori on resonant surfaces of a multi-scale, nearly integrable Hamiltonian system. This type of system, being properly degenerate, arises naturally in planar and spatial lunar problems of celestial mechanics, for which the persistence problem ties closely to the stability of the systems. For such a system, under certain non-degenerate conditions of Rüssmann type, the majority persistence of non-resonant tori and the existence of a nearly full measure set of Poincaré non-degenerate, lower-dimensional, quasi-periodic invariant tori on a resonant surface corresponding to the highest order of scale is proved in Han et al. (Ann Henri Poincaré 10(8):1419-1436, 2010) and Xu et al. (2017), respectively. In this work, we consider a resonant surface corresponding to any intermediate order of scale and show the existence of a nearly full measure set of Poincaré non-degenerate, lower-dimensional, quasi-periodic invariant tori on the resonant surface. The proof is based on a normal form reduction which consists of a finite number of KAM iteration steps to push the non-integrable perturbation to a sufficiently high order, and of the splitting of resonant tori on the resonant surface according to the Poincaré-Treshchev mechanism.
Naturalness from a composite top?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Aaron; Zhao, Yue
2017-01-12
Here, we consider a theory with composite top quarks but an elementary Higgs boson. The hierarchy problem can be solved by supplementing TeV-scale top compositeness with either supersymmetry or Higgs compositeness appearing at the multi-TeV scale. Furthermore, the Higgs boson couples to uncolored partons within the top quark. We also study how this approach can give rise to a novel screening effect that suppresses production of the colored top partners at the LHC. Strong constraints arise from Z → b̄b, as well as potentially from flavor physics. Independent of flavor considerations, current constraints imply a compositeness scale ≳ TeV; this implies that the model is likely tuned at the percent level. Four-top-quark production at the LHC is a smoking-gun probe of this scenario. New CP violation in D meson mixing is also possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Multiple utility constrained multi-objective programs using Bayesian theory
NASA Astrophysics Data System (ADS)
Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed
2018-03-01
A utility function is an important tool for representing a decision maker's (DM's) preferences. We adjoin utility functions to multi-objective optimization problems. In current studies, usually one utility function is used for each objective function; however, situations may arise in which a single goal has multiple utility functions. Here, we consider a constrained multi-objective problem with each objective having multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of variables are worked through to demonstrate the usefulness of the proposed model.
NASA Astrophysics Data System (ADS)
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed in an efficient way from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors is equal to the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators is then performed by reusing this factorized matrix at a reduced computational time.
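A minimal sketch (generic Python/NumPy/SciPy setting, not the authors' finite-element code) of the idea behind the tangent computation: factorize the converged microscopic stiffness once, then back-substitute with one right-hand-side column per macroscopic kinematic variable. The matrix K and the columns F below are placeholder stand-ins.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)

# Stand-in for the microscopic tangent stiffness at the converged solution
# (made symmetric positive definite so the factorization is well posed).
n_dof = 50
A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)

# One right-hand-side column per macroscopic kinematic variable
# (e.g. the components of the macroscopic deformation gradient).
n_macro = 9
F = rng.standard_normal((n_dof, n_macro))

# Factorize once (this factorization already exists after the Newton solve) ...
lu, piv = lu_factor(K)

# ... and reuse it for all right-hand sides: the cost of the tangent
# computation is only n_macro back-substitutions, not n_macro new solves.
S = lu_solve((lu, piv), F)     # sensitivities of the microscopic solution
C_macro = F.T @ S              # schematic condensed macroscopic operator

print(C_macro.shape)           # (9, 9)
```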
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Solving multi-leader-common-follower games.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leyffer, S.; Munson, T.; Mathematics and Computer Science
Multi-leader-common-follower games arise when modelling two or more competitive firms, the leaders, that commit to their decisions prior to another group of competitive firms, the followers, that react to the decisions made by the leaders. These problems lead in a natural way to equilibrium problems with equilibrium constraints (EPECs). We develop a characterization of the solution sets for these problems and examine a variety of nonlinear optimization and nonlinear complementarity formulations of EPECs. We distinguish two broad cases: problems where the leaders can cost-differentiate and problems with price-consistent followers. We demonstrate the practical viability of our approach by solving a range of medium-sized test problems.
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
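A small illustration (hypothetical one-dimensional Python/NumPy example, not the MCAO tomography code) of the idea behind a Fourier-domain preconditioned conjugate gradient solver: the preconditioner is a circulant approximation of the system matrix, so applying its inverse costs one FFT/IFFT pair per iteration.

```python
import numpy as np

n = 256
# System matrix: shifted 1-D Laplacian with Dirichlet ends (SPD, nearly Toeplitz).
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Circulant (periodic) approximation of A, diagonalized by the FFT:
# its eigenvalues are the FFT of its first column.
c = np.zeros(n); c[0], c[1], c[-1] = 4.0, -1.0, -1.0
eigs = np.real(np.fft.fft(c))          # all > 0, so the preconditioner is SPD

def apply_Minv(r):
    """Fourier-domain preconditioner: solve the circulant system via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(r) / eigs))

def fd_pcg(b, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

b = np.random.default_rng(1).standard_normal(n)
x = fd_pcg(b)
print(np.linalg.norm(A @ x - b))       # small residual after a few iterations
```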
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is with ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
Singh, Brajesh K; Srivastava, Vineet K
2015-04-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
Multi-Scale/Multi-Functional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A multi-level (multi-scale/multi-functional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.
Innovative architectures for dense multi-microprocessor computers
NASA Technical Reports Server (NTRS)
Larson, Robert E.
1989-01-01
The purpose is to summarize a Phase 1 SBIR project performed for the NASA/Langley Computational Structural Mechanics Group. The project was performed from February to August 1987. The main objectives of the project were to: (1) expand upon previous research into the application of chordal ring architectures to the general problem of designing multi-microcomputer architectures, (2) attempt to identify a family of chordal rings such that each chordal ring can be simply expanded to produce the next member of the family, (3) perform a preliminary, high-level design of an expandable multi-microprocessor computer based upon chordal rings, (4) analyze the potential use of chordal ring based multi-microprocessors for sparse matrix problems and other applications arising in computational structural mechanics.
The Effect of Normalization in Violence Video Classification Performance
NASA Astrophysics Data System (ADS)
Ali, Ashikin; Senan, Norhalina
2017-08-01
Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many types of problem statement, especially in video classification. Challenging problems arise in video classification because of the heterogeneous content, large variations in video quality and the complex semantic meanings of the concepts involved. A thorough pre-processing stage that includes normalization therefore aids the robustness of classification performance. Normalization scales all the numeric variables into a certain range to make them more meaningful for the later phases of the available data mining techniques. Thus, this paper examines the effect of two normalization techniques, namely Min-Max normalization and Z-score normalization, on the classification rate of violence video classification using a Multi-layer perceptron (MLP) classifier. With Min-Max normalization to the range [0,1], the accuracy is almost 98%; with Min-Max normalization to the range [-1,1], the accuracy is 59%; and with Z-score normalization, the accuracy is 50%.
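For reference, the two transformations compared in the study are standard; a short NumPy sketch (illustrative, not the authors' code) is:

```python
import numpy as np

def min_max(x, new_min=0.0, new_max=1.0):
    """Min-Max normalization: linearly rescale x into [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    return new_min + (x - x.min()) * (new_max - new_min) / (x.max() - x.min())

def z_score(x):
    """Z-score normalization: zero mean, unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

features = np.array([3.0, 10.0, 25.0, 7.0, 14.0])
print(min_max(features))            # values in [0, 1]
print(min_max(features, -1.0, 1.0)) # values in [-1, 1]
print(z_score(features))            # mean 0, std 1
```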
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors that consequently leads to a “Big Data” problem. The main objective of this study is to explore efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery with potential to apply the platform for a larger scale (e.g. country level) and multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (~28,100 km² and ~1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address efficiency of the GEE platform to effectively execute complex workflows of satellite data processing required with large scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed support vector machine (SVM), decision tree and random forest classifiers available in GEE.
Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling
NASA Astrophysics Data System (ADS)
Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.
2014-12-01
Microtomography provides detailed 3D internal structures of rocks in micro- to tens of nano-meter resolution and is quickly turning into a new technology for studying petrophysical properties of materials. An important step is the upscaling of these properties as micron or sub-micron resolution can only be done on the sample-scale of millimeters or even less than a millimeter. We present here a recently developed computational workflow for the analysis of microstructures including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at micro to nano-scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples, biological and food science materials. We have also applied the technique on high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big data problem of analyzing petrophysical properties and its subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets - our current capability. 2) Computational fluid dynamics simulations at pore-scale for permeability estimation - methods, computing cost and accuracy. 3) Solid mechanical computations at pore-scale for estimating elasto-plastic properties - computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws - models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big data tool for multi-scale scientific and engineering problems.
The problem with multiple robots
NASA Technical Reports Server (NTRS)
Huber, Marcus J.; Kenny, Patrick G.
1994-01-01
The issues that can arise in research associated with multiple, robotic agents are discussed. Two particular multi-robot projects are presented as examples. This paper was written in the hope that it might ease the transition from single to multiple robot research.
Multi-task Gaussian process for imputing missing data in multi-trait and multi-environment trials.
Hori, Tomoaki; Montcho, David; Agbangla, Clement; Ebana, Kaworu; Futakuchi, Koichi; Iwata, Hiroyoshi
2016-11-01
A method based on a multi-task Gaussian process using self-measuring similarity gave increased accuracy for imputing missing phenotypic data in multi-trait and multi-environment trials. Multi-environmental trial (MET) data often encounter the problem of missing data. Accurate imputation of missing data makes subsequent analysis more effective and the results easier to understand. Moreover, accurate imputation may help to reduce the cost of phenotyping for thinned-out lines tested in METs. METs are generally performed for multiple traits that are correlated to each other. Correlation among traits can be useful information for imputation, but single-trait-based methods cannot utilize information shared by traits that are correlated. In this paper, we propose imputation methods based on a multi-task Gaussian process (MTGP) using self-measuring similarity kernels reflecting relationships among traits, genotypes, and environments. This framework allows us to use genetic correlation among multi-trait multi-environment data and also to combine MET data and marker genotype data. We compared the accuracy of three MTGP methods and iterative regularized PCA using rice MET data. Two scenarios for the generation of missing data at various missing rates were considered. The MTGP achieved better imputation accuracy than regularized PCA, especially at high missing rates. Under the 'uniform' scenario, in which missing data arise randomly, inclusion of marker genotype data in the imputation increased the imputation accuracy at high missing rates. Under the 'fiber' scenario, in which missing data arise in all traits for some combinations of genotypes and environments, the inclusion of marker genotype data decreased the imputation accuracy for most traits while increasing the accuracy remarkably in a few traits. The proposed methods will be useful for solving the missing data problem in MET data.
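A toy sketch (hypothetical NumPy example; the kernels below are simple stand-ins, not the self-measuring similarity kernels of the paper) of Gaussian-process imputation in which the covariance over (genotype, trait) combinations is a Kronecker product of a genotype kernel and a trait kernel, and missing phenotypes are filled in with the posterior mean given the observed ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_geno, n_trait = 30, 4

# Stand-in kernels: an RBF kernel on random marker data for genotypes,
# and a random correlation matrix for traits.
M = rng.standard_normal((n_geno, 10))                        # marker matrix
d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(-1)
K_g = np.exp(-d2 / d2.mean())
B = rng.standard_normal((n_trait, n_trait))
K_t = B @ B.T + n_trait * np.eye(n_trait)
K_t /= np.sqrt(np.outer(np.diag(K_t), np.diag(K_t)))         # correlation form

K = np.kron(K_g, K_t)                                         # joint covariance
sigma2 = 0.1

# Simulate phenotypes from the model and hide 30% of them at random.
y = rng.multivariate_normal(np.zeros(n_geno * n_trait),
                            K + sigma2 * np.eye(K.shape[0]))
obs = rng.random(y.size) > 0.3
mis = ~obs

# GP posterior mean of the missing entries given the observed ones.
K_oo = K[np.ix_(obs, obs)] + sigma2 * np.eye(obs.sum())
K_mo = K[np.ix_(mis, obs)]
y_hat = K_mo @ np.linalg.solve(K_oo, y[obs])

print(np.corrcoef(y_hat, y[mis])[0, 1])   # imputation accuracy on this toy data
```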
A Jubilant Connection: General Jubal Early's Troops and the Golden Ratio
ERIC Educational Resources Information Center
Bolte, Linda A.; Noon, Tim R., Jr.
2012-01-01
The golden ratio, one of the most beautiful numbers in all of mathematics, arises in some surprising places. At first glance, we might expect that a General checking his troops' progress would be nothing more than a basic distance-rate-time problem. However, further exploration reveals a multi-faceted problem, one in which the ratio of rates…
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
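A compact sketch (illustrative NumPy code under a simple two-scale signal model; it shows the block-wise low-rank building blocks of the signal model rather than the convex recovery program of the paper):

```python
import numpy as np

def blockwise_lowrank(X, block, rank):
    """Rank-`rank` approximation of each `block` x `block` tile of X."""
    n = X.shape[0]
    Y = np.zeros_like(X)
    for i in range(0, n, block):
        for j in range(0, n, block):
            tile = X[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            s[rank:] = 0.0
            Y[i:i + block, j:j + block] = (U * s) @ Vt
    return Y

rng = np.random.default_rng(0)
n = 64
# Two-scale synthetic data: a globally rank-1 component plus
# locally (8x8 block-wise) rank-1 components.
global_part = np.outer(rng.standard_normal(n), rng.standard_normal(n))
local_part = np.zeros((n, n))
for i in range(0, n, 8):
    for j in range(0, n, 8):
        local_part[i:i + 8, j:j + 8] = np.outer(rng.standard_normal(8),
                                                rng.standard_normal(8))
X = global_part + local_part

# Approximating X at each scale captures the corresponding component.
coarse = blockwise_lowrank(X, block=n, rank=1)    # scale of the whole matrix
fine = blockwise_lowrank(X - coarse, block=8, rank=1)
print(np.linalg.norm(X - coarse - fine) / np.linalg.norm(X))  # small residual
```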
NASA Astrophysics Data System (ADS)
Hossenfelder, Sabine
2014-07-01
The idea that Lorentz-symmetry in momentum space could be modified but still remain observer-independent has received quite some attention in recent years. This modified Lorentz-symmetry, which has been argued to arise in Loop Quantum Gravity, is being used as a phenomenological model to test possibly observable effects of quantum gravity. The most pressing problem in these models is the treatment of multi-particle states, known as the 'soccer-ball problem'. This article briefly reviews the problem and the status of existing solution attempts.
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhakal, Tilak Raj
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
NASA Astrophysics Data System (ADS)
Tejedor, A.; Longjas, A.; Foufoula-Georgiou, E.
2017-12-01
Previous work [e.g. Tejedor et al., 2016 - GRL] has demonstrated the potential of using graph theory to study key properties of the structure and dynamics of river delta channel networks. Although the distribution of fluxes in a river delta is mostly driven by the connectivity of its channel network, a significant part of the fluxes might also arise from connectivity between the channels and islands due to overland flow and seepage. This channel-island-subsurface interaction creates connectivity pathways which facilitate or inhibit transport depending on their degree of coupling. The question we pose here is how to collectively study system connectivity that emerges from the aggregated action of different processes (different in nature, intensity and time scales). Single-layer graphs such as those introduced for delta channel networks are inadequate as they lack the ability to represent coupled processes, and neglecting across-process interactions can lead to mis-representation of the overall system dynamics. We present here a framework that generalizes the traditional representation of networks (single-layer graphs) to the so-called multi-layer networks or multiplex. A multi-layer network conceptualizes the overall connectivity arising from different processes as distinct graphs (layers), while allowing at the same time to represent interactions between layers by introducing interlayer links (across-process interactions). We illustrate this framework using a study of the joint connectivity that arises from the coupling of the confined flow on the channel network and the overland flow on islands, on a prototype delta. We show the potential of the multi-layer framework to answer quantitatively questions related to the characteristic time scales to steady-state transport in the system as a whole when different levels of channel-island coupling are modulated by different magnitudes of discharge rates.
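A minimal sketch (hypothetical Python/NetworkX example; node names and couplings are made up for illustration) of a two-layer multiplex in which one layer carries channel connectivity, the other carries island/overland connectivity, and interlayer links encode channel-island exchange:

```python
import networkx as nx
import numpy as np

G = nx.Graph()

# Layer 1: channel network (nodes are ("chan", junction id)).
channel_edges = [(1, 2), (2, 3), (2, 4), (4, 5)]
G.add_edges_from((("chan", u), ("chan", v)) for u, v in channel_edges)

# Layer 2: island / overland-flow connectivity (nodes are ("isl", island id)).
island_edges = [("A", "B"), ("B", "C")]
G.add_edges_from((("isl", u), ("isl", v)) for u, v in island_edges)

# Interlayer links: which islands exchange water with which channel junctions.
coupling = [(2, "A"), (3, "B"), (5, "C")]
G.add_edges_from((("chan", u), ("isl", v)) for u, v in coupling)

# The supra-adjacency matrix collects intra- and inter-layer connectivity;
# its Laplacian spectrum controls mixing/spreading time scales of the
# coupled system (larger algebraic connectivity -> faster approach to steady state).
nodes = sorted(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodes)
lap = np.diag(A.sum(1)) - A
fiedler = np.sort(np.linalg.eigvalsh(lap))[1]
print(f"algebraic connectivity of the coupled system: {fiedler:.3f}")
```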
PREFACE: XXIst International Symposium on the Jahn-Teller Effect 2012
NASA Astrophysics Data System (ADS)
Koizumi, Hiroyasu
2013-04-01
(The PDF contains the full conference program, the list of sponsors and the conference poster.) The 21st International Symposium on the Jahn-Teller effect was held at the University of Tsukuba, Japan, from 26-31 August 2012. People from 23 different countries participated and the number of registered participants was 118. In this symposium, the phrase 'Jahn-Teller effect' was taken to have a rather broad meaning. We discussed the Jahn-Teller and pseudo Jahn-Teller distortions. We also discussed general vibronic problems, and the problems associated with the conical intersections of the potential energy surfaces. As is indicated in the subtitle of the present symposium, 'Physics and Chemistry of Symmetry Breaking', a number of different topics concerning symmetry breaking were also extensively discussed. In particular, we had many discussions on magnetism, ferroelectricity, and superconductivity. A subtle but important problem that was dealt with was the appearance of multi-valuedness in the use of multi-component wave functions. In the Jahn-Teller problems, we almost always use the multi-component wave functions, thus, the knowledge of the proper handling of multi-valuedness is very important. Digital computers are not good at dealing with multi-valuedness, but we need to somehow handle it in our calculations. A very well known example of successful handling is found in the problem of the molecular system with the conical intersection: we cannot obtain the solution that satisfies the single-valuedness of wave functions (SVWF) just using the potential energy surface generated by a package program, and solving the Schrödinger equation with the quantum Hamiltonian constructed from the classical counterpart by replacing the classical variables with the corresponding operators; however, if a gauge potential is included and the double-valuedness of the electronic wave functions around the conical intersections is taken into account, the solution that satisfies the SVWF is obtained. A related problem also arises when dealing with the so-called adiabatic-diabatic transformation (ADT) that removes coupling terms between different Born-Oppenheimer electronic states. It is known that an exact ADT does not exist in general, however, digital computers do this impossible task erroneously if we just plug in numbers. The results obtained may be good in practice; however, we need to be aware that such calculations may miss some important details. I asked Professor Mead to write a note on this matter since there is still confusion in the treatment of the ADT. The proper handling on the ADT may be a topic in the next Jahn-Teller symposium. Although more than a quarter of a century has passed since its discovery, the mechanism of cuprate superconductivity is still actively discussed. In the cuprate, the multi-valuedness problem arises when the conduction electrons create spin-vortices and the twisting of the spin basis occurs. Since a number of experiments and theories indicate the presence of spin-vortices in the cuprate, a proper handling of the multi-valuedness arising from the spin-degree-of-freedom will be important. It has been argued that such multi-valuedness induces a vector potential that generates the persistent current. As the papers in this proceedings indicate, the Jahn-Teller effects are ubiquitous in physics and chemistry. The ideas and methodologies developed in this community have very wide applicability. 
I believe that this community will continue to contribute to the advancement of science in a fundamental way. Hiroyasu Koizumi Tsukuba, February 2013 Conference photograph
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantics, which will largely degrade the classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is applied to conduct the medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Gradient design for liquid chromatography using multi-scale optimization.
López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C
2018-01-26
In reversed phase-liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or other properties). A correct quantification of chromatographic peaks in liquid chromatography requires well resolved signals in a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution, while fulfilling some restrictions: all peaks should be eluted before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem in a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique of fast generation of smooth curves, compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives.
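A schematic sketch (hypothetical NumPy/SciPy code, with made-up numbers of variables, solvent limits and time points) of how N_κ ≈ 2^κ non-negative increments can define a non-decreasing cubic-spline gradient that is then sampled as a polygonal of M points:

```python
import numpy as np
from scipy.interpolate import BSpline

def gradient_profile(increments, t_final=30.0, phi0=0.05, phi_max=0.90, M=60):
    """Non-decreasing gradient built from a clamped cubic B-spline."""
    c = phi0 + np.cumsum(np.abs(increments))                   # monotone coefficients
    c = phi0 + (c - c[0]) * (phi_max - phi0) / (c[-1] - c[0])  # rescale to solvent range
    k = 3
    n = len(c)
    # Clamped uniform knot vector: len(t) = n + k + 1.
    t = np.concatenate((np.zeros(k + 1),
                        np.linspace(0.0, 1.0, n - k + 1)[1:-1],
                        np.ones(k + 1)))
    spline = BSpline(t, c, k)
    times = np.linspace(0.0, t_final, M)
    return times, spline(times / t_final)        # polygonal approximation of the spline

kappa = 3                                        # scale index
rng = np.random.default_rng(0)
times, phi = gradient_profile(rng.random(2 ** kappa))   # N_kappa = 8 variables
print(np.all(np.diff(phi) >= -1e-12))            # True: the gradient never decreases
```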
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve the computational efficiency and to expand the problem scale has great significance for the research of the microstructure of the material. A high performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multi-GPU is used to implement quantitative numerical simulations of the three-dimensional phase-field model in a binary alloy under the condition of multi-physical-process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the use of the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase-field model, which is 13 times that of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization has better performance, which is 1.7 times that of the basic multi-GPU model when 21 GPUs are used.
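A schematic mpi4py sketch (assuming mpi4py is installed and the script is launched with mpirun; the simple 1-D stencil update stands in for the GPU phase-field kernel) of the non-blocking halo exchange that lets interior computation overlap with communication:

```python
# Run with e.g.: mpirun -n 4 python halo_overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n_local = 1024
u = np.random.default_rng(rank).random(n_local + 2)   # 1 ghost cell on each side
u_new = np.empty_like(u)

for step in range(100):
    # Post non-blocking halo exchange for the ghost cells ...
    reqs = [comm.Isend(u[1:2],   dest=left,   tag=0),
            comm.Isend(u[-2:-1], dest=right,  tag=1),
            comm.Irecv(u[0:1],   source=left, tag=1),
            comm.Irecv(u[-1:],   source=right, tag=0)]

    # ... and overlap it with the interior update (the GPU kernel in the real code).
    u_new[2:-2] = u[2:-2] + 0.1 * (u[1:-3] - 2.0 * u[2:-2] + u[3:-1])

    # Finish communication, then update the two boundary-adjacent cells.
    MPI.Request.Waitall(reqs)
    u_new[1] = u[1] + 0.1 * (u[0] - 2.0 * u[1] + u[2])
    u_new[-2] = u[-2] + 0.1 * (u[-3] - 2.0 * u[-2] + u[-1])
    u, u_new = u_new, u

print(f"rank {rank}: done, mean = {u[1:-1].mean():.4f}")
```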
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, Ya-Ling; Kang, Qinjun
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. Highlights: • A coupled simulation strategy for simulating multi-scale phenomena is developed. • Finite volume method and lattice Boltzmann method are coupled. • A reconstruction operator is derived to transfer information at the sub-domain interface. • Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. • Techniques to save computational resources and improve the efficiency are discussed.
Hemmelmayr, Vera C.; Cordeau, Jean-François; Crainic, Teodor Gabriel
2012-01-01
In this paper, we propose an adaptive large neighborhood search heuristic for the Two-Echelon Vehicle Routing Problem (2E-VRP) and the Location Routing Problem (LRP). The 2E-VRP arises in two-level transportation systems such as those encountered in the context of city logistics. In such systems, freight arrives at a major terminal and is shipped through intermediate satellite facilities to the final customers. The LRP can be seen as a special case of the 2E-VRP in which vehicle routing is performed only at the second level. We have developed new neighborhood search operators by exploiting the structure of the two problem classes considered and have also adapted existing operators from the literature. The operators are used in a hierarchical scheme reflecting the multi-level nature of the problem. Computational experiments conducted on several sets of instances from the literature show that our algorithm outperforms existing solution methods for the 2E-VRP and achieves excellent results on the LRP. PMID:23483764
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
NASA Astrophysics Data System (ADS)
Diamantopoulos, Theodore; Rowe, Kristopher; Diamessis, Peter
2017-11-01
The Collocation Penalty Method (CPM) solves a PDE on the interior of a domain, while weakly enforcing boundary conditions at domain edges via penalty terms, and naturally lends itself to high-order and multi-domain discretization. Such spectral multi-domain penalty methods (SMPM) have been used to solve the Navier-Stokes equations. Bounds for penalty coefficients are typically derived using the energy method to guarantee stability for time-dependent problems. The choice of collocation points and penalty parameter can greatly affect the conditioning and accuracy of a solution. Effort has been made in recent years to relate various high-order methods on multiple elements or domains under the umbrella of the Correction Procedure via Reconstruction (CPR). Most applications of CPR have focused on solving the compressible Navier-Stokes equations using explicit time-stepping procedures. A particularly important aspect which is still missing in the context of the SMPM is a study of the Helmholtz equation arising in many popular time-splitting schemes for the incompressible Navier-Stokes equations. Stability and convergence results for the SMPM for the Helmholtz equation will be presented. Emphasis will be placed on the efficiency and accuracy of high-order methods.
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
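As a concrete illustration of the nested structure (a toy Python example with a made-up quadratic lower level whose optimum has a closed form, not the trust-region method discussed in the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def follower_optimum(x):
    """Lower level: y*(x) = argmin_y (y - x)**2 + y  =>  y*(x) = x - 1/2."""
    return x - 0.5

def leader_objective(x):
    """Upper-level objective evaluated at the follower's best response."""
    y = follower_optimum(x)
    return (x - 2.0) ** 2 + (y - 1.0) ** 2

res = minimize_scalar(leader_objective)
x_opt = res.x
print(x_opt, follower_optimum(x_opt))   # approx. x* = 1.75, y* = 1.25
```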
Cotter, C J; Gottwald, G A; Holm, D D
2017-09-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow.
Efficient multitasking of Choleski matrix factorization on CRAY supercomputers
NASA Technical Reports Server (NTRS)
Overman, Andrea L.; Poole, Eugene L.
1991-01-01
A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
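A brief illustration (generic SciPy sketch, not the CRAY variable-band code) of the underlying idea: store only the band of the symmetric positive-definite matrix, factor it once with a banded Cholesky routine, and back-substitute for the load vectors.

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

n = 8
# Tridiagonal SPD stiffness-like matrix in upper banded storage:
# row 0 holds the superdiagonal, row 1 the main diagonal.
ab = np.zeros((2, n))
ab[1, :] = 2.0
ab[0, 1:] = -1.0

cb = cholesky_banded(ab)            # banded Cholesky factor, computed once
b = np.ones(n)                      # a load case
x = cho_solve_banded((cb, False), b)

# Check against the dense solve.
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
print(np.allclose(A @ x, b))        # True
```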
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
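A short sketch (hypothetical Python example; the sample count and surface assignment are made up) of how mutually orthogonal square-wave inputs can be generated from rows of a Hadamard matrix, so that simultaneous excitations of several control surfaces remain de-correlated:

```python
import numpy as np
from scipy.linalg import hadamard

n_surfaces = 4          # e.g. four control-surface segments (illustrative)
n_samples = 64          # samples per maneuver

# Rows of a Hadamard matrix are mutually orthogonal +/-1 square-wave-like signals;
# the constant row 0 is skipped.
H = hadamard(8)
inputs = np.repeat(H[1:1 + n_surfaces], n_samples // 8, axis=1).astype(float)

# Orthogonality: zero cross-correlation between any two surface inputs,
# which keeps the regressors de-correlated in the parameter-estimation problem.
print(inputs @ inputs.T)   # diagonal matrix: pairwise inner products are zero
```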
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to follow the target object across consecutive video frames. In recent years, methods based on the kernel correlation filter (KCF) have become a research hotspot. However, the algorithm still has some problems, such as fast jitter of the video capture equipment and changes of the tracking scale. In order to improve the handling of scale transformation and the feature description, this paper presents an innovative algorithm based on multi-feature fusion and multi-scale transform. The experimental results show that our method solves the problem of updating the target model when the target is occluded or its scale transforms. The accuracy (OPE) is 77.0% and 75.4%, and the success rate is 69.7% and 66.4%, on the VOT and OTB datasets, respectively. Compared with the best of the existing tracking algorithms, the accuracy of our algorithm is improved by 6.7% and 6.3%, respectively, and the success rates are improved by 13.7% and 14.2%, respectively.
A Nonparametric Framework for Comparing Trends and Gaps across Tests
ERIC Educational Resources Information Center
Ho, Andrew Dean
2009-01-01
Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then…
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding the computational time and number of iterations.
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
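A minimal multilevel Monte Carlo sketch (a generic Python/NumPy toy, not the authors' open-source workflow [1]): the quantity of interest is approximated at increasingly fine resolutions, most samples are spent on the cheap coarse levels, and only a few on the expensive fine corrections.

```python
import numpy as np

rng = np.random.default_rng(0)

def Q_level(level, omega):
    """Level-l approximation of the QoI: midpoint-rule integral of exp(omega*x)
    on [0, 1] with 2**level cells (a stand-in for a pore-scale solver)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(omega * x).mean()

def mlmc_estimate(n_samples_per_level):
    """Telescoping MLMC estimator of E[Q]: E[Q_0] + sum_l E[Q_l - Q_{l-1}]."""
    est = 0.0
    for level, n_samp in enumerate(n_samples_per_level):
        omegas = rng.normal(loc=1.0, scale=0.2, size=n_samp)  # random input
        fine = np.array([Q_level(level, w) for w in omegas])
        if level == 0:
            est += fine.mean()
        else:
            coarse = np.array([Q_level(level - 1, w) for w in omegas])
            est += (fine - coarse).mean()   # same samples => strongly coupled levels
    return est

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_estimate([4000, 1000, 250, 60]))
```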
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility of extending the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities such as cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions defined as local solutions of the differential operator of the specific problem according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
NASA Astrophysics Data System (ADS)
Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun
2014-11-01
This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
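At the core of any multi-objective search of this kind is repeated non-dominated (Pareto) filtering of candidate designs. A minimal sketch for two minimization objectives (remediation cost, residual contaminant mass) follows; the candidate values are invented and the filter is a generic dominance check, not the PMOFHS itself.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, with every objective to be minimized."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i in range(len(obj)):
        dominated = any(
            np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(len(obj)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# (total remediation cost, contaminant mass remaining) for five candidate designs.
designs = [(3.2, 40.0), (2.1, 55.0), (4.0, 30.0), (3.5, 45.0), (2.5, 52.0)]
print(pareto_front(designs))   # -> [0, 1, 2, 4]; design 3 is dominated by design 0
```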
Parallel block schemes for large scale least squares computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golub, G.H.; Plemmons, R.J.; Sameh, A.
1986-04-01
Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
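A small sketch of the block-angular elimination idea follows: each diagonal block is reduced independently (the part that would run in parallel on a multiprocessor), leaving a small coupled problem in the shared unknowns that is solved last. The sizes and data below are illustrative, not the geodetic system.

```python
import numpy as np

rng = np.random.default_rng(2)

def block_angular_lstsq(A_blocks, B_blocks, b_blocks):
    """Solve min sum_i ||A_i x_i + B_i y - b_i||^2 for local x_i and shared y.

    Each pass of the loop is independent of the others, so on a multiprocessor
    the block reductions could run concurrently."""
    reduced_rows, reduced_rhs = [], []
    for A, B, b in zip(A_blocks, B_blocks, b_blocks):
        Q, _ = np.linalg.qr(A, mode="complete")
        Q2 = Q[:, A.shape[1]:]                 # basis for the left null space of A_i
        reduced_rows.append(Q2.T @ B)          # project the coupling columns
        reduced_rhs.append(Q2.T @ b)
    # Small coupled least-squares problem in the shared unknowns y.
    y, *_ = np.linalg.lstsq(np.vstack(reduced_rows), np.hstack(reduced_rhs), rcond=None)
    # Back-substitution: recover each local x_i once y is known.
    xs = [np.linalg.lstsq(A, b - B @ y, rcond=None)[0]
          for A, B, b in zip(A_blocks, B_blocks, b_blocks)]
    return xs, y

# Toy block-angular system: 3 diagonal blocks, 2 shared unknowns.
A_blocks = [rng.normal(size=(8, 4)) for _ in range(3)]
B_blocks = [rng.normal(size=(8, 2)) for _ in range(3)]
b_blocks = [rng.normal(size=8) for _ in range(3)]
xs, y = block_angular_lstsq(A_blocks, B_blocks, b_blocks)
print("shared unknowns:", y)
```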
Scaling and criticality in a stochastic multi-agent model of a financial market
NASA Astrophysics Data System (ADS)
Lux, Thomas; Marchesi, Michele
1999-02-01
Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
ERIC Educational Resources Information Center
Chin, Cheng; Yue, Keng
2011-01-01
Difficulties in teaching a multi-disciplinary subject such as the mechatronics system design module in Departments of Mechatronics Engineering at Temasek Polytechnic arise from the gap in experience and skill among staff and students who have different backgrounds in mechanical, computer and electrical engineering within the Mechatronics…
Intercell scheduling: A negotiation approach using multi-agent coalitions
NASA Astrophysics Data System (ADS)
Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde
2016-10-01
Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each pair of time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
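The pipeline sketched below mirrors the three steps named in the abstract at toy scale; a moving average stands in for whatever multi-scale decomposition the authors actually use, and the number of stocks, scales and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def smooth(x, window):
    # Moving average as a stand-in for the multi-scale decomposition step.
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, x)

def threshold_network(series, threshold):
    # Correlation matrix of the fluctuation series, then delete weak edges.
    corr = np.corrcoef(series)
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

# Toy daily fluctuation series for 5 stocks sharing one common market factor.
n_days = 500
market = rng.normal(size=n_days)
returns = market + rng.normal(size=(5, n_days))

# The same data viewed at several time scales and pruned at several thresholds.
for scale in (1, 5, 20):
    series = returns if scale == 1 else smooth(returns, scale)
    for thr in (0.2, 0.4, 0.6):
        edges = threshold_network(series, thr).sum() // 2
        print(f"scale={scale:>2}  threshold={thr}  edges={edges}")
```

Raising the threshold strips the fully connected correlation network down to its strongest links, which is the extra information the abstract refers to.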
Cotter, C. J.
2017-01-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small when pulled back to the mean flow. PMID:28989316
"Generality of mis-fit"? The real-life difficulty of matching scales in an interconnected world.
Keskitalo, E Carina H; Horstkotte, Tim; Kivinen, Sonja; Forbes, Bruce; Käyhkö, Jukka
2016-10-01
A clear understanding of processes at multiple scales and levels is of special significance when conceiving strategies for human-environment interactions. However, understanding and application of the scale concept often differ between administrative-political and ecological disciplines. These differences mirror major disagreements over whether, and how, scales can be made congruent at all. As a result, opportunities for seeking "goodness-of-fit" between different concepts of governance should perhaps be reconsidered in the light of a potential "generality of mis-fit." This article reviews the interdisciplinary considerations inherent in the concept of scale in its ecological, as well as administrative-political, significance and argues that issues of how to manage "mis-fit" should be given more emphasis in social-ecological research and management practices. These considerations are exemplified by the case of reindeer husbandry in Fennoscandia. Whilst an indigenous small-scale practice, reindeer husbandry involves multi-level ecological and administrative-political complexities, complexities that we argue may arise in any multi-level system.
Multiplex congruence network of natural numbers.
Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua
2016-03-31
Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention has been devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has extremely strong controllability in spite of its scale-free structure, which is usually difficult to control. Another striking feature is that the controllability is robust against targeted attacks on critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and this abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implications for cryptography based on simultaneous congruences. Our work also offers insight into the design of networks that integrate the advantages of both heterogeneous and homogeneous networks without inheriting their limitations.
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Due to this difference in sparsity, the coefficients of signal and interference locate in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains the sparsity in order to separate the blended record. In the domains where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domains where the interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is suppressed noticeably at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
ERIC Educational Resources Information Center
Wickerd, Garry; Hulac, David
2017-01-01
Accurate and rapid identification of students displaying behavioral problems requires instrumentation that is user friendly and reliable. The purpose of the study was to evaluate a multi-item direct behavior rating scale called the Direct Behavior Rating-Multiple Item Scale (DBR-MIS) for disruptive behavior to determine the number of…
Application of fuzzy theories to formulation of multi-objective design problems. [for helicopters
NASA Technical Reports Server (NTRS)
Dhingra, A. K.; Rao, S. S.; Miura, H.
1988-01-01
Much of the decision making in the real world takes place in an environment in which the goals, the constraints, and the consequences of possible actions are not known precisely. In order to deal with imprecision quantitatively, the tools of fuzzy set theory can be used. This paper demonstrates the effectiveness of fuzzy theories in the formulation and solution of two types of helicopter design problems involving multiple objectives. The first problem deals with the determination of optimal flight parameters to accomplish a specified mission in the presence of three competing objectives. The second problem addresses the optimal design of the main rotor of a helicopter involving eight objective functions. A method of solving these multi-objective problems using nonlinear programming techniques is presented. Results obtained using the fuzzy formulation are compared with those obtained using crisp optimization techniques. The outlined procedures are expected to be useful in situations where doubt arises about the exactness of permissible values, the degree of credibility, and the correctness of statements and judgements.
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption on unlabeled data, i.e., that neighboring instances should share a similar set of labels. Thus they may underexploit the intrinsic structure of the data. In addition, solving the matrix completion problem can be inefficient. To this end, we propose to solve the multi-label learning problem efficiently as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness. To speed up the convergence of our model, we develop an efficient iterative algorithm which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach.
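To make the objective concrete, here is a minimal sketch of the model (squared fit on observed entries, a graph-Laplacian smoothness term, and a nuclear norm penalty) solved by plain proximal gradient with singular value thresholding rather than the paper's ADMM solver; the data, graph and step sizes are invented.

```python
import numpy as np

def svt(Z, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, observed, L, lam=1.0, alpha=0.1, step=0.2, iters=300):
    """minimize 0.5*||P_obs(Z - M)||^2 + alpha*tr(Z^T L Z) + lam*||Z||_*
    by proximal gradient: gradient step on the smooth terms, then SVT."""
    Z = np.zeros_like(M)
    for _ in range(iters):
        grad = observed * (Z - M) + 2.0 * alpha * (L @ Z)
        Z = svt(Z - step * grad, step * lam)
    return Z

# Toy joint matrix: 6 samples, 4 labels; a chain graph says neighbors share labels.
rng = np.random.default_rng(4)
truth = (rng.random((6, 1)) @ rng.random((1, 4)) > 0.3).astype(float)
observed = rng.random(truth.shape) < 0.6            # mask of known entries
A = np.diag(np.ones(5), 1); A = A + A.T             # chain adjacency
L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian
print(np.round(complete(truth * observed, observed, L), 2))
```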
Wavelets in electronic structure calculations
NASA Astrophysics Data System (ADS)
Modisette, Jason Perry
1997-09-01
Ab initio calculations of the electronic structure of bulk materials and large clusters are not possible on today's computers using current techniques. The storage and diagonalization of the Hamiltonian matrix are the limiting factors in both memory and execution time. The scaling of both quantities with problem size can be reduced by using approximate diagonalization or direct minimization of the total energy with respect to the density matrix in conjunction with a localized basis. Wavelet basis members are much more localized than conventional bases such as Gaussians or numerical atomic orbitals. This localization leads to sparse matrices of the operators that arise in SCF multi-electron calculations. We have investigated the construction of the one-electron Hamiltonian, and also the effective one-electron Hamiltonians that appear in density-functional and Hartree-Fock theories. We develop efficient methods for the generation of the kinetic energy and potential matrices, the Hartree and exchange potentials, and the local exchange-correlation potential of the LDA. Test calculations are performed on one-electron problems with a variety of potentials in one and three dimensions.
Integrating complexity into data-driven multi-hazard supply chain network strategies
Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.
2013-01-01
Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimuli from its environment. CAS modeling is an effective method of managing the complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, examine the inherent problems associated with these data, and understand the complexity that arises from integrating these data.
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
ERIC Educational Resources Information Center
Ding, Weili; Lehrer, Steven F.
2009-01-01
This paper introduces an empirical strategy to estimate dynamic treatment effects in randomized trials that provide treatment in multiple stages and in which various noncompliance problems arise such as attrition and selective transitions between treatment and control groups. Our approach is applied to the highly influential four year randomized…
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Sparse Measurement Systems: Applications, Analysis, Algorithms and Design
ERIC Educational Resources Information Center
Narayanaswamy, Balakrishnan
2011-01-01
This thesis deals with "large-scale" detection problems that arise in many real world applications such as sensor networks, mapping with mobile robots and group testing for biological screening and drug discovery. These are problems where the values of a large number of inputs need to be inferred from noisy observations and where the…
Medical Student and Junior Doctors' Tolerance of Ambiguity: Development of a New Scale
ERIC Educational Resources Information Center
Hancock, Jason; Roberts, Martin; Monrouxe, Lynn; Mattick, Karen
2015-01-01
The practice of medicine involves inherent ambiguity, arising from limitations of knowledge, diagnostic problems, complexities of treatment and outcome and unpredictability of patient response. Research into doctors' tolerance of ambiguity is hampered by poor conceptual clarity and inadequate measurement scales. We aimed to create and pilot a…
Connell, N A; Goddard, A R; Philp, I; Bray, J
1998-05-01
We describe the processes involved in the development of an information system which can assess how care given by a number of agencies could be monitored by those agencies. In particular, it addresses the problem of sharing information as the boundaries of each agency are crossed. It focuses on the care of one specific patient group--the rehabilitation of elderly patients in the community, which provided an ideal multi-agency setting. It also describes: how a stakeholder participative approach to information system development was undertaken, based in part on the Soft Systems Methodology (SSM) approach (Checkland, 1981, 1990); some of the difficulties encountered in using such an approach; and the ways in which these were addressed. The paper goes on to describe an assessment tool called SCARS (the Southampton Community Ability Rating Scale). It concludes by reflecting on the management lessons arising from this project. It also observes, inter alia, how stakeholders have a strong preference for simpler, non-IT based systems, and comments on the difficulties encountered by stakeholders in attempting to reconcile their perceptions of the needs of their discipline or specialty with a more patient-centred approach of an integrated system.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
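A sketch of the data-layout step described above, with scipy standing in for the GPU-side storage: a toy DDC-like matrix is held in COO form, split into contiguous beam-angle groups, converted to CSR per group, and each group forms its own beamlet-price-style product. The sizes are illustrative; in the actual system the four products would run concurrently on separate devices.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(5)

# Toy DDC-like matrix in COO form: rows are voxels, columns are beamlets,
# and the column index determines which beam angle a beamlet belongs to.
n_voxels, n_beamlets, nnz = 1000, 360, 5000
ddc = sparse.coo_matrix(
    (rng.random(nnz),
     (rng.integers(0, n_voxels, nnz), rng.integers(0, n_beamlets, nnz))),
    shape=(n_voxels, n_beamlets))

# Split the columns into 4 contiguous angle groups (one per device) and store
# each piece in CSR, mimicking the per-GPU layout described in the abstract.
n_groups = 4
edges = np.linspace(0, n_beamlets, n_groups + 1, dtype=int)
sub_csr = [ddc.tocsc()[:, lo:hi].tocsr() for lo, hi in zip(edges[:-1], edges[1:])]

# Beamlet-price-style step: each group computes its own matrix-vector product.
voxel_weights = rng.random(n_voxels)
prices = np.concatenate([m.T @ voxel_weights for m in sub_csr])
print(prices.shape)   # one value per beamlet
```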
The Relationship Between Non-Symbolic Multiplication and Division in Childhood
McCrink, Koleen; Shafto, Patrick; Barth, Hilary
2016-01-01
Children without formal education in addition and subtraction are able to perform multi-step operations over an approximate number of objects. Further, their performance improves when solving approximate (but not exact) addition and subtraction problems that allow for inversion as a shortcut (e.g., a + b − b = a). The current study examines children’s ability to perform multi-step operations, and the potential for an inversion benefit, for the operations of approximate, non-symbolic multiplication and division. Children were trained to compute a multiplication and division scaling factor (*2 or /2, *4 or /4), and then tested on problems that combined two of these factors in a way that either allowed for an inversion shortcut (e.g., 8 * 4 / 4) or did not (e.g., 8 * 4 / 2). Children’s performance was significantly better than chance for all scaling factors during training, and they successfully computed the outcomes of the multi-step testing problems. They did not exhibit a performance benefit for problems with the a * b / b structure, suggesting they did not draw upon inversion reasoning as a logical shortcut to help them solve the multi-step test problems. PMID:26880261
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
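The multi-scale grouping step can be pictured with ordinary normalized spectral clustering applied at several cluster counts; the sketch below uses a synthetic affinity matrix in place of DICCCOL connectivity and is not the improved multi-scale algorithm of the paper.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clusters(W, k):
    """Normalized spectral clustering of a symmetric affinity matrix W."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]                                   # k smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    _, labels = kmeans2(U, k, minit="++")
    return labels

# Synthetic affinity matrix with two dense blocks of strongly connected nodes.
rng = np.random.default_rng(6)
W = rng.random((20, 20)) * 0.1
W[:10, :10] += 0.8
W[10:, 10:] += 0.8
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

# The same affinity grouped at several "scales" (numbers of sub-networks).
for k in (2, 4, 5):
    print(k, spectral_clusters(W, k))
```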
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro
2016-11-09
The ISPH code is developed to solve multi-physics meso-scale flow problems using an implicit SPH method. In particular, the code provides solutions for incompressible, multi-phase and electro-kinetic flows.
ASSESSING ECOLOGICAL RISKS AT LARGE SPATIAL SCALES
The history of environmental management and regulation in the United States has been one of moving from an initial focus on localized, end-of-the-pipe problems to increasing attention to multi-scalar, multi-stressor, and multi-resource issues. Concomitant with this reorientation is the need fo...
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently RK4 rotor testbed is designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
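A minimal sketch of the two ingredients named above, soft state division of a degradation index by Fuzzy C-Means followed by estimation of an empirical Markov transition matrix; the degradation index, number of states and fuzzifier below are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def fuzzy_c_means(x, c=4, m=2.0, iters=100):
    """1-D Fuzzy C-Means: soft division of a degradation index into c states."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    u = None
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)             # fuzzy memberships
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
    return centers, u

def transition_matrix(states, c):
    # Empirical Markov transition probabilities between degradation states.
    P = np.zeros((c, c))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1)

# Toy degradation index: slowly increasing severity with measurement noise.
index = np.linspace(0, 1, 400) ** 1.5 + 0.05 * rng.normal(size=400)
centers, memberships = fuzzy_c_means(index)
states = memberships.argmax(axis=1)                   # hard states from soft division
print(np.round(transition_matrix(states, c=4), 2))
```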
Dynamical scales for multi-TeV top-pair production at the LHC
NASA Astrophysics Data System (ADS)
Czakon, Michał; Heymes, David; Mitov, Alexander
2017-04-01
We calculate all major differential distributions with stable top-quarks at the LHC. The calculation covers the multi-TeV range that will be explored during LHC Run II and beyond. Our results are in the form of high-quality binned distributions. We offer predictions based on three different parton distribution function (pdf) sets. In the near future we will make our results available also in the more flexible fastNLO format that allows fast re-computation with any other pdf set. In order to be able to extend our calculation into the multi-TeV range we have had to derive a set of dynamic scales. Such scales are selected based on the principle of fastest perturbative convergence applied to the differential and inclusive cross-section. Many observations from our study are likely to be applicable and useful to other precision processes at the LHC. With scale uncertainty now under good control, pdfs arise as the leading source of uncertainty for TeV top production. Based on our findings, true precision in the boosted regime will likely only be possible after new and improved pdf sets appear. We expect that LHC top-quark data will play an important role in this process.
EPA RESEARCH HIGHLIGHTS -- MODELS-3/CMAQ OFFERS COMPREHENSIVE APPROACH TO AIR QUALITY MODELING
Regional and global coordinated efforts are needed to address air quality problems that are growing in complexity and scope. Models-3 CMAQ contains a community multi-scale air quality modeling system for simulating urban to regional scale pollution problems relating to troposphe...
NASA Astrophysics Data System (ADS)
Weiss, Chester J.
2013-08-01
An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and, by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and over length scales of 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
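The matrix-free idea can be illustrated with scipy's QMR solver driven by a LinearOperator whose matrix-vector product is assembled on the fly. The 1-D real-coefficient operator below is only a stand-in for the paper's 3-D complex-valued finite volume system, kept symmetric so that the transpose product QMR needs is trivially the forward product.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, qmr

n = 500
h = 1.0 / (n + 1)
sigma = 50.0                       # zeroth-order coefficient, kept real here

def apply_operator(u):
    # "On the fly" matrix-vector product for a 1-D diffusion-reaction operator
    # with Dirichlet boundaries; no matrix entries are ever stored.
    v = np.empty_like(u)
    v[0] = (2 * u[0] - u[1]) / h**2 + sigma * u[0]
    v[-1] = (2 * u[-1] - u[-2]) / h**2 + sigma * u[-1]
    v[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) / h**2 + sigma * u[1:-1]
    return v

# The operator is symmetric, so the transpose product equals the forward product.
A = LinearOperator((n, n), matvec=apply_operator, rmatvec=apply_operator, dtype=float)
b = np.ones(n)
x, info = qmr(A, b, maxiter=5000)
print("info:", info, " residual norm:", np.linalg.norm(apply_operator(x) - b))
```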
Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization
NASA Astrophysics Data System (ADS)
Teixeira, J.
2015-12-01
Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e. equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently, a variety of different models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which not only is in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary-layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.
Load Distribution Factors for Composite Multicell Box Girder Bridges
NASA Astrophysics Data System (ADS)
Tiwari, Sanjay; Bhargava, Pradeep
2017-12-01
A cellular steel section composite with a concrete deck is one of the most suitable superstructures for resisting the torsional and warping effects induced by highway loading. This type of structure has, however, created new design problems for engineers in estimating its load distribution when subjected to moving vehicles. The Indian Codes of Practice do not provide any specific guidelines for the design of straight composite concrete deck-steel multi-cell bridges. To meet the practical requirements arising during the design process, a simple design method is needed for straight composite multi-cell bridges in the form of load distribution factors for moment and shear. This work presents the load distribution characteristics of straight composite multi-cell box girder bridges under IRC trains of loads.
Multi-Item Direct Behavior Ratings: Dependability of Two Levels of Assessment Specificity
ERIC Educational Resources Information Center
Volpe, Robert J.; Briesch, Amy M.
2015-01-01
Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment…
Optimal rail container shipment planning problem in multimodal transportation
NASA Astrophysics Data System (ADS)
Cao, Chengxuan; Gao, Ziyou; Li, Keping
2012-09-01
The optimal rail container shipment planning problem in multimodal transportation is studied in this article. The characteristics of the multi-period planning problem are presented, and the problem is formulated as a large-scale 0-1 integer programming model which maximizes the total profit generated by all freight bookings accepted over a multi-period planning horizon, subject to limited capacities. Two heuristic algorithms are proposed to obtain an approximate optimal solution to the problem. Finally, numerical experiments are conducted to demonstrate the proposed formulation and heuristic algorithms.
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
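A generic error-correcting output coding sketch for a three-class toy problem: one binary learner per column of a coding matrix, with Hamming-distance decoding. The least-squares "classifier" and the synthetic expression-like data are placeholders for whatever binary learners and profiles a real study would use.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 3-class "expression profiles": 2 features, classes separated by their means.
X = np.vstack([rng.normal(m, 0.5, size=(30, 2)) for m in ((0, 0), (3, 0), (0, 3))])
y = np.repeat([0, 1, 2], 30)

# ECOC coding matrix: rows are classes, columns are binary problems (+1 / -1).
code = np.array([[+1, +1, -1],
                 [+1, -1, +1],
                 [-1, +1, +1]])

def fit_binary(X, targets):
    # Least-squares "classifier" as a stand-in for any binary learner.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, targets.astype(float), rcond=None)
    return w

def predict_binary(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# One binary classifier per column of the coding matrix.
learners = [fit_binary(X, code[y, j]) for j in range(code.shape[1])]

# Decoding: assign each sample to the class whose codeword is closest in Hamming distance.
bits = np.column_stack([predict_binary(w, X) for w in learners])
distances = np.array([[np.sum(b != c) for c in code] for b in bits])
print("training accuracy:", np.mean(distances.argmin(axis=1) == y))
```

Weighting or probabilistic decoding, as in the two approaches reviewed in the article, would replace the plain Hamming distance in the last step.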
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, many Public Key Infrastructures (PKIs) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges facing these healthcare PKIs, especially those deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.
SMALL-SCALE ANISOTROPIES OF COSMIC RAYS FROM RELATIVE DIFFUSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlers, Markus; Mertsch, Philipp
2015-12-10
The arrival directions of multi-TeV cosmic rays show significant anisotropies at small angular scales. It has been argued that this small-scale structure can naturally arise from cosmic ray scattering in local turbulent magnetic fields that distort a global dipole anisotropy set by diffusion. We study this effect in terms of the power spectrum of cosmic ray arrival directions and show that the strength of small-scale anisotropies is related to properties of relative diffusion. We provide a formalism for how these power spectra can be inferred from simulations and motivate a simple analytic extension of the ensemble-averaged diffusion equation that can account for the effect.
Experimental Mathematics and Mathematical Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, Jonathan M.; Broadhurst, David
2009-06-26
One of the most effective techniques of experimental mathematics is to compute mathematical entities such as integrals, series or limits to high precision, then attempt to recognize the resulting numerical values. Recently these techniques have been applied with great success to problems in mathematical physics. Notable among these applications are the identification of some key multi-dimensional integrals that arise in Ising theory, quantum field theory and in magnetic spin theory.
LORENE: Spectral methods differential equations solver
NASA Astrophysics Data System (ADS)
Gourgoulhon, Eric; Grandclément, Philippe; Marck, Jean-Alain; Novak, Jérôme; Taniguchi, Keisuke
2016-08-01
LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.
Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth
2014-01-01
A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...
Applied mathematical problems in modern electromagnetics
NASA Astrophysics Data System (ADS)
Kriegsman, Gregory
1994-05-01
We have primarily investigated two classes of electromagnetic problems. The first contains the quantitative description of microwave heating of dispersive and conductive materials. Such problems arise, for example, when biological tissues are exposed, accidentally or purposefully, to microwave radiation. Other instances occur in ceramic processing, such as sintering and microwave-assisted chemical vapor infiltration, and in other industrial drying processes, such as the curing of paints and concrete. The second class characterizes the scattering of microwaves by complex targets which possess two or more disparate length and/or time scales. Spatially complex scatterers arise in a variety of applications, such as large gratings and slowly changing guiding structures. The former are useful in developing microstrip energy couplers while the latter can be used to model anatomical subsystems (e.g., the open guiding structure composed of two legs and the adjoining lower torso). Temporally complex targets occur in applications involving dispersive media whose relaxation times differ by orders of magnitude from thermal and/or electromagnetic time scales. For both cases the mathematical description of the problems gives rise to complicated ill-conditioned boundary value problems, whose accurate solutions require a blend of both asymptotic techniques, such as multiscale methods and matched asymptotic expansions, and numerical methods incorporating radiation boundary conditions, such as finite differences and finite elements.
Network representations of immune system complexity
Subramanian, Naeha; Torabi-Parizi, Parizad; Gottschalk, Rachel A.; Germain, Ronald N.; Dutta, Bhaskar
2015-01-01
The mammalian immune system is a dynamic multi-scale system composed of a hierarchically organized set of molecular, cellular and organismal networks that act in concert to promote effective host defense. These networks range from those involving gene regulatory and protein-protein interactions underlying intracellular signaling pathways and single cell responses to increasingly complex networks of in vivo cellular interaction, positioning and migration that determine the overall immune response of an organism. Immunity is thus not the product of simple signaling events but rather non-linear behaviors arising from dynamic, feedback-regulated interactions among many components. One of the major goals of systems immunology is to quantitatively measure these complex multi-scale spatial and temporal interactions, permitting development of computational models that can be used to predict responses to perturbation. Recent technological advances permit collection of comprehensive datasets at multiple molecular and cellular levels while advances in network biology support representation of the relationships of components at each level as physical or functional interaction networks. The latter facilitate effective visualization of patterns and recognition of emergent properties arising from the many interactions of genes, molecules, and cells of the immune system. We illustrate the power of integrating ‘omics’ and network modeling approaches for unbiased reconstruction of signaling and transcriptional networks with a focus on applications involving the innate immune system. We further discuss future possibilities for reconstruction of increasingly complex cellular and organism-level networks and development of sophisticated computational tools for prediction of emergent immune behavior arising from the concerted action of these networks. PMID:25625853
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
A convex optimization problem can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to great improvement in system fault tolerance. Thus, a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for minimizing travel time over the highway network. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.
NASA Astrophysics Data System (ADS)
Iny, David
2007-09-01
This paper addresses the out-of-sequence measurement (OOSM) problem associated with multiple platform tracking systems. The problem arises due to different transmission delays in the communication of detection reports across platforms. Much of the literature focuses on the improvement to the state estimate obtained by incorporating the OOSM. As the time lag increases, there is diminishing improvement to the state estimate. However, this paper shows that optimal processing of OOSMs may still be beneficial by improving data association as part of a multi-target tracker. This paper derives exact multi-lag algorithms with the property that the standard log-likelihood track scoring is independent of the order in which the measurements are processed. The orthogonality principle is applied to generalize the method of Bar-Shalom in deriving the exact A1 algorithm for 1-lag estimation. Theory is also developed for optimal filtering of time-averaged measurements and of measurements correlated through periodic updates of a target aim-point. An alternative derivation of the multi-lag algorithms is also achieved using an efficient variant of the augmented state Kalman filter (AS-KF). This results in practical and reasonably efficient multi-lag algorithms. Results are compared to a well known ad hoc algorithm for incorporating OOSMs. Finally, the paper presents some simulated multi-target multi-static scenarios where there is a benefit to processing the data out of sequence in order to improve pruning efficiency.
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
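For orientation, the sketch below runs the plain pairwise randomized gossip primitive on a random geometric graph; Multi-scale Gossip organizes this kind of exchange inside a hierarchy of sub-graphs to reach the quoted communication cost, which the sketch does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(8)

# Random geometric graph: n nodes in the unit square, edges within radius r.
n, r = 200, 0.15
pos = rng.random((n, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
adjacency = (dist < r) & ~np.eye(n, dtype=bool)
edges = np.argwhere(np.triu(adjacency))

values = rng.random(n)                  # each node holds one local measurement
target = values.mean()

# Plain pairwise gossip: activate a random edge and replace both endpoint
# values by their average; repeated activations drive all nodes to the mean.
for _ in range(20000):
    i, j = edges[rng.integers(len(edges))]
    values[i] = values[j] = (values[i] + values[j]) / 2.0

print("max deviation from the true average:", np.abs(values - target).max())
```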
NASA Technical Reports Server (NTRS)
Plesniak, Michael W.; Johnston, J. P.
1989-01-01
The construction and development of the multi-component traversing system and associated control hardware and software are presented. A hydrogen bubble/laser sheet flow visualization technique was developed to visually study the characteristics of the mixing layers. With this technique large-scale rollers arising from the Taylor-Gortler instability and its interaction with the primary Kelvin-Helmholtz structures can be studied.
Zhang, Rui
2017-01-01
The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars. PMID:29295603
Zhang, Rui
2017-12-25
The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars.
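As a small illustration of the bi-objective bookkeeping such a solver needs, the sketch below implements Pareto dominance and an external-archive update of the kind typically used in MOPSO variants; the objective pairs (pollutant emission, total tardiness) and the candidate sequences are made up and do not come from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated solutions, as a MOPSO external
    archive would.  Each entry is (objectives, solution)."""
    objs, _ = candidate
    if any(dominates(a_objs, objs) for a_objs, _ in archive):
        return archive                       # candidate is dominated: discard it
    archive = [(a_objs, a_sol) for a_objs, a_sol in archive
               if not dominates(objs, a_objs)]
    archive.append(candidate)
    return archive

# Hypothetical bi-objective values: (pollutant emission, total tardiness).
archive = []
for cand in [((12.0, 30.0), "seq A"), ((10.0, 35.0), "seq B"),
             ((11.0, 29.0), "seq C"), ((13.0, 40.0), "seq D")]:
    archive = update_archive(archive, cand)
# 'seq A' is removed once 'seq C' arrives; 'seq D' is dominated and never enters.
print([s for _, s in archive])
```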
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. It is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including the local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
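A minimal two-grid cycle for a 1-D model problem illustrates the coarse/fine cycling the abstract describes (relaxation sweeps, residual transfer to the coarse grid, interpolation of the correction). This sketch uses weighted Jacobi smoothing, injection restriction, and an exact coarse solve for -u'' = f; it is a toy finite-difference analogue, not the adaptive finite element machinery of the report.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted Jacobi smoother for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        un = u.copy()
        un[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = un
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2
    return r

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual, solve the
    coarse correction exactly, interpolate it back, post-smooth."""
    u = jacobi(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                        # injection onto the coarse grid
    nc, hc = rc.size - 2, 2 * h
    A = (np.diag(2 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
         - np.diag(np.ones(nc - 1), -1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])   # exact coarse-grid correction
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongation
    return jacobi(u + e, f, h)

n = 65                                        # grid points including boundaries
x = np.linspace(0, 1, n); h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)              # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error near discretization level
```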
A multi-scale problem arising in a model of avian flu virus in a seabird colony
NASA Astrophysics Data System (ADS)
Clancy, C. F.; O'Callaghan, M. J. A.; Kelly, T. C.
2006-12-01
Understanding the dynamics of epidemics of novel pathogens such as the H5N1 strain of avian influenza is of crucial importance to public and veterinary health as well as wildlife ecology. We model the effect of a new virus on a seabird colony, where no pre-existing herd immunity exists. The seabirds in question are so-called K-strategists, i.e. they have a relatively long life expectancy and very low reproductive output. They live in isolated colonies which typically contain tens of thousands of birds. These densely populated colonies, with so many birds competing for nesting space, would seem to provide perfect conditions for the entry and spread of an infection. Yet there are relatively few reported cases of epidemics among these seabirds. We develop an SEIR model which incorporates some of the unusual features of seabird population biology and examine the effects of introducing a pathogen into the colony.
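A bare-bones SEIR integration shows the structure of such a model; the rates below (latent period, infectious period, transmission rate, colony size) are placeholder values for illustration only, not the paper's seabird parameters.

```python
import numpy as np

def seir(N=20000, E0=0, I0=1, beta=0.5, sigma=1/4, gamma=1/7, days=200, dt=0.1):
    """Forward-Euler integration of a basic SEIR model for a closed colony of
    N birds with one introduced infectious individual.  Rates are per day."""
    S, E, I, R = N - E0 - I0, float(E0), float(I0), 0.0
    history = []
    for step in range(int(days / dt)):
        new_exposed    = beta * S * I / N      # frequency-dependent transmission
        new_infectious = sigma * E             # end of latent period
        new_removed    = gamma * I             # recovery or death
        S -= dt * new_exposed
        E += dt * (new_exposed - new_infectious)
        I += dt * (new_infectious - new_removed)
        R += dt * new_removed
        history.append((step * dt, S, E, I, R))
    return np.array(history)

out = seir()
print("peak infectious fraction: %.3f" % (out[:, 3].max() / 20000))
```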
Christoph Kueffer; Curtis Daehler; Hansjörg Dietz; Keith McDougall; Catherine Parks; Aníbal Pauchard; Lisa Rew
2014-01-01
Many modern environmental problems span vastly different spatial scales, from the management of local ecosystems to understanding globally interconnected processes, and addressing them through international policy. MIREN tackles one such "glocal" (global/local) environmental problem, plant invasions in mountains, through a transdisciplinary, multi-scale learning...
Analytical Cost Metrics : Days of Future Past
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov
As we move towards the exascale era, the new architectures must be capable of running the massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”
Cross-scale interactions: Quantifying multi-scaled cause–effect relationships in macrosystems
Soranno, Patricia A.; Cheruvelil, Kendra S.; Bissell, Edward G.; Bremigan, Mary T.; Downing, John A.; Fergus, Carol E.; Filstrup, Christopher T.; Henry, Emily N.; Lottig, Noah R.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.
2014-01-01
Ecologists are increasingly discovering that ecological processes are made up of components that are multi-scaled in space and time. Some of the most complex of these processes are cross-scale interactions (CSIs), which occur when components interact across scales. When undetected, such interactions may cause errors in extrapolation from one region to another. CSIs, particularly those that include a regional scaled component, have not been systematically investigated or even reported because of the challenges of acquiring data at sufficiently broad spatial extents. We present an approach for quantifying CSIs and apply it to a case study investigating one such interaction, between local and regional scaled land-use drivers of lake phosphorus. Ultimately, our approach for investigating CSIs can serve as a basis for efforts to understand a wide variety of multi-scaled problems such as climate change, land-use/land-cover change, and invasive species.
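One simple way to quantify a cross-scale interaction of the kind described is a regression with an explicit local-by-regional interaction term. The sketch below does this on synthetic data; variable names and coefficients are hypothetical and only mimic the structure of the lake-phosphorus example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: a lake response varies with a local land-use driver,
# but the strength of that response depends on a regional covariate (a
# cross-scale interaction).  All names and coefficients are hypothetical.
n = 500
local_ag   = rng.uniform(0, 1, n)           # local agricultural land use
regional   = rng.uniform(0, 1, n)           # regional-scale driver
phosphorus = (1.0 + 0.5 * local_ag + 0.2 * regional
              + 1.5 * local_ag * regional + rng.normal(0, 0.1, n))

# Fit a linear model with an explicit interaction term by least squares.
X = np.column_stack([np.ones(n), local_ag, regional, local_ag * regional])
coef, *_ = np.linalg.lstsq(X, phosphorus, rcond=None)
print("intercept, local, regional, interaction:", np.round(coef, 2))
```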
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
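The paper's detector is built on the Lipschitz exponent; as a much simpler stand-in, the sketch below flags an abrupt state shift in a noisy bistable-like signal from the largest coarse-scale Haar detail coefficient. It illustrates the general idea of wavelet-based shift detection, not the specific algorithm of the study.

```python
import numpy as np

def haar_details(x, level):
    """Haar wavelet detail coefficients at a given dyadic level, computed by
    repeated pairwise averaging/differencing (signal length must be 2**k)."""
    a = np.asarray(x, dtype=float)
    for _ in range(level - 1):
        a = 0.5 * (a[0::2] + a[1::2])         # approximation at the next level
    return 0.5 * (a[0::2] - a[1::2])          # detail at the requested level

# Synthetic bistable-like signal: a noisy low state that jumps to a high state.
rng = np.random.default_rng(0)
t = np.arange(1024)
signal = np.where(t < 700, 0.0, 1.0) + 0.05 * rng.standard_normal(t.size)

details = haar_details(signal, level=4)       # coarse-scale differences
shift_block = np.argmax(np.abs(details))      # block containing the state shift
print("shift detected near sample", shift_block * 2**4)
```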
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors’ method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method. Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
Multi-Agent Inference in Social Networks: A Finite Population Learning Approach.
Fan, Jianqing; Tong, Xin; Zeng, Yao
When people in a society want to make inference about some parameter, each person may want to use data collected by other people. Information (data) exchange in social networks is usually costly, so to make reliable statistical decisions, people need to trade off the benefits and costs of information acquisition. Conflicts of interests and coordination problems will arise in the process. Classical statistics does not consider people's incentives and interactions in the data collection process. To address this imperfection, this work explores multi-agent Bayesian inference problems with a game theoretic social network model. Motivated by our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether, with high probability, a large fraction of people in a given finite population network can make "good" inference. Serving as a foundation, this concept enables us to study the long run trend of aggregate inference quality as population grows.
Scale in Education Research: Towards a Multi-Scale Methodology
ERIC Educational Resources Information Center
Noyes, Andrew
2013-01-01
This article explores some theoretical and methodological problems concerned with scale in education research through a critique of a recent mixed-method project. The project was framed by scale metaphors drawn from the physical and earth sciences and I consider how recent thinking around scale, for example, in ecosystems and human geography might…
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depends on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss advantages of each approach.
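The idea behind any such multi-fidelity estimator can be shown with a two-level version of MLMC: estimate the quantity of interest with many cheap low-resolution runs and correct the bias with a few paired high-/low-resolution runs. The solver below is an analytic toy standing in for a transport code; sample counts and resolutions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def solve(sample, resolution):
    """Stand-in for a transport solver: returns a quantity of interest at a
    given resolution.  The 'model' is an analytic toy whose coarse output
    differs from the fine output by a small resolution-dependent bias."""
    return np.sin(sample) + 1.0 / resolution * np.cos(3 * sample)

n_coarse, n_fine = 10000, 200                 # many cheap runs, few expensive ones
xs_c = rng.normal(size=n_coarse)
xs_f = rng.normal(size=n_fine)

coarse_mean = solve(xs_c, resolution=8).mean()
correction  = (solve(xs_f, resolution=64) - solve(xs_f, resolution=8)).mean()

mlmc_estimate = coarse_mean + correction      # two-level MLMC estimator
mc_fine_only  = solve(rng.normal(size=n_fine), resolution=64).mean()
print(mlmc_estimate, mc_fine_only)
```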
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to performance of the KL approach and brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT -- COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL progression problem number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this framework we build a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only colorizes visible-band gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms, which are only applicable to an isotropic network, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
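A toy version of the learning formulation helps make the three stages concrete: hop-count vectors to anchors are the inputs, known node coordinates are the targets, and a regularized extreme learning machine (random hidden layer plus ridge regression) is the model. All network sizes, the radio range, and the hop-count approximation below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_relm(features, targets, hidden=100, lam=1e-2):
    """Regularized extreme learning machine: random input weights, a sigmoid
    hidden layer, and a ridge-regression output layer."""
    W = rng.normal(size=(features.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = 1.0 / (1.0 + np.exp(-(features @ W + b)))        # hidden activations
    beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ targets)
    return W, b, beta

def predict_relm(model, features):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(features @ W + b)))
    return H @ beta

# Toy setting: anchors in a 100 x 100 area; each node's feature vector is its
# hop count (approximated by distance / radio range) to every anchor.
anchors = rng.uniform(0, 100, size=(8, 2))
nodes   = rng.uniform(0, 100, size=(400, 2))
radio_range = 15.0
hops = np.ceil(np.linalg.norm(nodes[:, None, :] - anchors[None, :, :], axis=2)
               / radio_range)

model = train_relm(hops[:300], nodes[:300])               # training nodes
est = predict_relm(model, hops[300:])                     # unknown nodes
print("mean localization error:",
      np.linalg.norm(est - nodes[300:], axis=1).mean())
```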
Privacy-preserving search for chemical compound databases.
Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi
2015-01-01
Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
Privacy-preserving search for chemical compound databases
2015-01-01
Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650
NASA Astrophysics Data System (ADS)
Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei
2018-06-01
Sea-land segmentation is a key step for the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images to tackle the problem based on superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in view of superpixels rather than pixels, where similar pixels are clustered and the local similarity is explored. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method can obtain more accurate and more robust sea-land segmentation results than the traditional algorithms.
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
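The fractal dimension used as one of these indices can be estimated by box counting; a minimal sketch, with two sanity checks on sets of known dimension, is shown below (the grid sizes are arbitrary).

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary 2-D field as
    the slope of log(number of occupied boxes) versus log(1 / box size)."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity checks on simple sets: a filled square ~ 2, a straight line ~ 1.
grid = np.zeros((256, 256), dtype=bool)
grid[64:192, 64:192] = True
print("filled square:", round(box_counting_dimension(grid), 2))

line = np.zeros((256, 256), dtype=bool)
line[128, :] = True
print("line:", round(box_counting_dimension(line), 2))
```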
1981-10-07
new instrument (cf. Fig. 1) is simply a four-quadrant ring-diode multiplier (Fig. 2). The reference frequency (RF) and local oscillator (LO) inputs...movement, and scan speed of the corner-cube. Other Components. A rotating-sector chopper modulates the laser pulse train at a frequency of approximately 50...the cross-correlation experiment. In this application, the detection bandpass is simply displaced from DC to the chopper frequency; problems arising
Study of Varying Boundary Layer Height on Turret Flow Structures
2011-06-01
fluid dynamics. The difficulties of the problem arise in modeling several complex flow features including separation, reattachment, three-dimensional...impossible. In this case, the approach is to create a model to calculate the properties of interest. The main issue with resolving turbulent flows...operation and their effect is modeled through subgrid scale models. As a result, the most important turbulent scales are resolved and the
Time and length scales within a fire and implications for numerical simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
TIESZEN,SHELDON R.
2000-02-02
A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order of magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale; advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution for fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required as two to three decades of length scale are captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
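The quoted scalings are easy to tabulate. The short script below evaluates buoyant, advective, and diffusive time scales over a range of length scales; the gravitational acceleration is physical, but the characteristic velocity and effective diffusivity are assumed values chosen only to illustrate the crossover between regimes.

```python
import numpy as np

# Order-of-magnitude time scales from the abstract's scaling arguments,
# evaluated over a range of continuum length scales L:
#   buoyant   t_b ~ sqrt(L / g)
#   advective t_a ~ L / U
#   diffusive t_d ~ L**2 / D
g = 9.81        # m s^-2, gravitational acceleration
U = 1.0         # m s^-1, assumed characteristic plume velocity
D = 1e-4        # m^2 s^-1, assumed effective diffusivity

L = np.logspace(-4, 1, 6)                    # 0.1 mm to 10 m
t_buoy = np.sqrt(L / g)
t_adv  = L / U
t_diff = L**2 / D

for Li, tb, ta, td in zip(L, t_buoy, t_adv, t_diff):
    print(f"L={Li:8.4f} m  buoyant={tb:9.3e} s  "
          f"advective={ta:9.3e} s  diffusive={td:9.3e} s")
```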
New design for interfacing computers to the Octopus network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sloan, L.J.
1977-03-14
The Lawrence Livermore Laboratory has several large-scale computers which are connected to the Octopus network. Several difficulties arise in providing adequate resources along with reliable performance. To alleviate some of these problems a new method of bringing large computers into the Octopus environment is proposed.
Some current themes in physical hydrology of the land-atmosphere interface
Milly, P.C.D.
1991-01-01
Certain themes arise repeatedly in current literature dealing with the physical hydrology of the interface between the atmosphere and the continents. Papers contributed to the 1991 International Association of Hydrological Sciences Symposium on Hydrological Interactions between Atmosphere, Soil and Vegetation echo these themes, which are discussed in this paper. The land-atmosphere interface is the region where atmosphere, soil, and vegetation have mutual physical contact, and a description of exchanges of matter or energy among these domains must often consider the physical properties and states of the entire system. A difficult family of problems is associated with the reconciliation of the wide range of spatial scales that arise in the course of observational, theoretical, and modeling activities. These scales are determined by some of the physical elements of the interface, by patterns of natural variability of the physical composition of the interface, by the dynamics of the processes at the interface, and by methods of measurement and computation. Global environmental problems are seen by many hydrologists as a major driving force for development of the science. The challenge for hydrologists will be to respond to this force as scientists rather than problem-solvers.
NASA Astrophysics Data System (ADS)
Yeo, I. Y.; Lang, M.; Lee, S.; Huang, C.; Jin, H.; McCarty, G.; Sadeghi, A.
2017-12-01
The wetland ecosystem plays crucial roles in improving hydrological function and ecological integrity for the downstream waters and the surrounding landscape. However, changing behaviours and functioning of wetland ecosystems are poorly understood and extremely difficult to characterize. Improved understanding of hydrological behaviours of wetlands, considering their interaction with surrounding landscapes and impacts on downstream waters, is an essential first step toward closing the knowledge gap. We present an integrated wetland-catchment modelling study that capitalizes on recently developed inundation maps and other geospatial data. The aim of the data-model integration is to improve spatial prediction of wetland inundation and evaluate cumulative hydrological benefits at the catchment scale. In this paper, we highlight problems arising from data preparation, parameterization, and process representation in simulating wetlands within a distributed catchment model, and report the recent progress on mapping of wetland dynamics (i.e., inundation) using multiple remotely sensed data. We demonstrate the value of spatially explicit inundation information to develop site-specific wetland parameters and to evaluate model prediction at multi-spatial and temporal scales. This spatial data-model integrated framework is tested using the Soil and Water Assessment Tool (SWAT) with an improved wetland extension, and applied to an agricultural watershed in the Mid-Atlantic Coastal Plain, USA. This study illustrates the necessity of spatially distributed information and a data-integrated modelling approach to predict inundation of wetlands and hydrologic function at the local landscape scale, where monitoring and conservation decision making take place.
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including the selection of an optimal multicast domain sequence and the assignment of core nodes to domains. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks, and mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms that were proposed for a single-domain network environment.
Observations from Space in a Global Ecology Programme
ERIC Educational Resources Information Center
Kondratyev, Kirill Ya
1974-01-01
In order to resolve problems arising from the possibility of ecological crisis, we need more and better information about our environment. The condition of nature on a planetary scale can be monitored efficiently only with the aid of satellites, human observers in earth orbit, and computer analysis of data. (Author/GS)
Multiscale functions, scale dynamics, and applications to partial differential equations
NASA Astrophysics Data System (ADS)
Cresson, Jacky; Pierret, Frédéric
2016-05-01
Modeling phenomena from experimental data always begins with a choice of hypotheses on the observed dynamics such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definition of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)] which is used to introduce the notion of scale equations. These definitions will be illustrated on the multi-scale Okamoto functions. Scale equations are analysed using scale regimes and the notion of asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models depending on the particular fractional scale regime which is considered.
Multi-scale image segmentation and numerical modeling in carbonate rocks
NASA Astrophysics Data System (ADS)
Alves, G. C.; Vanorio, T.
2016-12-01
Numerical methods based on computational simulations can be an important tool in estimating physical properties of rocks. These can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave-equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by larger grain/micrite ratio, results show that SEM scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular- porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be more suited for numerical simulations.
Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications
2016-10-17
finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters
Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun
2017-01-01
Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475
Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters.
Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun
2017-02-23
Accurate scale estimation and occlusion handling is a challenging problem in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, the model is not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates the change of state of occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved.
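The core of a KCF-style tracker (without the multi-block and scale-space extensions of this paper) fits in a few lines: a Gaussian kernel correlation computed in the Fourier domain, ridge-regression training against a cyclic Gaussian label, and a detection response whose peak gives the translation. The sketch below runs it on a synthetic blob; the patch size, kernel width, and regularization value are illustrative.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Kernel correlation between 2-D patches x and z for all cyclic shifts,
    computed in the Fourier domain (the core trick of KCF-style trackers)."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
    d = (x**2).sum() + (z**2).sum() - 2.0 * c
    return np.exp(-np.maximum(d, 0) / (sigma**2 * x.size))

def train(x, y, lam=1e-4):
    """Learn dual coefficients (in the Fourier domain) for a cyclic Gaussian
    target response y using kernel ridge regression."""
    k = gaussian_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alpha_f, x, z):
    """Correlation response over all cyclic shifts of the new patch z; the
    argmax gives the estimated translation of the target."""
    k = gaussian_correlation(x, z)
    return np.fft.ifft2(alpha_f * np.fft.fft2(k)).real

# Toy run: track a Gaussian blob that shifts by (3, 5) pixels between frames.
n = 64
yy, xx = np.mgrid[:n, :n]

def blob(cy, cx):
    return np.exp(-((yy - cy)**2 + (xx - cx)**2) / 20.0)

dist2 = np.minimum(yy, n - yy)**2 + np.minimum(xx, n - xx)**2
target_response = np.exp(-dist2 / 20.0)        # cyclic Gaussian peak at (0, 0)
frame0, frame1 = blob(32, 32), blob(35, 37)

alpha_f = train(frame0, target_response)
resp = detect(alpha_f, frame0, frame1)
dy, dx = np.unravel_index(resp.argmax(), resp.shape)
print("estimated shift:", ((dy + n // 2) % n - n // 2,
                           (dx + n // 2) % n - n // 2))
```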
Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.
Green, Sara; Batterman, Robert
2017-02-01
A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work will demonstrate linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware that is able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should reduce the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented, and used to provide new insight into radiation belt dynamics.
Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association
Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You
2017-01-01
This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems. PMID:29113085
Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association.
Liu, Yu; Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You
2017-11-05
This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets' state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems.
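The cubature rule at the heart of such filters can be sketched compactly: 2n equally weighted cubature points are generated from the state mean and a square root of the covariance, pushed through the (possibly nonlinear) model, and recombined into a predicted mean and covariance. The example below uses the plain (non-square-root) form, omits process noise, and uses a made-up 2-D state and transition; the JPDA data association machinery is not shown.

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n third-degree spherical-radial cubature points:
    mean +/- sqrt(n) times the columns of a square root of the covariance,
    each with equal weight 1/(2n)."""
    n = mean.size
    S = np.linalg.cholesky(cov)                 # one valid matrix square root
    offsets = np.sqrt(n) * np.hstack([S, -S])   # shape (n, 2n)
    return mean[:, None] + offsets              # shape (n, 2n)

def propagate(points, f):
    """Push the cubature points through a (possibly nonlinear) function and
    recover the predicted mean and covariance (process noise omitted)."""
    Y = np.apply_along_axis(f, 0, points)
    mean = Y.mean(axis=1)
    diff = Y - mean[:, None]
    cov = diff @ diff.T / points.shape[1]
    return mean, cov

# Toy example: a 2-D state passed through a mildly nonlinear transition.
x = np.array([1.0, 0.5])
P = np.array([[0.10, 0.02], [0.02, 0.05]])
f = lambda s: np.array([s[0] + 0.1 * np.sin(s[1]), 0.9 * s[1]])
m_pred, P_pred = propagate(cubature_points(x, P), f)
print(m_pred, P_pred, sep="\n")
```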
Multi-Party, Whole-Body Interactions in Mathematical Activity
ERIC Educational Resources Information Center
Ma, Jasmine Y.
2017-01-01
This study interrogates the contributions of multi-party, whole-body interactions to students' collaboration and negotiation of mathematics ideas in a task setting called walking scale geometry, where bodies in interaction became complex resources for students' emerging goals in problem solving. Whole bodies took up overlapping roles representing…
The limitations of staggered grid finite differences in plasticity problems
NASA Astrophysics Data System (ADS)
Pranger, Casper; Herrendörfer, Robert; Le Pourhiet, Laetitia
2017-04-01
Most crustal-scale applications operate at grid sizes much larger than those at which plasticity occurs in nature. As a consequence, plastic shear bands often localize to the scale of one grid cell, and numerical ploys — like introducing an artificial length scale — are needed to counter this. If for whatever reasons (good or bad) this is not done, we find that problems may arise due to the fact that in the staggered grid finite difference discretization, unknowns like components of the stress tensor and velocity vector are located in physically different positions. This incurs frequent interpolation, reducing the accuracy of the discretization. For purely stress-dependent plasticity problems the adverse effects might be contained because the magnitude of the stress discontinuity across a plastic shear band is limited. However, we find that when rate-dependence of friction is added in the mix, things become ugly really fast and the already hard-to-solve and highly nonlinear problem of plasticity incurs an extra penalty.
Multi Dimensional Honey Bee Foraging Algorithm Based on Optimal Energy Consumption
NASA Astrophysics Data System (ADS)
Saritha, R.; Vinod Chandra, S. S.
2017-10-01
In this paper, a new nature-inspired algorithm is proposed based on the natural foraging behavior of multi-dimensional honey bee colonies. This method handles issues that arise when food is shared from multiple sources by multiple swarms at multiple destinations. The self-organizing nature of natural honey bee swarms in multiple colonies is based on the principle of energy consumption. Swarms of multiple colonies select a food source to optimally fulfill the requirements of their colonies. This is based on the energy requirement for transporting food between a source and destination. Minimum use of energy leads to maximizing profit in each colony. The mathematical model proposed here is based on this principle. This has been successfully evaluated by applying it to a multi-objective transportation problem for optimizing cost and time. The algorithm optimizes the needs at each destination in linear time.
ERIC Educational Resources Information Center
Rojahn, Johannes; Schroeder, Stephen R.; Mayo-Ortega, Liliana; Oyama-Ganiko, Rosao; LeBlanc, Judith; Marquis, Janet; Berke, Elizabeth
2013-01-01
Reliable and valid assessment of aberrant behaviors is essential in empirically verifying prevention and intervention for individuals with intellectual or developmental disabilities (IDD). Few instruments exist which assess behavior problems in infants. The current longitudinal study examined the performance of three behavior-rating scales for…
Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos
2009-05-01
instruments applied to mode-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ...the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems...equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors
Teaching Discrete and Programmable Logic Design Techniques Using a Single Laboratory Board
ERIC Educational Resources Information Center
Debiec, P.; Byczuk, M.
2011-01-01
Programmable logic devices (PLDs) are used at many universities in introductory digital logic laboratories, where kits containing a single high-capacity PLD replace "standard" sets containing breadboards, wires, and small- or medium-scale integration (SSI/MSI) chips. From the pedagogical point of view, two problems arise in these…
The Opening of Higher Education
ERIC Educational Resources Information Center
Matkin, Gary W.
2012-01-01
In a 1974 report presented to the Organisation for Economic Co-operation and Development (OECD), Martin Trow laid out a framework for understanding large-scale, worldwide changes in higher education. Trow's essay also pointed to the problems that "arise out of the transition from one phase to another in a broad pattern of development of higher…
The Use of Kruskal-Newton Diagrams for Differential Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. Fishaleck and R.B. White
2008-02-19
The method of Kruskal-Newton diagrams for the solution of differential equations with boundary layers is shown to provide rapid intuitive understanding of layer scaling and can result in the conceptual simplification of some problems. The method is illustrated using equations arising in the theory of pattern formation and in plasma physics.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
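The first-order Taylor interval idea can be illustrated independently of the finite element machinery: the response interval is approximated from the midpoint response and the gradient weighted by the parameter half-widths. The response function and the uncertain-but-bounded parameters below are hypothetical stand-ins, not the paper's structural-acoustic model.

```python
import numpy as np

def interval_taylor(f, x_mid, x_radius):
    """First-order Taylor interval propagation: the response interval is
    approximated by f(midpoint) +/- sum_i |df/dx_i| * radius_i, with the
    partial derivatives taken by central finite differences."""
    x_mid = np.asarray(x_mid, dtype=float)
    grad = np.zeros_like(x_mid)
    for i in range(x_mid.size):
        h = 1e-6 * max(abs(x_mid[i]), 1.0)    # relative finite-difference step
        e = np.zeros_like(x_mid); e[i] = h
        grad[i] = (f(x_mid + e) - f(x_mid - e)) / (2 * h)
    half_width = np.abs(grad) @ np.asarray(x_radius)
    centre = f(x_mid)
    return centre - half_width, centre + half_width

# Hypothetical response: a frequency-like quantity for a plate-like component
# as a function of (Young's modulus, density, thickness); numbers illustrative.
freq = lambda p: 0.45 * p[2] * np.sqrt(p[0] / p[1])
mid    = np.array([70e9, 2700.0, 0.004])      # nominal E [Pa], rho [kg/m3], t [m]
radius = np.array([3.5e9, 54.0, 0.0002])      # uncertain-but-bounded half-widths
print(interval_taylor(freq, mid, radius))
```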
Using CellML with OpenCMISS to Simulate Multi-Scale Physiology
Nickerson, David P.; Ladd, David; Hussan, Jagir R.; Safaei, Soroush; Suresh, Vinod; Hunter, Peter J.; Bradley, Christopher P.
2014-01-01
OpenCMISS is an open-source modeling environment aimed, in particular, at the solution of bioengineering problems. OpenCMISS consists of two main parts: a computational library (OpenCMISS-Iron) and a field manipulation and visualization library (OpenCMISS-Zinc). OpenCMISS is designed for the solution of coupled multi-scale, multi-physics problems in a general-purpose parallel environment. CellML is an XML format designed to encode biophysically based systems of ordinary differential equations and both linear and non-linear algebraic equations. A primary design goal of CellML is to allow mathematical models to be encoded in a modular and reusable format to aid reproducibility and interoperability of modeling studies. In OpenCMISS, we make use of CellML models to enable users to configure various aspects of their multi-scale physiological models. This avoids the need for users to be familiar with the OpenCMISS internal code in order to perform customized computational experiments. Examples of this are: cellular electrophysiology models embedded in tissue electrical propagation models; material constitutive relationships for mechanical growth and deformation simulations; time-varying boundary conditions for various problem domains; and fluid constitutive relationships and lumped-parameter models. In this paper, we provide implementation details describing how CellML models are integrated into multi-scale physiological models in OpenCMISS. The external interface OpenCMISS presents to users is also described, including specific examples exemplifying the extensibility and usability these tools provide the physiological modeling and simulation community. We conclude with some thoughts on future extension of OpenCMISS to make use of other community developed information standards, such as FieldML, SED-ML, and BioSignalML. Plans for the integration of accelerator code (graphical processing unit and field programmable gate array) generated from CellML models is also discussed. PMID:25601911
On a Game of Large-Scale Projects Competition
NASA Astrophysics Data System (ADS)
Nikonov, Oleg I.; Medvedeva, Marina A.
2009-09-01
The paper is devoted to game-theoretical control problems motivated by economic decision-making situations that arise in the realization of large-scale projects, such as designing and putting into operation new gas or oil pipelines. A non-cooperative two-player game is considered with payoff functions of a special type for which standard existence theorems and algorithms for finding Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
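The forward building block of such a transition-probability model can be sketched as follows. This is a generic continuous-lag Markov-chain construction, not the authors' multi-zone inversion; the maximum-entropy off-diagonal rate formula and the example numbers are illustrative assumptions. Transition probabilities over a lag h follow T(h) = expm(R h), with diagonal rates set by mean lengths and off-diagonal rates by volumetric proportions.

```python
import numpy as np
from scipy.linalg import expm

def transition_probability(lag, mean_lengths, proportions):
    """Markov-chain hydrofacies transition probabilities T(h) = expm(R*h),
    with rates built from mean lengths and volumetric proportions."""
    L = np.asarray(mean_lengths, dtype=float)
    p = np.asarray(proportions, dtype=float)
    n = L.size
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = -1.0 / L[k]
        for j in range(n):
            if j != k:
                R[k, j] = (1.0 / L[k]) * p[j] / (1.0 - p[k])
    return expm(lag * R)

# Three hydrofacies with assumed mean lengths (m) and volumetric proportions.
T = transition_probability(lag=5.0, mean_lengths=[8.0, 3.0, 5.0],
                           proportions=[0.5, 0.2, 0.3])
print(T.round(3))  # each row sums to 1
```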
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular research topic in medical imaging and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the resulting large three-dimensional optimization problem. Finally, we used a multi-scale hierarchical refinement strategy to handle large-deformation registration. The experimental results showed that the proposed algorithm performed well for large-deformation and multi-modal three-dimensional medical image registration.
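A toy one-dimensional analogue of this optimization step might look like the sketch below: an energy made of a similarity term plus a smoothness penalty on the displacement field, minimized with L-BFGS via SciPy. The plain SSD similarity term and all names are illustrative stand-ins for the gray-conservation and structure-tensor terms of the paper.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 64)
fixed = np.exp(-((x - 0.55) ** 2) / 0.01)    # "fixed" image
moving = np.exp(-((x - 0.45) ** 2) / 0.01)   # shifted "moving" image

def energy(u, lam=0.1):
    warped = np.interp(x + u, x, moving)     # warp moving image by displacement u(x)
    ssd = np.sum((warped - fixed) ** 2)      # similarity term (plain SSD here)
    smooth = np.sum(np.diff(u) ** 2)         # smoothness regularization
    return ssd + lam * smooth

res = minimize(energy, np.zeros_like(x), method="L-BFGS-B")
print("final energy:", res.fun)
```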
García-Tornel Florensa, S; Calzada, E J; Eyberg, S M; Mas Alguacil, J C; Vilamala Serra, C; Baraza Mendoza, C; Villena Collado, H; González García, M; Calvo Hernández, M; Trinxant Doménech, A
1998-05-01
Given the high prevalence of behavioral problems in the pediatric outpatient clinic, a useful and easy-to-administer tool for evaluating these problems is needed. The psychometric characteristics of the Spanish version of the Eyberg Child Behavior Inventory (ECBI) [in Spanish, Inventario de Eyberg para el Comportamiento de Niño (IECN)], a 36-item questionnaire, were established. The ECBI was translated into Spanish. The basis of the ECBI is the evaluation of the child's behavior through the parents' answers to the questionnaire. Healthy children between 2 and 12 years of age were included, drawn from pediatric outpatient clinics in urban and suburban areas of Barcelona and from our hospital's own ambulatory clinic. The final sample included 518 subjects. The mean score on the intensity scale was 96.8 and on the problem scale 3.9. Internal consistency (Cronbach's alpha) was 0.73, and test-retest reliability was r = 0.89 (p < 0.001) for the intensity scale and r = 0.93 (p < 0.001) for the problem scale. Interrater reliability was r = 0.58 (p < 0.001) for the intensity scale and r = 0.32 (p < 0.001) for the problem scale. Concurrent validity between the two scales was r = 0.343 (p < 0.001). The IECN is a useful and easy tool to apply in the pediatrician's office as a method for early detection of behavior problems.
Graphic overlays in high-precision teleoperation: Current and future work at JPL
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1989-01-01
In space teleoperation additional problems arise, including signal transmission time delays. These can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high precision performance using two force reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques of integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.
NASA Astrophysics Data System (ADS)
Torres, V.; Quek, S.; Gaydecki, P.
2010-02-01
Aging and deterioration of the main functional parts of civil structures is one of the biggest problems facing the private and governmental institutions dedicated to operating and maintaining such structures. In the case of relatively old suspension bridges, problems emerge due to corrosion and breakage of wires in the main cables. Decisive information and reliable monitoring and evaluation are highly relevant factors required to prevent significant or catastrophic damage to the structure and, more importantly, to people. The main challenge for NDE inspection methods arises in dealing with the steel wrapping barrier of the suspension cable, whose main function is to shield, shape and hold the bundles. The following work presents a study of a multi-magnetoresistive sensor system aiming to support the monitoring and evaluation of suspension cables at several of their stages. Modelling, signal acquisition, signal processing, experiments and the initial phases of implementation are presented and discussed in detail.
Multi-Agent Inference in Social Networks: A Finite Population Learning Approach
Tong, Xin; Zeng, Yao
2016-01-01
When people in a society want to make inference about some parameter, each person may want to use data collected by other people. Information (data) exchange in social networks is usually costly, so to make reliable statistical decisions, people need to trade off the benefits and costs of information acquisition. Conflicts of interest and coordination problems will arise in the process. Classical statistics does not consider people’s incentives and interactions in the data collection process. To address this imperfection, this work explores multi-agent Bayesian inference problems with a game theoretic social network model. Motivated by our interest in aggregate inference at the societal level, we propose a new concept, finite population learning, to address whether with high probability, a large fraction of people in a given finite population network can make “good” inference. Serving as a foundation, this concept enables us to study the long run trend of aggregate inference quality as population grows. PMID:27076691
Zhang, Guoqing; Zhang, Xianku; Pang, Hongshuai
2015-09-01
This research is concerned with the problem of 4 degrees of freedom (DOF) ship manoeuvring identification modelling with full-scale trial data. To avoid the multi-innovation matrix inversion in the conventional multi-innovation least squares (MILS) algorithm, a new transformed multi-innovation least squares (TMILS) algorithm is first developed by virtue of the coupling identification concept, and much effort is made to guarantee uniformly ultimate convergence. Furthermore, an auto-constructed TMILS scheme is derived for ship manoeuvring motion identification in combination with a statistical index. Compared with existing results, the proposed scheme has a significant computational advantage and is able to estimate the model structure. Illustrative examples demonstrate the effectiveness of the proposed algorithm, including in particular an identification application with full-scale trial data.
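For flavor, the recursive least squares building block that multi-innovation schemes extend can be sketched as below. This is the standard textbook RLS update, not the TMILS algorithm itself, and the first-order response model used as synthetic "trial" data is made up.

```python
import numpy as np

def recursive_least_squares(Phi, y, lam=1.0):
    """Plain recursive least squares: parameter estimate updated sample by sample.
    Phi: (N, n) regressor matrix, y: (N,) outputs, lam: forgetting factor."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e4
    for phi, yk in zip(Phi, y):
        phi = phi.reshape(-1, 1)
        K = P @ phi / (lam + phi.T @ P @ phi)      # gain vector
        theta = theta + (K * (yk - phi.flatten() @ theta)).flatten()
        P = (P - K @ phi.T @ P) / lam              # covariance update
    return theta

# Identify y_k = a*y_{k-1} + b*u_{k-1} from simulated response data.
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = 0.9 * y[k] + 0.3 * u[k] + 0.01 * rng.normal()
Phi = np.column_stack([y[:-1], u])
print(recursive_least_squares(Phi, y[1:]))         # approx. [0.9, 0.3]
```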
A novel method for a multi-level hierarchical composite with brick-and-mortar structure
Brandt, Kristina; Wolff, Michael F. H.; Salikov, Vitalij; Heinrich, Stefan; Schneider, Gerold A.
2013-01-01
The fascination with hierarchically structured hard tissues such as enamel or nacre arises from their unique structure-properties relationship. During the last decades this has motivated numerous syntheses of composites mimicking the brick-and-mortar structure of nacre. However, there is still a lack of synthetic engineering materials displaying a true hierarchical structure. Here, we present a novel multi-step processing route for anisotropic 2-level hierarchical composites that combines different coating techniques on different length scales. It comprises polymer-encapsulated ceramic particles as building blocks for the first level, followed by spouted bed spray granulation for a second level, and finally directional hot pressing to anisotropically consolidate the composite. The microstructure achieved reveals a brick-and-mortar hierarchical structure with distinct, though not yet optimized, mechanical properties on each level. It opens up a completely new processing route for the synthesis of multi-level hierarchically structured composites, giving prospects for multi-functional structure-properties relationships. PMID:23900554
The importance of structural softening for the evolution and architecture of passive margins
Duretz, T.; Petri, B.; Mohn, G.; Schmalholz, S. M.; Schenker, F. L.; Müntener, O.
2016-01-01
Lithospheric extension can generate passive margins that bound oceans worldwide. Detailed geological and geophysical studies in present and fossil passive margins have highlighted the complexity of their architecture and their multi-stage deformation history. Previous modeling studies have shown the significant impact of coarse mechanical layering of the lithosphere (2 to 4 layer crust and mantle) on passive margin formation. We build upon these studies and design high-resolution (~100–300 m) thermo-mechanical numerical models that incorporate finer mechanical layering (kilometer scale) mimicking tectonically inherited heterogeneities. During lithospheric extension a variety of extensional structures arises naturally due to (1) structural softening caused by necking of mechanically strong layers and (2) the establishment of a network of weak layers across the deforming multi-layered lithosphere. We argue that structural softening in a multi-layered lithosphere is the main cause for the observed multi-stage evolution and architecture of magma-poor passive margins. PMID:27929057
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters are widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in-first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid the error propagation induced by sparks on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a 3-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute a multi-scale 2-D Gaussian filtering operation within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
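The separability trick mentioned above is easy to state in software terms: a 2-D Gaussian convolution equals a 1-D convolution along the rows followed by one along the columns. A minimal NumPy/SciPy sketch is given below; it is software only and does not reflect the FPGA pipeline, CAFIFO, or shared-memory details.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma, radius=None):
    """Sampled 1-D Gaussian kernel, normalized to unit sum."""
    if radius is None:
        radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def separable_gaussian(image, sigma):
    """2-D Gaussian filtering as two 1-D convolutions (rows, then columns),
    the same decomposition used to save multipliers in hardware."""
    k = gaussian_kernel_1d(sigma)
    tmp = convolve1d(image, k, axis=1, mode="reflect")
    return convolve1d(tmp, k, axis=0, mode="reflect")

img = np.random.rand(128, 128)
multi_scale = [separable_gaussian(img, s) for s in (1.0, 2.0, 4.0)]  # 3 scales
```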
The trend of the multi-scale temporal variability of precipitation in Colorado River Basin
NASA Astrophysics Data System (ADS)
Jiang, P.; Yu, Z.
2011-12-01
Hydrological problems such as the estimation of flood and drought frequencies under future climate change are not well addressed, owing to the inability of current climate models to provide reliable predictions (especially for precipitation) at time scales shorter than 1 month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation, as well as of trends in extreme precipitation, is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability, including within-storm patterns and intra-annual, inter-annual and decadal variabilities, will be analyzed to explore possible trends in storm durations, inter-storm periods, average storm precipitation intensities and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate future precipitation scenarios, providing more reliable and finer temporal scale precipitation time series for hydrological modeling. Analysis results and downscaling results will be presented.
NASA Astrophysics Data System (ADS)
Metzger, E. P.; Curren, R. R.
2016-12-01
Effective engagement with the problems of sustainability begins with an understanding of the nature of the challenges. The entanglement of interacting human and Earth systems produces solution-resistant dilemmas that are often portrayed as wicked problems. As introduced by urban planners Rittel and Webber (1973), wicked problems are "dynamically complex, ill-structured, public problems" arising from complexity in both biophysical and socio-economic systems. The wicked problem construct is still in wide use across diverse contexts, disciplines, and sectors. Discourse about wicked problems as related to sustainability is often connected to discussion of complexity or complex systems. In preparation for life and work in an uncertain, dynamic and hyperconnected world, students need opportunities to investigate real problems that cross social, political and disciplinary divides. They need to grapple with diverse perspectives and values, and collaborate with others to devise potential solutions. Such problems are typically multi-causal and so intertangled with other problems that they cannot be resolved using the expertise and analytical tools of any single discipline, individual, or organization. We have developed a trio of illustrative case studies that focus on energy, water and food, because these resources are foundational, interacting, and causally connected in a variety of ways with climate destabilization. The three interrelated case studies progress in scale from the local and regional, to the national and international and include: 1) the 2010 Gulf of Mexico oil spill with examination of the multiple immediate and root causes of the disaster, its ecological, social, and economic impacts, and the increasing risk and declining energy return on investment associated with the relentless quest for fossil fuels; 2) development of Australia's innovative National Water Management System; and 3) changing patterns of food production and the intertwined challenge of managing transnational water resources in the rapidly growing Mekong Region of Southeast Asia.
NASA Astrophysics Data System (ADS)
Kalanov, Temur Z.
2015-04-01
Analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. Statement of the problem is as follows. As is known, point O in the Cartesian coordinate system XOY determines the position of zero on the scale. The number "zero" belongs to both the scale of positive numbers and the scale of negative numbers. In this case, the following formal-logical contradiction arises: the number 0 is both a positive number and a negative number; or, equivalently, the number 0 is neither a positive number nor a negative number, i.e. the number 0 has no sign. Then the following question arises: Do negative numbers exist in science and practice? A detailed analysis of the problem shows that negative numbers do not exist because the foundations of the theory of negative numbers are contrary to the formal-logical laws. It is proved that: (a) all numbers have no signs; (b) the concepts "negative number" and "negative sign of number" represent a formal-logical error; (c) the signs "plus" and "minus" are only symbols of mathematical operations. The logical errors determine the essence of the theory of negative numbers: the theory of negative numbers is a false theory.
Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy
NASA Astrophysics Data System (ADS)
Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping
2018-01-01
Prediction errors in renewable energy generation, such as wind and solar power, make power system dispatch difficult. In this paper, a multi-time scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean-energy prediction errors on the power grid by using multiple time scales (day-ahead, intraday, real time) and by coordinating the dispatched output of various power sources such as hydropower, thermal power, wind power and gas power. The method adopts a robust scheduling formulation to ensure the robustness of the scheduling scheme. By calculating the costs of wind curtailment and lost load, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
Boundary Korn Inequality and Neumann Problems in Homogenization of Systems of Elasticity
NASA Astrophysics Data System (ADS)
Geng, Jun; Shen, Zhongwei; Song, Liang
2017-06-01
This paper is concerned with a family of elliptic systems of linear elasticity with rapidly oscillating periodic coefficients, arising in the theory of homogenization. We establish uniform optimal regularity estimates for solutions of Neumann problems in a bounded Lipschitz domain with L^2 boundary data. The proof relies on a boundary Korn inequality for solutions of systems of linear elasticity and uses a large-scale Rellich estimate obtained in Shen (Anal PDE, arXiv:1505.00694v2).
Realistic Modeling of Multi-Scale MHD Dynamics of the Solar Atmosphere
NASA Technical Reports Server (NTRS)
Kitiashvili, Irina; Mansour, Nagi N.; Wray, Alan; Couvidat, Sebastian; Yoon, Seokkwan; Kosovichev, Alexander
2014-01-01
Realistic 3D radiative MHD simulations open new perspectives for understanding the turbulent dynamics of the solar surface, its coupling to the atmosphere, and the physical mechanisms of generation and transport of non-thermal energy. Traditionally, plasma eruptions and wave phenomena in the solar atmosphere are modeled by prescribing artificial driving mechanisms using magnetic or gas pressure forces that might arise from magnetic field emergence or reconnection instabilities. In contrast, our 'ab initio' simulations provide a realistic description of solar dynamics naturally driven by solar energy flow. By simulating the upper convection zone and the solar atmosphere, we can investigate in detail the physical processes of turbulent magnetoconvection, generation and amplification of magnetic fields, excitation of MHD waves, and plasma eruptions. We present recent simulation results of the multi-scale dynamics of quiet-Sun regions, and energetic effects in the atmosphere and compare with observations. For the comparisons we calculate synthetic spectro-polarimetric data to model observational data of SDO, Hinode, and New Solar Telescope.
The Shock Dynamics of Heterogeneous YSO Jets: 3D Simulations Meet Multi-epoch Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, E. C.; Frank, A.; Hartigan, P.
High-resolution observations of young stellar object (YSO) jets show them to be composed of many small-scale knots or clumps. In this paper, we report results of 3D numerical simulations designed to study how such clumps interact and create morphologies and kinematic patterns seen in emission line observations. Our simulations focus on clump scale dynamics by imposing velocity differences between spherical, over-dense regions, which then lead to the formation of bow shocks as faster clumps overtake slower material. We show that much of the spatial structure apparent in emission line images of jets arises from the dynamics and interactions of these bow shocks. Our simulations show a variety of time-dependent features, including bright knots associated with Mach stems where the shocks intersect, a “frothy” emission structure that arises from the presence of the Nonlinear Thin Shell Instability along the surfaces of the bow shocks, and the merging and fragmentation of clumps. Our simulations use a new non-equilibrium cooling method to produce synthetic emission maps in H α and [S ii]. These are directly compared to multi-epoch Hubble Space Telescope observations of Herbig–Haro jets. We find excellent agreement between features seen in the simulations and the observations in terms of both proper motion and morphologies. Thus we conclude that YSO jets may be dominated by heterogeneous structures and that interactions between these structures and the shocks they produce can account for many details of YSO jet evolution.
A Minimum-Residual Finite Element Method for the Convection-Diffusion Equation
2013-05-01
We note that these two choices of discretization for V are not mutually exclusive, and that novel choices for V_h are likely the key to yielding... the inside with the positive-definite operator A, which is precisely the discrete system that arises under the optimal test function framework of DPG... converts the fine-scale problem into a symmetric positive-definite one, allowing for a well-behaved subgrid model of fine-scale behavior.
Achenbach, Thomas M; Ivanova, Masha Y; Rescorla, Leslie A
2017-11-01
Originating in the 1960s, the Achenbach System of Empirically Based Assessment (ASEBA) comprises a family of instruments for assessing problems and strengths for ages 1½-90+ years. This article provides an overview of the ASEBA, related research, and future directions for empirically based assessment and taxonomy. Standardized, multi-informant ratings of transdiagnostic dimensions of behavioral, emotional, social, and thought problems are hierarchically scored on narrow-spectrum syndrome scales, broad-spectrum internalizing and externalizing scales, and a total problems (general psychopathology) scale. DSM-oriented and strengths scales are also scored. The instruments and scales have been iteratively developed from assessments of clinical and population samples of hundreds of thousands of individuals. Items, instruments, scales, and norms are tailored to different kinds of informants for ages 1½-5, 6-18, 18-59, and 60-90+ years. To take account of differences between informants' ratings, parallel instruments are completed by parents, teachers, youths, adult probands, and adult collaterals. Syndromes and Internalizing/Externalizing scales derived from factor analyses of each instrument capture variations in patterns of problems that reflect different informants' perspectives. Confirmatory factor analyses have supported the syndrome structures in dozens of societies. Software displays scale scores in relation to user-selected multicultural norms for the age and gender of the person being assessed, according to ratings by each type of informant. Multicultural norms are derived from population samples in 57 societies on every inhabited continent. Ongoing and future research includes multicultural assessment of elders; advancing transdiagnostic progress and outcomes assessment; and testing higher order structures of psychopathology.
Building a Model of Support for Preschool Children with Speech and Language Disorders
ERIC Educational Resources Information Center
Robertson, Natalie; Ohi, Sarah
2016-01-01
Speech and language disorders impede young children's abilities to communicate and are often associated with a number of behavioural problems arising in the preschool classroom. This paper reports a small-scale study that investigated 23 Australian educators' and 7 Speech Pathologists' experiences in working with three to five year old children…
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental process in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image followed by Radon transform on a weighted edge map and computing the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, for 84 challenging maritime videos, containing over 33,000 frames, and captured using visible range and near-infrared range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos of visible, infrared cameras for onshore and onboard ship camera placement.
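A highly simplified sketch of the core idea (multi-scale smoothing, an edge map, and a Radon-domain peak as the horizon candidate) is given below. It omits the weighting, consistency checks and refinement of the actual MuSCoWERT pipeline, and the scales, angle range and stand-in input are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import sobel
from skimage.transform import radon

def rough_horizon(gray, scales=(3, 7, 15)):
    """Simplified MuSCoWERT-style detector: smooth at several scales, build
    edge maps, and take the strongest near-horizontal line in the accumulated
    Radon transform as the horizon candidate."""
    theta = np.linspace(60.0, 120.0, 121)           # near-horizontal lines only
    acc = None
    for s in scales:
        edges = sobel(median_filter(gray, size=s))  # edge strength at this scale
        sino = radon(edges, theta=theta, circle=False)
        acc = sino if acc is None else acc + sino   # accumulate across scales
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return theta[t_idx], r_idx                      # angle (deg) and offset bin

gray = np.random.rand(200, 300)                     # stand-in for a video frame
print(rough_horizon(gray))
```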
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
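To make the flavor of such schemes concrete, here is a first-order, one-dimensional Lax-Friedrichs-type central step for phi_t + H(phi_x) = 0 with H(p) = p^2/2. The paper's schemes are multi-dimensional and second-order, so this is only an illustrative reduction with made-up grid parameters.

```python
import numpy as np

N, L = 200, 2 * np.pi
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
phi = np.sin(x)                      # periodic initial data
H = lambda p: 0.5 * p ** 2
alpha = np.max(np.abs(np.cos(x)))    # bound on |H'(phi_x)| = |phi_x|
dt = 0.4 * dx / alpha                # satisfies the monotonicity/CFL condition

for _ in range(int(0.5 / dt)):       # evolve to t ~ 0.5
    pr = np.roll(phi, -1)            # phi_{j+1}
    pl = np.roll(phi, 1)             # phi_{j-1}
    grad = (pr - pl) / (2.0 * dx)    # central slope
    phi = phi - dt * H(grad) + 0.5 * alpha * dt / dx * (pr - 2 * phi + pl)

print(phi.min(), phi.max())
```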
New Results in N = 2 Theories from Non-perturbative String
NASA Astrophysics Data System (ADS)
Bonelli, Giulio; Grassi, Alba; Tanzini, Alessandro
2018-03-01
We describe the magnetic phase of SU(N) N = 2 Super Yang-Mills theories in the self-dual Omega background in terms of a new class of multi-cut matrix models. These arise from a non-perturbative completion of topological strings in the dual four-dimensional limit which engineers the gauge theory in the strongly coupled magnetic frame. The corresponding spectral determinants provide natural candidates for the tau functions of isomonodromy problems for flat spectral connections associated to the Seiberg-Witten geometry.
Is Depression Simply a Nonspecific Response to Brain Injury?
Strakowski, Stephen M.; Adler, Caleb M.; DelBello, Melissa P.
2013-01-01
Depressive disorders are among the most common ailments affecting humankind and some of the world’s leading causes of medical disability. Despite being common, disabling and a major public health problem, the etiology of depression is unknown. Indeed, investigators have suggested that the causes of depression are multiple and multi-factorial. With these considerations in mind, in this article we examine the hypothesis that our inability to identify the causes of depressive disorders is because depression is a nonspecific epiphenomenon of brain injury or insult arising through multiple pathways. PMID:23943470
DEMONSTRATION OF A MULTI-SCALE INTEGRATED MONITORING AND ASSESSMENT IN NY/NJ HARBOR
The Clean Water Act (CWA) requires states and tribes to assess the overall quality of their waters (Sec 305(b)), determine whether that quality is changing over time, identify problem areas and management actions necessary to resolve those problems, and evaluate the effectiveness...
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new horizontal crown-cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The core of the method is an automatic strategy for selecting feature scales on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) that extracts morphological multi-scale features of the CHM by: cutting the CHM from the treetop to the ground; analysing and refining the dominant scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a marker-controlled watershed segmentation approach with the MRCD treetops as markers. This method solves the problem of false detections on CHM side surfaces produced by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an airborne LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
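A generic baseline for the treetop-plus-watershed part of such a pipeline (not the MRCD scale selection itself) can be written with scikit-image in a few lines; the thresholds and the synthetic CHM below are illustrative assumptions.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def delineate_crowns(chm, min_height=2.0, min_distance=3):
    """Marker-controlled watershed crown delineation on a canopy height model:
    local maxima serve as treetop markers, and crowns are grown on the
    inverted CHM within the above-ground mask."""
    mask = chm > min_height                              # ignore ground/shrubs
    coords = peak_local_max(chm, min_distance=min_distance,
                            threshold_abs=min_height)    # treetop candidates
    markers = np.zeros(chm.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-chm, markers, mask=mask)           # labeled crown image

# Synthetic CHM with two Gaussian "crowns".
yy, xx = np.mgrid[0:100, 0:100]
chm = 10 * np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 50.0) \
    + 8 * np.exp(-((yy - 70) ** 2 + (xx - 65) ** 2) / 80.0)
labels = delineate_crowns(chm)
print(labels.max(), "crowns detected")
```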
Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.
2016-01-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously-distributed habitat, and should be applicable across a broad range of taxa.
NASA Astrophysics Data System (ADS)
Gurin, A. M.; Kovalev, O. B.
2013-06-01
The work is devoted to the mathematical modelling and numerical solution of problems of conjugate micro-convection, which arises under the action of laser radiation in a metal melt with surface-active refractory disperse components added for the modification, hardening, and doping of the treated surface. A multi-vortex structure of the melt flow has been obtained, in which the number of vortices depends on the variation of the surface tension and on the temperature and power of the laser radiation. Special attention is paid to the numerical modelling of the behaviour in the substrate melt of a disperse admixture consisting of tungsten carbide particles. The role of micro-convection in the distribution of powder particles in the surface layer of the substrate after its cooling is shown.
Multi-resource and multi-scale approaches for meeting the challenge of managing multiple species
Frank R. Thompson; Deborah M. Finch; John R. Probst; Glen D. Gaines; David S. Dobkin
1999-01-01
The large number of Neotropical migratory bird (NTMB) species and their diverse habitat requirements create conflicts and difficulties for land managers and conservationists. We provide examples of assessments or conservation efforts that attempt to address the problem of managing for multiple NTMB species. We advocate approaches at a variety of spatial and geographic...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ~ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the finite-element based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Optimization-based mesh correction with volume and convexity constraints
D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...
2016-02-24
In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
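On a single cell, the constrained formulation reads: minimize the squared distance to the source vertices subject to a prescribed cell volume (area in 2-D). The sketch below solves that tiny instance with SciPy's SLSQP; the full method applies an SQP solver with a multigrid preconditioner to whole meshes, which is not reproduced here, and all names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Source quadrilateral (counter-clockwise vertices) and a prescribed target area.
source = np.array([[0.0, 0.0], [1.0, 0.0], [1.1, 1.0], [0.0, 1.0]])
target_area = 1.25

def area(v):
    """Signed polygon area via the shoelace formula."""
    v = v.reshape(4, 2)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

objective = lambda v: np.sum((v - source.ravel()) ** 2)     # stay close to source
cons = {"type": "eq", "fun": lambda v: area(v) - target_area}

res = minimize(objective, source.ravel(), method="SLSQP", constraints=[cons])
print("corrected area:", area(res.x))
```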
Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise
Murakami, Hisashi
2018-01-01
Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude the collective behavior: intuitively, we expect that noisy behavior would lead to the group being destabilized and broken into small sub groups, and high alignment seems to preclude shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concept in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, the Lévy walk in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions. PMID:29689074
2017-01-01
The review is devoted to the physical, chemical, and technological aspects of the breath-figure self-assembly process. The main stages of the process and impact of the polymer architecture and physical parameters of breath-figure self-assembly on the eventual pattern are covered. The review is focused on the hierarchy of spatial and temporal scales inherent to breath-figure self-assembly. Multi-scale patterns arising from the process are addressed. The characteristic spatial lateral scales of patterns vary from nanometers to dozens of micrometers. The temporal scale of the process spans from microseconds to seconds. The qualitative analysis performed in the paper demonstrates that the process is mainly governed by interfacial phenomena, whereas the impact of inertia and gravity are negligible. Characterization and applications of polymer films manufactured with breath-figure self-assembly are discussed. PMID:28813026
A Multi-Scale Algorithm for Graffito Advertisement Detection from Images of Real Estate
NASA Astrophysics Data System (ADS)
Yang, Jun; Zhu, Shi-Jiao
There is a significant need to automatically detect and extract graffito advertisements embedded in housing images. However, it is hard to separate the advertisement region well, since housing images generally have complex backgrounds. In this paper, a detection algorithm that uses multi-scale Gabor filters to identify graffito regions is proposed. Firstly, multi-scale Gabor filters with different orientations are applied to the housing images; the approach then uses the resulting frequency data to find likely graffito regions from the relationships among the different channels, exploiting the complementary responses of the different filters to solve the detection problem with low computational effort. Lastly, the method is tested on several real estate images containing embedded graffito advertisements to verify its robustness and efficiency.
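A minimal sketch of the multi-scale, multi-orientation Gabor filtering stage follows; the region-scoring and detection logic of the paper is not reproduced, and the frequencies, orientations and crude thresholding are illustrative assumptions.

```python
import numpy as np
from skimage.filters import gabor

def gabor_energy_stack(image,
                       frequencies=(0.1, 0.2, 0.4),
                       thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Multi-scale, multi-orientation Gabor responses: one magnitude map per
    (frequency, orientation) channel."""
    stack = []
    for f in frequencies:
        for th in thetas:
            real, imag = gabor(image, frequency=f, theta=th)
            stack.append(np.hypot(real, imag))   # channel magnitude
    return np.stack(stack, axis=0)               # (channels, H, W)

img = np.random.rand(120, 160)                   # stand-in for a listing photo
channels = gabor_energy_stack(img)
candidate = channels.max(axis=0) > np.percentile(channels, 99)  # crude text-like mask
```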
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for a linear multi-agent system with fixed communication topology in the presence of intermittent communication, using time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
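The setting can be illustrated numerically as below: agents integrate the usual consensus dynamics while the network is "on" and hold their states while it is "off". The topology, duty cycle and step size are made-up values, and the sketch does not reproduce the time-scale (measure-chain) analysis of the paper.

```python
import numpy as np

# Laplacian of a path graph on 4 agents and arbitrary initial states.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
x = np.array([3.0, -1.0, 0.5, 2.0])
dt, t = 0.01, 0.0

for step in range(4000):
    on = (t % 1.0) < 0.6              # communicate during 60% of each 1 s period
    if on:
        x = x + dt * (-L @ x)         # local interaction rule while "on"
    t += dt                           # states are simply held while "off"

print(x)                              # states cluster near the initial average 1.125
```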
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means of providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper combines permutation entropy with a support vector machine to detect low-SNR microseismic events. First, a feature extraction method based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by computing the multi-scale permutation entropy of the collected vibration signals and constructing a feature vector set. Finally, a comparative analysis of microseismic events and noise signals in the experiment shows that the differing characteristics of the two can be fully expressed by multi-scale permutation entropy. The resulting detection model, which combines these features with the support vector machine and offers high classification accuracy and fast computation, can meet the requirements of online, real-time extraction of microseismic events.
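A self-contained sketch of the feature pipeline (multi-scale coarse-graining, permutation entropy per scale, then a support vector classifier) is shown below; the toy signals are synthetic, and scikit-learn's standard SVC is used in place of the least squares SVM of the paper.

```python
import numpy as np
from itertools import permutations
from sklearn.svm import SVC

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[patterns.index(tuple(int(v) for v in np.argsort(window)))] += 1
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(len(patterns))

def multiscale_pe(x, scales=(1, 2, 4, 8), order=3):
    """Coarse-grain the signal at several scales and compute PE per scale."""
    feats = []
    for s in scales:
        coarse = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        feats.append(permutation_entropy(coarse, order))
    return np.array(feats)

# Toy training set: "events" are noisy decaying tones, "noise" is white noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
events = [np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t) + 0.3 * rng.normal(size=t.size)
          for _ in range(40)]
noise = [rng.normal(size=t.size) for _ in range(40)]
X = np.array([multiscale_pe(s) for s in events + noise])
y = np.array([1] * 40 + [0] * 40)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```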
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cells populations that extends the remit of existing hybrid methods for reaction-diffusion systems. Such method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. Such approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we are neglecting noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic-reaction diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
Relative locality and the soccer ball problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amelino-Camelia, Giovanni; Freidel, Laurent; Smolin, Lee
We consider the behavior of macroscopic bodies within the framework of relative locality [G. Amelino-Camelia, L. Freidel, J. Kowalski-Glikman, and L. Smolin, arXiv:1101.0931]. This is a recent proposal for Planck scale modifications of the relativistic dynamics of particles which are described as arising from deformations in the geometry of momentum space. We consider and resolve a common objection against such proposals, which is that, even if the corrections are small for elementary particles in current experiments, they are huge when applied to composite systems such as soccer balls, planets, and stars, with energies E_macro much larger than M_P. We show that this soccer ball problem does not arise within the framework of relative locality because the nonlinear effects for the dynamics of a composite system with N elementary particles appear at most at order E_macro/(N·M_P).
A Systematic Multi-Time Scale Solution for Regional Power Grid Operation
NASA Astrophysics Data System (ADS)
Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.
2017-10-01
Many aspects need to be taken into consideration in a regional grid while making schedule plans. In this paper, a systematic multi-time scale solution for regional power grid operation considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In terms of time scales, the problem is addressed from monthly and weekly scheduling through day-ahead and within-day operation to day-behind analysis, and the system also handles multiple generator types including thermal units, hydro-plants, wind turbines and pumped storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results have further verified the effectiveness of the system.
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, which arises from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art systems.
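The 0-1 knapsack selection of base classifiers mentioned above can be illustrated with a minimal sketch. Treating values as validation accuracies, weights as integer redundancy scores and the budget as a redundancy cap is an assumption made here for illustration; the paper's tailored knapsack formulation is not reproduced.

```python
def knapsack_select(accuracies, redundancies, budget):
    """Select base classifiers by 0-1 knapsack dynamic programming.

    accuracies  : validation accuracies of the candidate classifiers (values)
    redundancies: integer redundancy scores (weights); how such scores are
                  defined is an assumption here, not the paper's formulation
    budget      : maximum total redundancy allowed in the ensemble
    Returns the indices of the selected classifiers.
    """
    n = len(accuracies)
    dp = [0.0] * (budget + 1)                      # best accuracy per capacity
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for w in range(budget, redundancies[i] - 1, -1):
            cand = dp[w - redundancies[i]] + accuracies[i]
            if cand > dp[w]:
                dp[w] = cand
                keep[i][w] = True
    selected, w = [], budget                       # backtrack the decisions
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            selected.append(i)
            w -= redundancies[i]
    return selected[::-1]

# toy usage: four candidate classifiers, redundancy budget of 5
print(knapsack_select([0.81, 0.78, 0.80, 0.75], [3, 2, 2, 1], budget=5))
```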
A new multi-scale method to reveal hierarchical modular structures in biological networks.
Jiao, Qing-Ju; Huang, Yan; Shen, Hong-Bin
2016-11-15
Biological networks are effective tools for studying molecular interactions. Modular structure, in which genes or proteins may tend to be associated with functional modules or protein complexes, is a remarkable feature of biological networks. Mining modular structure from biological networks enables us to focus on a set of potentially important nodes, which provides a reliable guide to future biological experiments. The first fundamental challenge in mining modular structure from biological networks is that the quality of the observed network data is usually low owing to noise and incompleteness in the obtained networks. The second problem that poses a challenge to existing approaches to the mining of modular structure is that the organization of both functional modules and protein complexes in networks is far more complicated than was ever thought. For instance, the sizes of different modules vary considerably from each other and they often form multi-scale hierarchical structures. To solve these problems, we propose a new multi-scale protocol for mining modular structure (named ISIMB) driven by a node similarity metric, which works in an iteratively converged space to reduce the effects of the low data quality of the observed network data. The multi-scale node similarity metric couples both the local and the global topology of the network with a resolution regulator. By varying this resolution regulator to give different weightings to the local and global terms in the metric, the ISIMB method is able to fit the shape of modules and to detect them on different scales. Experiments on protein-protein interaction and genetic interaction networks show that our method can not only mine functional modules and protein complexes successfully, but can also predict functional modules from specific to general and reveal the hierarchical organization of protein complexes.
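As a rough illustration of a node-similarity metric that blends local and global topology through a resolution regulator, the sketch below combines a Jaccard neighbourhood overlap with an inverse hop-distance term. This specific combination is an assumption for illustration only and is not the published ISIMB metric.

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path (hop) distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def similarity(adj, u, v, resolution):
    """Blend a local and a global topological term.

    local : Jaccard overlap of immediate neighbourhoods
    global: inverse hop distance between the nodes
    resolution in [0, 1] weights global vs. local information
    (an illustrative choice, not the ISIMB metric itself).
    """
    nu, nv = set(adj[u]), set(adj[v])
    local = len(nu & nv) / len(nu | nv) if (nu | nv) else 0.0
    dist = bfs_distances(adj, u).get(v)
    global_term = 1.0 / dist if dist else 0.0
    return (1.0 - resolution) * local + resolution * global_term

# toy graph: two triangles joined by a bridge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(similarity(adj, 0, 1, resolution=0.2))  # dominated by local structure
print(similarity(adj, 0, 5, resolution=0.8))  # dominated by global structure
```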
Natural little hierarchy for SUSY from radiative breaking of the Peccei-Quinn symmetry
NASA Astrophysics Data System (ADS)
Bae, Kyu Jung; Baer, Howard; Serce, Hasan
2015-01-01
While LHC8 Higgs mass and sparticle search constraints favor multi-TeV values of the soft SUSY breaking terms, electroweak naturalness favors a superpotential Higgsino mass μ ~ 100-200 GeV: the mismatch results in an apparent little hierarchy characterized by μ ≪ m_soft (with m_soft ~ m_{3/2} in gravity mediation). It has been suggested that the little hierarchy arises from a mismatch between Peccei-Quinn (PQ) and hidden sector intermediate scales, v_PQ ≪ m_hidden. We examine the Murayama-Suzuki-Yanagida model of radiatively driven PQ symmetry breaking which not only generates a weak scale value of μ but also produces intermediate scale Majorana masses for right-handed neutrinos. For this model, we show ranges of parameter choices with multi-TeV values of m_{3/2} which can easily generate values of μ ~ 100-200 GeV, so that the apparent little hierarchy suggested by the data emerges quite naturally. In such a scenario, dark matter would be comprised of an axion plus a Higgsino-like weakly interacting massive particle admixture, where the axion mass and Higgsino masses are linked by the value of the PQ scale. The required light Higgsinos should ultimately be detected at a linear e+e- collider with √s > 2 m(Higgsino).
On a theorem of existence for scaling problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osmolovskii, V.G.
1995-12-05
The authors study the question of the existence of the global minimum of the functional over the set of functions, where Ω ⊂ R^n is a bounded domain and a fixed function K(x,y) = K(y,x) belongs to L_2(Ω × Ω). Such functionals arise in some mathematical models of economics and sociology.
Improved Flux Formulations for Unsteady Low Mach Number Flows
2012-07-01
challenging problem since it requires the resolution of disparate time scales. Unsteady effects may arise from a combination of hydrodynamic effects...Many practical applications including rotorcraft flows, jets and shear layers include a combination of both acoustic and hydrodynamic effects...are computed independently as scalar formulations thus making it possible to independently tailor the dissipation for hydrodynamic and acoustic
SUSY’s Ladder: Reframing sequestering at Large Volume
Reece, Matthew; Xue, Wei
2016-04-07
Theories with approximate no-scale structure, such as the Large Volume Scenario, have a distinctive hierarchy of multiple mass scales in between TeV gaugino masses and the Planck scale, which we call SUSY's Ladder. This is a particular realization of Split Supersymmetry in which the same small parameter suppresses gaugino masses relative to scalar soft masses, scalar soft masses relative to the gravitino mass, and the UV cutoff or string scale relative to the Planck scale. This scenario has many phenomenologically interesting properties, and can avoid dangers including the gravitino problem, flavor problems, and the moduli-induced LSP problem that plague other supersymmetric theories. We study SUSY's Ladder using a superspace formalism that makes the mysterious cancelations in previous computations manifest. This opens the possibility of a consistent effective field theory understanding of the phenomenology of these scenarios, based on power-counting in the small ratio of string to Planck scales. We also show that four-dimensional theories with approximate no-scale structure enforced by a single volume modulus arise only from two special higher-dimensional theories: five-dimensional supergravity and ten-dimensional type IIB supergravity. As a result, this gives a phenomenological argument in favor of ten dimensional ultraviolet physics which is different from standard arguments based on the consistency of superstring theory.
Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Goel, Narendra S.
1995-01-01
Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges that are associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linear and non-linear relationships between observations and parameters affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects will be compounded in analyzing multiple scaled TIR data. Last, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.
[The function, activity and participation: the occupational reintegration].
Zampolini, Mauro
2015-01-01
The return to work is a significant outcome after amputation. To reach this goal it is necessary to measure the process properly. Unfortunately, for amputees, different scales are available but they are often focused on specific groups of problems. The International Classification of Functioning (ICF) can constitute the frame of reference where the available scales converge and according to which problems related to disability are defined. For the amputated person, the issue of returning to work arises differently for traumatic and non-traumatic conditions. For the former, return to work is a priority given the younger age of patients. For the latter, given the more advanced age, return to work is likely to be more a measure of the success of rehabilitation than a particularly relevant goal in itself.
Understanding electrical conduction in lithium ion batteries through multi-scale modeling
NASA Astrophysics Data System (ADS)
Pan, Jie
Silicon (Si) has been considered as a promising negative electrode material for lithium ion batteries (LIBs) because of its high theoretical capacity, low discharge voltage, and low cost. However, the utilization of Si electrode has been hampered by problems such as slow ionic transport, large stress/strain generation, and unstable solid electrolyte interphase (SEI). These problems severely influence the performance and cycle life of Si electrodes. In general, ionic conduction determines the rate performance of the electrode, while electron leakage through the SEI causes electrolyte decomposition and, thus, causes capacity loss. The goal of this thesis research is to design Si electrodes with high current efficiency and durability through a fundamental understanding of the ionic and electronic conduction in Si and its SEI. Multi-scale physical and chemical processes occur in the electrode during charging and discharging. This thesis, thus, focuses on multi-scale modeling, including developing new methods, to help understand these coupled physical and chemical processes. For example, we developed a new method based on ab initio molecular dynamics to study the effects of stress/strain on Li ion transport in amorphous lithiated Si electrodes. This method not only quantitatively shows the effect of stress on ionic transport in amorphous materials, but also uncovers the underlying atomistic mechanisms. However, the origin of ionic conduction in the inorganic components in SEI is different from that in the amorphous Si electrode. To tackle this problem, we developed a model by separating the problem into two scales: 1) atomistic scale: defect physics and transport in individual SEI components with consideration of the environment, e.g., LiF in equilibrium with Si electrode; 2) mesoscopic scale: defect distribution near the heterogeneous interface based on a space charge model. In addition, to help design better artificial SEI, we further demonstrated a theoretical design of multicomponent SEIs by utilizing the synergetic effect found in the natural SEI. We show that the electrical conduction can be optimized by varying the grain size and volume fraction of two phases in the artificial multicomponent SEI.
Metadata and annotations for multi-scale electrophysiological data.
Bower, Mark R; Stead, Matt; Brinkmann, Benjamin H; Dufendach, Kevin; Worrell, Gregory A
2009-01-01
The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for "big data" files is presented.
Causality and correlations between BSE and NYSE indexes: A Janus faced relationship
NASA Astrophysics Data System (ADS)
Neeraj; Panigrahi, Prasanta K.
2017-09-01
We study the multi-scale temporal correlations and causality connections between the New York Stock Exchange (NYSE) and Bombay Stock Exchange (BSE) monthly average closing price indexes for a period of 300 months, encompassing the time period of the liberalisation of the Indian economy and its gradual global exposure. In the multi-scale analysis, clearly identifiable 1, 2 and 3 year non-stationary periodic modulations in NYSE and BSE have been observed, with changes in NYSE being commensurate with those in BSE at the 3 year scale. Interestingly, at the one year time scale, the two exchanges are phase locked only during turbulent times, while at the scale of three years the in-phase behaviour is observed for a much longer time frame. The two year time period, having characteristics of both one and three year variations, acts as the transition regime. The normalised NYSE stock index is found to Granger cause that of BSE, with a time lag of 9 months. Surprisingly, the observed Granger causality of high frequency variations reveals BSE behaviour getting reflected in the NYSE index fluctuations after a smaller time lag. This Janus-faced relationship shows that smaller stock exchanges may provide a natural setting for simulating market fluctuations of much bigger exchanges. This possibly arises from the fact that high frequency fluctuations form a universal part of financial time series and are expected to exhibit similar characteristics in open market economies.
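A Granger-causality test of the kind referred to above can be run, for example, with statsmodels; the sketch below uses synthetic monthly series with a built-in 9-month lag rather than the NYSE/BSE data, and the lag choices are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)

# synthetic monthly series: y follows x with a 9-month delay
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(9, n):
    y[t] = 0.6 * x[t - 9] + 0.2 * rng.normal()

# statsmodels convention: the test asks whether column 2 Granger-causes column 1
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=12, verbose=False)
for lag in (3, 9, 12):
    pval = results[lag][0]["ssr_ftest"][1]
    print(f"lag {lag:2d}: p-value of F-test = {pval:.3g}")
```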
Effective surface and boundary conditions for heterogeneous surfaces with mixed boundary conditions
NASA Astrophysics Data System (ADS)
Guo, Jianwei; Veran-Tissoires, Stéphanie; Quintard, Michel
2016-01-01
To deal with multi-scale problems involving transport from a heterogeneous and rough surface characterized by a mixed boundary condition, an effective surface theory is developed, which replaces the original surface by a homogeneous and smooth surface with specific boundary conditions. A typical example corresponds to a laminar flow over a soluble salt medium which contains insoluble material. To develop the concept of effective surface, a multi-domain decomposition approach is applied. In this framework, velocity and concentration at micro-scale are estimated with an asymptotic expansion of deviation terms with respect to macro-scale velocity and concentration fields. Closure problems for the deviations are obtained and used to define the effective surface position and the related boundary conditions. The evolution of some effective properties and the impact of surface geometry, Péclet, Schmidt and Damköhler numbers are investigated. Finally, comparisons are made between the numerical results obtained with the effective models and those from direct numerical simulations with the original rough surface, for two kinds of configurations.
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
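The second level of the algorithm, a master-slave distribution of per-element microscopic problems with dynamic scheduling, can be caricatured with Python's multiprocessing as a stand-in for the actual MPI/shared-memory implementation; the function and cost model below are placeholders, not the FE^2 solver.

```python
from multiprocessing import Pool

def solve_micro_problem(macro_element):
    """Stand-in for solving one RVE problem attached to a macroscopic element.

    In an FE^2 code this would return the homogenized stress and tangent for
    the given macroscopic deformation; here we only mimic the uneven cost
    per element that motivates dynamic scheduling.
    """
    element_id, deformation = macro_element
    work = sum(i * deformation for i in range(10_000 * (1 + element_id % 3)))
    return element_id, work

if __name__ == "__main__":
    macro_elements = [(eid, 1.0 + 0.01 * eid) for eid in range(32)]
    # imap_unordered hands out tasks as workers become free (dynamic scheduling),
    # which is the master-slave pattern used within each macroscopic subdomain
    with Pool(processes=4) as pool:
        for element_id, result in pool.imap_unordered(solve_micro_problem, macro_elements):
            pass  # the master would assemble homogenized quantities here
    print("all micro problems dispatched dynamically")
```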
Communication and cooperation in underwater acoustic networks
NASA Astrophysics Data System (ADS)
Yerramalli, Srinivas
In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications related to underwater communications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time scale distortion on each path is assumed to be the same (a single scale channel model, in contrast to a more general multi scale model). A novel technique, which exploits the nature of OFDM signaling and the time scale distortion, called Partial FFT Demodulation, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi scale model for the underwater channel and assume that single scale processing is performed at the receiver. We then derive optimized front end pre-processing techniques to reduce the interference caused during single scale processing of signals transmitted on a multi scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of sensor node cooperation among rational nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game theoretic model for the broadcast channel, derive a game theoretic duality between the multiple access and the broadcast channel, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.
Cosmological signatures of a UV-conformal standard model.
Dorsch, Glauber C; Huber, Stephan J; No, Jose Miguel
2014-09-19
Quantum scale invariance in the UV has been recently advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while being hard to distinguish from the standard model at LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational wave signal in possible reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.
Beyond Darcy's law: The role of phase topology and ganglion dynamics for two-fluid flow
Armstrong, Ryan T.; McClure, James E.; Berrill, Mark A.; ...
2016-10-27
Relative permeability quantifies the ease at which immiscible phases flow through porous rock and is one of the most well known constitutive relationships for petroleum engineers. It however exhibits troubling dependencies on experimental conditions and is not a unique function of phase saturation as commonly accepted in industry practices. The problem lies in the multi-scale nature of the problem where underlying disequilibrium processes create anomalous macroscopic behavior. Here we show that relative permeability rate dependencies are explained by ganglion dynamic flow. We utilize fast X-ray micro-tomography and pore-scale simulations to identify unique flow regimes during the fractional flow of immiscible phases and quantify the contribution of ganglion flux to the overall flux of non-wetting phase. We anticipate our approach to be the starting point for the development of sophisticated multi-scale flow models that directly link pore-scale parameters to macro-scale behavior. Such models will have a major impact on how we recover hydrocarbons from the subsurface, store sequestered CO2 in geological formations, and remove non-aqueous environmental hazards from the vadose zone.
Obtaining lutein-rich extract from microalgal biomass at preparative scale.
Fernández-Sevilla, José M; Fernández, F Gabriel Acién; Grima, Emilio Molina
2012-01-01
Lutein extracts are in increasing demand due to their alleged role in the prevention of degenerative disorders such as age-related macular degeneration (AMD). Lutein extracts are currently obtained from plant sources, but microalgae have been demonstrated to be a competitive source likely to become an alternative. The extraction of lutein from microalgae poses specific problems that arise from the different structure and composition of the source biomass. Here we present a method for the recovery of lutein-rich carotenoid extracts from microalgal biomass at the kilogram scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trędak, Przemysław, E-mail: przemyslaw.tredak@fuw.edu.pl; Rudnicki, Witold R.; Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, ul. Pawińskiego 5a, 02-106 Warsaw
The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems. It is compared to a highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in forces computation time using a single processor of the NVIDIA Tesla K80 compared to a high end 16-core Intel Xeon processor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kucharik, M.; Scovazzi, Guglielmo; Shashkov, Mikhail Jurievich
2017-10-28
Hourglassing is a well-known pathological numerical artifact affecting the robustness and accuracy of Lagrangian methods. There exist a large number of hourglass control/suppression strategies. In the community of the staggered compatible Lagrangian methods, the approach of sub-zonal pressure forces is among the most widely used. However, this approach is known to add numerical strength to the solution, which can cause potential problems in certain types of simulations, for instance in simulations of various instabilities. To avoid this complication, we have adapted the multi-scale residual-based stabilization typically used in the finite element approach for the staggered compatible framework. In this study, we describe two discretizations of the new approach, demonstrate their properties, and compare with the method of sub-zonal pressure forces on selected numerical problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
Multi-level multi-task learning for modeling cross-scale interactions in nested geospatial data
Yuan, Shuai; Zhou, Jiayu; Tan, Pang-Ning; Fergus, Emi; Wagner, Tyler; Sorrano, Patricia
2017-01-01
Predictive modeling of nested geospatial data is a challenging problem as the models must take into account potential interactions among variables defined at different spatial scales. These cross-scale interactions, as they are commonly known, are particularly important to understand relationships among ecological properties at macroscales. In this paper, we present a novel, multi-level multi-task learning framework for modeling nested geospatial data in the lake ecology domain. Specifically, we consider region-specific models to predict lake water quality from multi-scaled factors. Our framework enables distinct models to be developed for each region using both its local and regional information. The framework also allows information to be shared among the region-specific models through their common set of latent factors. Such information sharing helps to create more robust models especially for regions with limited or no training data. In addition, the framework can automatically determine cross-scale interactions between the regional variables and the local variables that are nested within them. Our experimental results show that the proposed framework outperforms all the baseline methods in at least 64% of the regions for 3 out of 4 lake water quality datasets evaluated in this study. Furthermore, the latent factors can be clustered to obtain a new set of regions that is more aligned with the response variables than the original regions that were defined a priori from the ecology domain.
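One way to picture the sharing of latent factors across region-specific models is a low-rank parameterization in which each region's coefficient vector is a combination of shared factor loadings. The sketch below fits such a model by plain gradient descent on synthetic data; it is a simplification and omits the cross-scale interaction terms of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic nested data: R regions, each with its own design matrix and response
R, n, d, k = 6, 40, 8, 2                      # regions, samples/region, features, latent factors
U_true = rng.normal(size=(d, k))
V_true = rng.normal(size=(k, R))
X = [rng.normal(size=(n, d)) for _ in range(R)]
y = [X[r] @ (U_true @ V_true[:, r]) + 0.1 * rng.normal(size=n) for r in range(R)]

# shared factor loadings U and region-specific factors V, fit jointly
U = rng.normal(scale=0.3, size=(d, k))
V = rng.normal(scale=0.3, size=(k, R))
lr, reg = 0.01, 1e-2
for epoch in range(5000):
    gU = reg * U
    gV = reg * V
    for r in range(R):
        w_r = U @ V[:, r]                     # region-specific coefficients
        resid = X[r] @ w_r - y[r]
        g_w = X[r].T @ resid / n              # gradient w.r.t. w_r
        gU += np.outer(g_w, V[:, r])          # chain rule through w_r = U v_r
        gV[:, r] += U.T @ g_w
    U -= lr * gU
    V -= lr * gV

mse = np.mean([np.mean((X[r] @ (U @ V[:, r]) - y[r]) ** 2) for r in range(R)])
print(f"mean squared error across regions: {mse:.4f}")
```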
NASA Astrophysics Data System (ADS)
Bothner, Thomas; Deift, Percy; Its, Alexander; Krasovsky, Igor
2015-08-01
We study the determinant , of the integrable Fredholm operator K s acting on the interval (-1, 1) with kernel . This determinant arises in the analysis of a log-gas of interacting particles in the bulk-scaling limit, at inverse temperature , in the presence of an external potential supported on an interval of length . We evaluate, in particular, the double scaling limit of as and , in the region , for any fixed . This problem was first considered by Dyson (Chen Ning Yang: A Great Physicist of the Twentieth Century. International Press, Cambridge, pp. 131-146, 1995).
Three essays on multi-level optimization models and applications
NASA Astrophysics Data System (ADS)
Rahdar, Mohammad
The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of the decision variables may also impact the objective function of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader's action. The multi-level decision-making model has many real-world applications such as government decisions, energy policies, market economy, network design, etc. However, there is a lack of capable algorithms to solve medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in a paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansions, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model in the framework of a rolling horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation problem in each node and decreasing the number of iterations. Computational experiments show that the proposed algorithm is faster than existing ones.
Family physician practice visits arising from the Alberta Physician Achievement Review
2013-01-01
Background Licensed physicians in Alberta are required to participate in the Physician Achievement Review (PAR) program every 5 years, comprising multi-source feedback questionnaires with confidential feedback, and practice visits for a minority of physicians. We wished to identify and classify issues requiring change or improvement from the family practice visits, and the responses to advice. Methods Retrospective analysis of narrative practice visit reports data using a mixed methods design to study records of visits to 51 family physicians and general practitioners who participated in PAR during the period 2010 to 2011, and whose ratings in one or more major assessment domains were significantly lower than their peer group. Results Reports from visits to the practices of family physicians and general practitioners confirmed opportunities for change and improvement, with two main groupings – practice environment and physician performance. For 40/51 physicians (78%) suggested actions were discussed with physicians and changes were confirmed. Areas of particular concern included problems arising from practice isolation and diagnostic conclusions being reached with incomplete clinical evidence. Conclusion This study provides additional evidence for the construct validity of a regulatory authority educational program in which multi-source performance feedback identifies areas for practice quality improvement, and change is encouraged by supplementary contact for selected physicians. PMID:24010980
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves the operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
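The memory and complexity savings from a sparse similarity matrix can be illustrated with scikit-learn's nearest-neighbour affinity; the sketch below clusters toy two-dimensional points rather than multi-scale image features, which are not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# toy stand-in for per-pixel feature vectors (the paper extracts multi-scale
# image features; two-moons data keeps the sketch small and self-contained)
X, _ = make_moons(n_samples=600, noise=0.06, random_state=0)

# 'nearest_neighbors' builds a sparse k-NN affinity matrix instead of a dense
# fully connected similarity matrix, which is the saving the abstract refers to
model = SpectralClustering(
    n_clusters=2,
    affinity="nearest_neighbors",
    n_neighbors=10,
    assign_labels="kmeans",
    random_state=0,
)
labels = model.fit_predict(X)
print(np.bincount(labels))  # cluster sizes
```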
Recognition by Linear Combination of Models
1989-08-01
to the model (or to the viewed object) prior to, or during the matching stage. Such an approach is used in [Chien & Aggarwal 1987, Faugeras & Hebert... 1986, Fishler & Bolles 1981, Huttenlocher & Ullman 1987, Lowe 1985, Thompson & Mundy 1987, Ullman 1986]. Key problems that arise in any alignment... cludes 3-D rotation, translation and scaling, followed by an orthographic projection. The transformation is determined as in [Huttenlocher & Ullman 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lippek, H.E.; Schuller, C.R.
1979-03-01
A study was conducted to identify major legal and institutional problems and issues in the transportation of spent fuel and associated processing wastes at the back end of the LWR nuclear fuel cycle. (Most of the discussion centers on the transportation of spent fuel, since this activity will involve virtually all of the legal and institutional problems likely to be encountered in moving waste materials, as well.) Actions or approaches that might be pursued to resolve the problems identified in the analysis are suggested. Two scenarios for the industrial-scale transportation of spent fuel and radioactive wastes, taken together, highlight most of the major problems and issues of a legal and institutional nature that are likely to arise: (1) utilizing the Allied General Nuclear Services (AGNS) facility at Barnwell, SC, as a temporary storage facility for spent fuel; and (2) utilizing AGNS for full-scale commercial reprocessing of spent LWR fuel.
Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem
Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...
2015-10-02
The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
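For readers unfamiliar with the structure being exploited, the Bethe–Salpeter eigenvalue problem is commonly written in the block form below; this is the standard presentation found in the literature, not a reproduction of the paper's derivation.

```latex
% Standard block structure of the Bethe--Salpeter eigenvalue problem:
% A is Hermitian, B is complex symmetric.
\begin{equation*}
H \begin{pmatrix} x \\ y \end{pmatrix}
  = \lambda \begin{pmatrix} x \\ y \end{pmatrix},
\qquad
H = \begin{pmatrix} A & B \\ -\bar{B} & -\bar{A} \end{pmatrix},
\quad A = A^{\mathsf{H}},\; B = B^{\mathsf{T}}.
\end{equation*}
% The Tamm--Dancoff approximation drops the coupling block B and solves
% A x = \lambda x instead.
```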
Balanced Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Hastermann, Gottfried; Reinhardt, Maria; Klein, Rupert; Reich, Sebastian
2017-04-01
The atmosphere's multi-scale structure poses several major challenges in numerical weather prediction. One of these arises in the context of data assimilation. The large-scale dynamics of the atmosphere are balanced in the sense that acoustic or rapid internal wave oscillations generally come with negligibly small amplitudes. If triggered artificially, however, through inappropriate initialization or by data assimilation, such oscillations can have a detrimental effect on forecast quality as they interact with the moist aerothermodynamics of the atmosphere. In the setting of sequential Bayesian data assimilation, we therefore investigate two different strategies to reduce these artificial oscillations induced by the analysis step. On the one hand, we develop a new modification for a local ensemble transform Kalman filter, which penalizes imbalances via a minimization problem. On the other hand, we modify the first steps of the subsequent forecast to push the ensemble members back to the slow evolution. We therefore propose the use of certain asymptotically consistent integrators that can blend between the balanced and the unbalanced evolution model seamlessly. In our work, we furthermore present numerical results and performance of the proposed methods for two nonlinear ordinary differential equation models, where we can identify the different scales clearly. The first one is a Lorenz 96 model coupled with a wave equation. In this case the balance relation is linear and the imbalances are caused only by the localization of the filter. The second one is the elastic double pendulum where the balance relation itself is already highly nonlinear. In both cases the methods perform very well and could significantly reduce the imbalances and therefore increase the forecast quality of the slow variables.
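The penalization of imbalances can be written schematically as an extra term in an analysis cost function; the operator C and the weighting below are placeholders used to convey the idea, not the paper's actual LETKF modification.

```latex
% Schematic cost function for a balance-penalized analysis step: the usual
% background and observation terms plus a penalty on the imbalance measured
% by some operator C (for example, a linear balance relation).
\begin{equation*}
J(x) \;=\; \tfrac12\,(x - x^{\mathrm b})^{\mathsf T} B^{-1} (x - x^{\mathrm b})
        \;+\; \tfrac12\,(y - H x)^{\mathsf T} R^{-1} (y - H x)
        \;+\; \tfrac{\mu}{2}\,\lVert C x \rVert^{2}.
\end{equation*}
```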
Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios G.
We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, and in particular for non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature which are capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks, with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.
A hybrid genetic algorithm for solving bi-objective traveling salesman problems
NASA Astrophysics Data System (ADS)
Ma, Mei; Li, Hecheng
2017-08-01
The traveling salesman problem (TSP) is a typical combinatorial optimization problem; in a traditional TSP only the tour distance is taken as the single objective to be minimized. When more than one optimization objective arises, the problem is known as a multi-objective TSP. In the present paper, a bi-objective traveling salesman problem (BOTSP) is considered, where both the distance and the cost are taken as optimization objectives. In order to solve the problem efficiently, a hybrid genetic algorithm is proposed. Firstly, two satisfaction degree indices are provided for each edge by considering the influences of the distance and the cost weight. The first satisfaction degree is used to select edges in a "rough" way, while the second satisfaction degree is applied for a more "refined" choice. Secondly, the two satisfaction degrees are also applied to generate new individuals in the iteration process. Finally, based on the genetic algorithm framework as well as a 2-opt selection strategy, the hybrid genetic algorithm is assembled. Simulations illustrate the efficiency of the proposed algorithm.
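One ingredient of such a hybrid scheme, local 2-opt improvement on a scalarized combination of the two objectives, is sketched below; the satisfaction-degree construction and the genetic operators of the paper are not reproduced, and the weight w is an illustrative assumption.

```python
import itertools
import random

def tour_objectives(tour, dist, cost):
    """Total distance and total cost of a closed tour."""
    d = c = 0.0
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        d += dist[a][b]
        c += cost[a][b]
    return d, c

def two_opt(tour, dist, cost, w=0.5):
    """Improve a tour by 2-opt moves on a weighted-sum objective.

    w blends the two objectives; every (i, j) segment reversal is accepted
    whenever it lowers the scalarized objective.
    """
    def scalar(t):
        d, c = tour_objectives(t, dist, cost)
        return w * d + (1 - w) * c

    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if scalar(candidate) < scalar(tour):
                tour, improved = candidate, True
    return tour

# toy random instance with independent distance and cost matrices
random.seed(0)
n = 8
dist = [[0 if i == j else random.uniform(1, 10) for j in range(n)] for i in range(n)]
cost = [[0 if i == j else random.uniform(1, 10) for j in range(n)] for i in range(n)]
tour = list(range(n))
print(tour_objectives(two_opt(tour, dist, cost, w=0.6), dist, cost))
```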
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Dubra, Alfredo; Tam, Johnny
2016-03-01
Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
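The idea behind temporally separated inputs can be illustrated with a toy equation-error estimation: two surface deflections applied in separate time windows are nearly uncorrelated, so ordinary least squares can separate their individual effectiveness. The signals, model and coefficient values below are made up for illustration and are not the X-48B data or aerodynamic database.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical true control effectiveness of two redundant surfaces
c1_true, c2_true = -0.8, -0.5

t = np.arange(0, 20, 0.02)
# temporally separated doublets: surface 1 moves first, surface 2 afterwards,
# so the two input histories are nearly uncorrelated over the maneuver
d1 = np.where((t > 2) & (t < 4), np.sign(np.sin(2 * np.pi * (t - 2))), 0.0)
d2 = np.where((t > 8) & (t < 10), np.sign(np.sin(2 * np.pi * (t - 8))), 0.0)

# measured response (e.g. pitch acceleration) from an assumed linear model
y = c1_true * d1 + c2_true * d2 + 0.05 * rng.normal(size=t.size)

# equation-error estimation by ordinary least squares
A = np.column_stack([d1, d2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated effectiveness:", coef)
print("input correlation:", np.corrcoef(d1, d2)[0, 1])
```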
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
Predicate calculus for an architecture of multiple neural networks
NASA Astrophysics Data System (ADS)
Consoli, Robert H.
1990-08-01
Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. THE NEURAL NETWORK AND A CLASSICAL DEVICE. Recently, investigators have been making reports about architectures of multiple neural networks [1-4]. These efforts are appearing at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements [1]; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, a multiple neural network arises out of a control problem [2], from the sequence learning problem [3], and from the domain of machine learning [4]. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt is made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.
Distributed intelligent urban environment monitoring system
NASA Astrophysics Data System (ADS)
Du, Jinsong; Wang, Wei; Gao, Jie; Cong, Rigang
2018-02-01
The current environmental pollution and destruction have developed into a world-wide major social problem that threatens human survival and development. Environmental monitoring is the prerequisite and basis of environmental governance, but overall, the current environmental monitoring system is facing a series of problems. Based on electrochemical sensors, this paper designs a small, low-cost, easy to deploy urban environmental quality monitoring terminal, with multiple terminals constituting a distributed network. The system has been used in small-scale demonstration applications, which have confirmed that the system is suitable for large-scale promotion.
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and the global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes guided by the global structure of the entropy spectrum coupled with small scale local diffusion. The intervoxel diffusion is sampled by multi b-shell multi q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in the most scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
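The eigenvector computation referred to above can be sketched generically: a power iteration for the dominant eigenpair of a non-negative coupling matrix, followed by the transition probabilities built from it. The coupling matrix below is made up, and this follows the general maximum-entropy random-walk construction rather than the authors' implementation.

```python
import numpy as np

def dominant_eigpair(Q, iters=500, tol=1e-12):
    """Power iteration for the dominant eigenpair of a non-negative matrix."""
    psi = np.ones(Q.shape[0]) / np.sqrt(Q.shape[0])
    lam = 0.0
    for _ in range(iters):
        v = Q @ psi
        lam_new = np.linalg.norm(v)
        v /= lam_new
        if abs(lam_new - lam) < tol:
            psi, lam = v, lam_new
            break
        psi, lam = v, lam_new
    return lam, psi

# made-up symmetric non-negative coupling between 5 locations
rng = np.random.default_rng(4)
Q = rng.uniform(0, 1, size=(5, 5))
Q = (Q + Q.T) / 2

lam, psi = dominant_eigpair(Q)
# maximum-entropy transition probabilities built from the dominant eigenvector:
# P_ij = Q_ij * psi_j / (lam * psi_i), which makes each row sum to one
P = Q * psi[None, :] / (lam * psi[:, None])
print("rows sum to one:", np.allclose(P.sum(axis=1), 1.0))
```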
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in the topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves dividing the large watershed into smaller watersheds and applying the calibrated parameters of the multi-site calibration to the entire watershed. It was anticipated that this case study could provide experience of multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
Stability results for multi-layer radial Hele-Shaw and porous media flows
NASA Astrophysics Data System (ADS)
Gin, Craig; Daripa, Prabir
2015-01-01
Motivated by stability problems arising in the context of chemical enhanced oil recovery, we perform linear stability analysis of Hele-Shaw and porous media flows in radial geometry involving an arbitrary number of immiscible fluids. Key stability results obtained and their relevance to the stabilization of fingering instability are discussed. Some of the key results, among many others, are (i) absolute upper bounds on the growth rate in terms of the problem data; (ii) validation of these upper bound results against exact computation for the case of three-layer flows; (iii) stability enhancing injection policies; (iv) asymptotic limits that reduce these radial flow results to similar results for rectilinear flows; and (v) the stabilizing effect of curvature of the interfaces. Multi-layer radial flows have been found to have the following additional distinguishing features in comparison to rectilinear flows: (i) very long waves, some of which can be physically meaningful, are stable; and (ii) eigenvalues can be complex for some waves depending on the problem data, implying that the dispersion curves for one or more waves can contact each other. Similar to the rectilinear case, these results can be useful in providing insight into the interfacial instability transfer mechanism as the problem data are varied. Moreover, these can be useful in devising smart injection policies as well as controlling the complexity of the long-term dynamics when drops of various immiscible fluids intersperse among each other. As an application of the upper bound results, we provide stabilization criteria and design an almost stable multi-layer system by adding many layers of fluid with small positive jumps in viscosity in the direction of the basic flow.
Interannual Atmospheric Variability Simulated by a Mars GCM: Impacts on the Polar Regions
NASA Technical Reports Server (NTRS)
Bridger, Alison F. C.; Haberle, R. M.; Hollingsworth, J. L.
2003-01-01
It is often assumed that in the absence of year-to-year dust variations, Mars weather and climate are very repeatable, at least on decadal scales. Recent multi-annual simulations of a Mars GCM reveal however that significant interannual variations may occur with constant dust conditions. In particular, interannual variability (IAV) appears to be associated with the spectrum of atmospheric disturbances that arise due to baroclinic instability. One quantity that shows significant IAV is the poleward heat flux associated with these waves. These variations and their impacts on the polar heat balance will be examined here.
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large areas abroad or with large volumes of images. In this paper, aiming at the geometric features of optical satellite imagery, and based on a widely used optimization method for constrained problems, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem, and qualitative and quantitative analyses of block adjustment without control are carried out. The test results show that the horizontal and vertical accuracies for multi-coverage and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
NASA Astrophysics Data System (ADS)
DiPietro, Kelsey L.; Lindsay, Alan E.
2017-11-01
We present an efficient moving mesh method for the simulation of fourth order nonlinear partial differential equations (PDEs) in two dimensions using the Parabolic Monge-Ampère (PMA) equation. PMA methods have been successfully applied to the simulation of second order problems, but not on systems with higher order equations which arise in many topical applications. Our main application is the resolution of fine scale behavior in PDEs describing elastic-electrostatic interactions. The PDE system considered has multiple parameter dependent singular solution modalities, including finite time singularities and sharp interface dynamics. We describe how to construct a dynamic mesh algorithm for such problems which incorporates known self similar or boundary layer scalings of the underlying equation to locate and dynamically resolve fine scale solution features in these singular regimes. We find a key step in using the PMA equation for mesh generation in fourth order problems is the adoption of a high order representation of the transformation from the computational to physical mesh. We demonstrate the efficacy of the new method on a variety of examples and establish several new results and conjectures on the nature of self-similar singularity formation in higher order PDEs.
Multi-Scale Distributed Representation for Deep Learning and its Application to b-Jet Tagging
NASA Astrophysics Data System (ADS)
Lee, Jason Sang Hun; Park, Inkyu; Park, Sangnam
2018-06-01
Recently, machine learning algorithms based on deep layered artificial neural networks (DNNs) have been applied to a wide variety of high energy physics problems such as jet tagging or event classification. We explore a simple but effective preprocessing step which transforms each real-valued observational quantity or input feature into a binary number with a fixed number of digits. Each binary digit represents the quantity or magnitude at a different scale. We have shown that this approach improves the performance of DNNs significantly for some specific tasks without any further complication in feature engineering. We apply this multi-scale distributed binary representation to deep learning on b-jet tagging using daughter particles' momenta and vertex information.
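A minimal sketch of the kind of preprocessing described above, assuming (as the abstract does not specify) that each feature is min-max scaled to an integer range before its binary digits are taken; the names and example values are illustrative, not the authors' code.

```python
import numpy as np

def to_binary_digits(x, n_bits=8, lo=None, hi=None):
    """Map each real-valued feature to a fixed-length binary expansion.

    Each feature is min-max scaled to [0, 2**n_bits - 1], rounded to an
    integer, and unpacked into n_bits binary digits, so every digit carries
    the feature's magnitude at a different scale."""
    x = np.asarray(x, dtype=float)
    lo = x.min(axis=0) if lo is None else lo
    hi = x.max(axis=0) if hi is None else hi
    scaled = np.clip((x - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    ints = np.round(scaled * (2**n_bits - 1)).astype(np.uint32)
    # shape (n_samples, n_features, n_bits), most significant bit first
    digits = (ints[..., None] >> np.arange(n_bits - 1, -1, -1)) & 1
    return digits.reshape(x.shape[0], -1).astype(np.float32)

# Toy features: e.g. a daughter-particle momentum fraction and a vertex distance.
features = np.array([[0.3, 125.0], [0.7, 40.0], [0.1, 260.0]])
print(to_binary_digits(features, n_bits=4))
```

The expanded binary array would then be fed to the DNN in place of the raw real-valued features.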
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
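To make the coupling step concrete, here is a schematic (with hypothetical grids, units and names, not the authors' code) of how parent-scale heads might be interpolated in space and time onto the child-model boundary nodes, which is the delivery step the abstract describes.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator, interp1d

# Parent model: coarse-grid heads at two successive parent time levels.
xp = np.linspace(0.0, 1000.0, 11)          # parent x-coordinates [m]
yp = np.linspace(0.0, 1000.0, 11)          # parent y-coordinates [m]
t_parent = np.array([0.0, 86400.0])        # parent time levels [s]
heads = np.random.rand(2, 11, 11) + 50.0   # stand-in for the parent head solution

# Child model boundary nodes (finer spacing along part of the sub-model perimeter).
xb = np.linspace(200.0, 400.0, 21)
yb = np.full_like(xb, 300.0)
t_child = 43200.0                          # a child time step inside the parent step

# Spatial interpolation of parent heads onto child boundary nodes,
# done separately at each parent time level.
h_at_levels = []
for k in range(len(t_parent)):
    interp = RegularGridInterpolator((xp, yp), heads[k])
    h_at_levels.append(interp(np.column_stack([xb, yb])))

# Temporal interpolation between the two parent levels to the child time.
h_boundary = interp1d(t_parent, np.vstack(h_at_levels), axis=0)(t_child)
print(h_boundary.shape)   # one prescribed head per child boundary node
```

The adaptive local time-stepping and the error analysis discussed in the abstract are, of course, outside the scope of this sketch.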
Processor farming in two-level analysis of historical bridge
NASA Astrophysics Data System (ADS)
Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.
2017-11-01
This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors, as sketched below. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.
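A schematic of the processor-farming idea in Python (the actual analysis presumably runs with MPI on a cluster and solves real RVE boundary value problems; the RVE "solve" here is a dummy closure and all names are hypothetical):

```python
from multiprocessing import Pool
import numpy as np

def solve_rve(macro_state):
    """Stand-in for a meso-scale RVE solve: takes the macroscopic state at one
    integration point (e.g. temperature and moisture) and returns an effective
    parameter needed by the macro-scale problem."""
    temperature, moisture = macro_state
    conductivity = 1.5 + 0.01 * temperature - 0.3 * moisture   # dummy closure
    return conductivity

if __name__ == "__main__":
    # One macroscopic state per integration point of the macro-scale mesh.
    macro_states = [(20.0 + 0.5 * i, 0.10 + 0.001 * i) for i in range(1000)]
    with Pool(processes=8) as pool:          # the "farm" of worker processes
        effective = pool.map(solve_rve, macro_states)
    print(np.mean(effective))
```

Each worker handles RVE problems independently, which is why the approach parallelizes so naturally.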
Graph Design via Convex Optimization: Online and Distributed Perspectives
NASA Astrophysics Data System (ADS)
Meng, De
Networks and graphs have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits, etc. As a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems, but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism in computation and operation as well as limiting the size of local problems. More interestingly, graphs can be defined and constructed in order to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, which establish a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. We introduce the Geodesic Distance Maximization Problem (GDMP): the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) algorithm by exploiting specific problem structures to solve large-scale GDMP, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination. Distributed optimization aims to optimize a global objective function formed by summation of coupled local functions over a graph via only local communication and computation. We developed a weighted proximal ADMM for distributed optimization using graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game, player one first chooses a candidate edge and reveals it to player two, then player two decides whether to accept it; player two can only accept a limited number of edges and makes online decisions with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game, and analyze the algorithm performance by the competitive ratio and regret.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
NASA Astrophysics Data System (ADS)
Ghezavati, V. R.; Beigi, M.
2016-12-01
During the last decade, the stringent pressures from environmental and social requirements have spurred an interest in designing a reverse logistics (RL) network. The success of a logistics system may depend on decisions on facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet and the design of a multi-echelon, capacitated reverse logistics network are considered, which may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) model for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. Also, the present work is an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions in a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint method.
Styopin, Nikita E; Vershinin, Anatoly V; Zingerman, Konstantin M; Levin, Vladimir A
2016-09-01
Different variants of the Uzawa algorithm are compared with one another. The comparison is performed for the case in which this algorithm is applied to large-scale systems of linear algebraic equations. These systems arise in the finite-element solution of the problems of elasticity theory for incompressible materials. A modification of the Uzawa algorithm is proposed. Computational experiments show that this modification improves the convergence of the Uzawa algorithm for the problems of solid mechanics. The results of computational experiments show that each variant of the Uzawa algorithm considered has its advantages and disadvantages and may be convenient in one case or another.
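For readers unfamiliar with the method being compared, here is a minimal sketch of the classical Uzawa iteration for the saddle-point systems that arise in incompressible elasticity; it is illustrative only and does not reproduce the paper's variants or its proposed modification. The tiny example system is hypothetical.

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=500):
    """Solve the saddle-point system
        [A  B^T] [u]   [f]
        [B  0  ] [p] = [g]
    with the basic Uzawa iteration: repeatedly solve A u = f - B^T p,
    then update the Lagrange multiplier p <- p + tau * (B u - g)."""
    p = np.zeros(B.shape[0])
    u = np.zeros(A.shape[0])
    for _ in range(max_iter):
        u = np.linalg.solve(A, f - B.T @ p)   # inner solve with the SPD block
        r = B @ u - g                          # constraint residual
        p = p + tau * r                        # multiplier update
        if np.linalg.norm(r) < tol:
            break
    return u, p

# Tiny illustrative system (A symmetric positive definite).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0]])
f = np.array([1.0, 2.0])
g = np.array([0.5])
u, p = uzawa(A, B, f, g, tau=1.5)
print(u, p)
```

In finite-element practice the inner solve would use a sparse or iterative solver, and the step size tau (or a preconditioner for the Schur complement) is exactly where the variants compared in the paper differ.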
An adaptive framework to differentiate receiving water quality impacts on a multi-scale level.
Blumensaat, F; Tränckner, J; Helm, B; Kroll, S; Dirckx, G; Krebs, P
2013-01-01
The paradigm shift in recent years towards sustainable and coherent water resources management on a river basin scale has changed the subject of investigations to a multi-scale problem representing a great challenge for all actors participating in the management process. In this regard, planning engineers often face an inherent conflict to provide reliable decision support for complex questions with a minimum of effort. This trend inevitably increases the risk to base decisions upon uncertain and unverified conclusions. This paper proposes an adaptive framework for integral planning that combines several concepts (flow balancing, water quality monitoring, process modelling, multi-objective assessment) to systematically evaluate management strategies for water quality improvement. As key element, an S/P matrix is introduced to structure the differentiation of relevant 'pressures' in affected regions, i.e. 'spatial units', which helps in handling complexity. The framework is applied to a small, but typical, catchment in Flanders, Belgium. The application to the real-life case shows: (1) the proposed approach is adaptive, covers problems of different spatial and temporal scale, efficiently reduces complexity and finally leads to a transparent solution; and (2) water quality and emission-based performance evaluation must be done jointly as an emission-based performance improvement does not necessarily lead to an improved water quality status, and an assessment solely focusing on water quality criteria may mask non-compliance with emission-based standards. Recommendations derived from the theoretical analysis have been put into practice.
The development of a multi-dimensional gambling accessibility scale.
Hing, Nerilee; Haw, John
2009-12-01
The aim of the current study was to develop a scale of gambling accessibility that would have theoretical significance to exposure theory and also serve to highlight the accessibility risk factors for problem gambling. Scale items were generated from the Productivity Commission's (Australia's Gambling Industries: Report No. 10. AusInfo, Canberra, 1999) recommendations and tested on a group with high exposure to the gambling environment. In total, 533 gaming venue employees (aged 18-70 years; 67% women) completed a questionnaire that included six 13-item scales measuring accessibility across a range of gambling forms (gaming machines, keno, casino table games, lotteries, horse and dog racing, sports betting). Also included in the questionnaire was the Problem Gambling Severity Index (PGSI) along with measures of gambling frequency and expenditure. Principal components analysis indicated that a common three factor structure existed across all forms of gambling and these were labelled social accessibility, physical accessibility and cognitive accessibility. However, convergent validity was not demonstrated with inconsistent correlations between each subscale and measures of gambling behaviour. These results are discussed in light of exposure theory and the further development of a multi-dimensional measure of gambling accessibility.
Beyond ideal magnetohydrodynamics: from fibration to 3 + 1 foliation
NASA Astrophysics Data System (ADS)
Andersson, N.; Hawke, I.; Dionysopoulou, K.; Comer, G. L.
2017-06-01
We consider a resistive multi-fluid framework from the 3 + 1 space-time foliation point-of-view, paying particular attention to issues relating to the use of multi-parameter equations of state and the associated inversion from evolved to primitive variables. We highlight relevant numerical issues that arise for general systems with relative flows. As an application of the new formulation, we consider a three-component system relevant for hot neutron stars. In this case we let the baryons (neutrons and protons) move together, but allow heat and electrons to exhibit relative flow. This reduces the problem to three momentum equations; overall energy-momentum conservation, a generalised Ohm’s law and a heat equation. Our results provide a hierarchy of increasingly complex models and prepare the ground for new state-of-the-art simulations of relevant scenarios in relativistic astrophysics.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate the overlapping pixels, and finally a boundary resampling algorithm is applied to blend the images. Simulation results demonstrate the efficiency of our method.
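A hedged illustration of the pipeline described (feature matching, homography estimation, warping): the sketch below uses OpenCV with ORB in place of SURF, since SURF is not available in all OpenCV builds, and a naive overlay instead of the paper's boundary resampling; function and file names are assumptions, not the authors' code.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, min_matches=10):
    """Estimate a homography mapping img2 into img1's frame and overlay them."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough matches to estimate the homography")
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img1.shape[:2]
    warped = cv2.warpPerspective(img2, H, (w * 2, h))        # generous canvas
    warped[0:h, 0:w] = np.maximum(warped[0:h, 0:w], img1)    # naive blend
    return warped

# Hypothetical usage with two overlapping camera frames:
# panorama = stitch_pair(cv2.imread("cam1.png"), cv2.imread("cam2.png"))
```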
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Zhaojun; Yang, Chao
What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high-speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large-scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.
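As a small, concrete example of the "polynomial eigenvalue problem" mentioned above (not taken from any of the talks), a quadratic eigenvalue problem (lambda^2 M + lambda C + K) x = 0, such as arises in damped vibration analysis, can be reduced to an ordinary generalized eigenvalue problem by companion linearization. The toy matrices are hypothetical.

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(M, C, K):
    """Solve (lam**2 * M + lam * C + K) x = 0 via companion linearization:
        [ 0   I ] [x      ]         [ I   0 ] [x      ]
        [-K  -C ] [lam * x] =  lam  [ 0   M ] [lam * x]
    """
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    lam, V = eig(A, B)
    return lam, V[:n, :]          # eigenvalues and the x-part of the eigenvectors

# Toy damped mass-spring system.
M = np.eye(2)
C = np.array([[0.4, -0.1], [-0.1, 0.3]])
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
lam, X = quadratic_eig(M, C, K)
print(np.sort_complex(lam))
```

The second block row of the linearization reads -Kx - lam*Cx = lam^2*Mx, which is exactly the original quadratic problem; structure-preserving linearizations of the kind discussed at the minisymposium refine this basic idea.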
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and their associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Kirtiman; Homi Bhabha National Institute, Mumbai; Jana, Sudip
We consider the collider phenomenology of a simple extension of the Standard Model (SM), which consists of an EW isospin $3/2$ scalar, $\Delta$, and a pair of EW isospin $1$ vector-like fermions, $\Sigma$ and $\bar{\Sigma}$, responsible for generating tiny neutrino masses via the effective dimension-seven operator. This scalar quadruplet with hypercharge Y = 3 has a plethora of implications at collider experiments. Its signatures at TeV-scale colliders are expected to be seen if the quadruplet masses are not too far above the electroweak symmetry breaking scale. In this article, we study the phenomenology of multi-charged quadruplet scalars. In particular, we study the multi-lepton signatures at the Large Hadron Collider (LHC) experiment arising from the production and decays of triply and doubly charged scalars. We studied Drell-Yan (DY) pair production as well as pair production of the charged scalars via photon-photon fusion. For doubly and triply charged scalars, photon fusion contributes significantly for large scalar masses. We also studied LHC constraints on the masses of doubly charged scalars in this model. We derive a lower mass limit of 725 GeV on the doubly charged quadruplet scalar.
Bachler, Egon; Fruehmann, Alexander; Bachler, Herbert; Aas, Benjamin; Nickel, Marius; Schiepek, Guenter K.
2017-01-01
Objective: The present study validates the Multi-Problem Family (MPF)-Collaboration Scale, which measures the progress of goal-directed collaboration of patients in the treatment of families with MPF and its relation to drop-out rates and treatment outcome. Method: Naturalistic study of symptom and competence-related changes in children of ages 4–18 and their caregivers. Setting: Integrative, structural outreach family therapy. Measures: The data of five different groups of goal-directed collaboration (deteriorating collaboration, stable low collaboration, stable medium collaboration, stable high collaboration, improving collaboration) were analyzed in their relation to treatment expectation, individual therapeutic goals (ITG), family adversity index, severity of problems and global assessment of a caregiver's functioning, child, and relational aspects. Results: From N = 810 families, 20% displayed stable high collaboration (n = 162) and 21% had a pattern of improving collaboration. The families with stable high or improving collaboration rates achieved significantly more progress throughout therapy in terms of treatment outcome expectancy (d = 0.96; r = 0.43), reaching ITG (d = 1.17; r = 0.50), family adversities (d = 0.55; r = 0.26), and severity of psychiatric symptoms (d = 0.31; r = 0.15). Furthermore, families with stable high or improving collaboration remained in treatment longer and had a greater chance of finishing the therapy as planned. The odds of having a stable low or deteriorating collaboration throughout treatment were significantly higher for subjects who started treatment with low treatment expectation or high family-related adversities. Conclusion: The positive outcomes of home-based interventions for multi-problem families are closely related to "stable high" and "improving" collaboration as measured with the MPF-Collaboration Scale. Patients who fall into these groups have a high treatment outcome expectancy and reduce psychological stress. For therapeutic interventions with multi-problem families it seems beneficial to maintain a stable high collaboration or to improve collaboration, e.g., by fostering treatment expectation. PMID:28785232
A Scalable and Robust Multi-Agent Approach to Distributed Optimization
NASA Technical Reports Server (NTRS)
Tumer, Kagan
2005-01-01
Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper, we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
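A toy version of the benchmark problem as stated in the abstract, assuming (as in Challet and Johnson's formulation) that distortions are signed and the goal is a subset whose average distortion is as close to zero as possible. This brute-force baseline is only a stand-in for the problem setup; it is not the paper's multi-agent method.

```python
import itertools
import numpy as np

def best_subset_exhaustive(d):
    """Exhaustively find the subset whose mean distortion is closest to zero."""
    best, best_val = None, np.inf
    for r in range(1, len(d) + 1):
        for idx in itertools.combinations(range(len(d)), r):
            val = abs(np.mean(d[list(idx)]))
            if val < best_val:
                best, best_val = idx, val
    return best, best_val

rng = np.random.default_rng(0)
d = rng.normal(size=12)            # signed distortions of 12 imperfect devices
subset, val = best_subset_exhaustive(d)
print(subset, val)
```

Exhaustive search is only feasible for small n (here 2^12 subsets), which is exactly why distributed heuristic approaches such as the one in the paper are of interest for systems with hundreds or thousands of devices.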
Triangles with Integer Dimensions
ERIC Educational Resources Information Center
Gilbertson, Nicholas J.; Rogers, Kimberly Cervello
2016-01-01
Interesting and engaging mathematics problems can come from anywhere. Sometimes great problems arise from interesting contexts. At other times, interesting problems arise from asking "what if" questions while appreciating the structure and beauty of mathematics. The intriguing problem described in this article resulted from the second…
NASA Astrophysics Data System (ADS)
Marović, Ivan; Hanak, Tomaš
2017-10-01
In the management of construction projects, special attention should be given to planning as the most important phase of the decision-making process. Quality decision-making based on adequate and comprehensive collaboration of all involved stakeholders is crucial in a project's early stages. Fundamental reasons for the existence of this problem arise from: the specific conditions of the construction industry (final products are inseparable from the location, i.e. location has a strong influence on building design and its structural characteristics as well as on the technology used during construction), investors' desires and attitudes, and the influence of socioeconomic and environmental aspects. Considering all these reasons, one can conclude that the selection of an adequate construction site location for a future investment is a complex, low-structured, multi-criteria problem. To take all these dimensions into account, a model for the selection of an adequate site location is devised. The model is based on the AHP (for designing the decision-making hierarchy) and PROMETHEE (for pairwise comparison of investment locations) methods. As a result of combining the basic features of both methods, operational synergies can be achieved in multi-criteria decision analysis. This gives the decision-maker a sense of assurance, knowing that if the procedure proposed by the presented model has been followed, it will lead to a rational decision, carefully and systematically thought out.
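To illustrate the AHP part of such a model (the PROMETHEE ranking step is omitted), here is a minimal sketch computing criteria weights as the principal eigenvector of a pairwise-comparison matrix, together with the usual consistency ratio; the comparison values and criterion names are hypothetical and not taken from the paper.

```python
import numpy as np

# Saaty's random index values for small matrices.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(P):
    """Return priority weights and consistency ratio for a pairwise matrix P."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                        # normalized priority vector
    n = P.shape[0]
    ci = (vals.real[k] - n) / (n - 1)      # consistency index
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0
    return w, cr

# Hypothetical comparison of three site-selection criteria
# (e.g. cost vs. accessibility vs. environmental impact).
P = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
w, cr = ahp_weights(P)
print(w, cr)   # weights sum to 1; CR < 0.1 is the usual consistency threshold
```

The resulting weights would then feed the PROMETHEE pairwise comparison of candidate investment locations.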
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNab, W; Ezzedine, S; Detwiler, R
2007-02-26
Industrial organic solvents such as trichloroethylene (TCE) and tetrachloroethylene (PCE) constitute a principal class of groundwater contaminants. Cleanup of groundwater plume source areas associated with these compounds is problematic, in part, because the compounds often exist in the subsurface as dense nonaqueous phase liquids (DNAPLs). Ganglia (or 'blobs') of DNAPL serve as persistent sources of contaminants that are difficult to locate and remediate (e.g. Fenwick and Blunt, 1998). Current understanding of the physical and chemical processes associated with dissolution of DNAPLs in the subsurface is incomplete and yet is critical for evaluating long-term behavior of contaminant migration, groundwater cleanup, and the efficacy of source area cleanup technologies. As such, a goal of this project has been to contribute to this critical understanding by investigating the multi-phase, multi-component physics of DNAPL dissolution using state-of-the-art experimental and computational techniques. Through this research, we have explored efficient and accurate conceptual and numerical models for source area contaminant transport that can be used to better inform the modeling of source area contaminants, including those at the LLNL Superfund sites, to re-evaluate existing remediation technologies, and to inspire or develop new remediation strategies. The problem of DNAPL dissolution in natural porous media must be viewed in the context of several scales (Khachikian and Harmon, 2000), including the microscopic level at which capillary forces, viscous forces, and gravity/buoyancy forces are manifested at the scale of individual pores (Wilson and Conrad, 1984; Chatzis et al., 1988), the mesoscale where dissolution rates are strongly influenced by the local hydrodynamics, and the field-scale. Historically, the physico-chemical processes associated with DNAPL dissolution have been addressed through the use of lumped mass transfer coefficients which attempt to quantify the dissolution rate in response to local dissolved-phase concentrations distributed across the source area using a volume-averaging approach (Figure 1). The fundamental problem with the lumped mass transfer parameter is that its value is typically derived empirically through column-scale experiments that combine the effects of pore-scale flow, diffusion, and pore-scale geometry in a manner that does not provide a robust theoretical basis for upscaling. In our view, upscaling processes from the pore-scale to the field-scale requires new computational approaches (Held and Celia, 2001) that are directly linked to experimental studies of dissolution at the pore scale. As such, our investigation has been multi-pronged, combining theory, experiments, numerical modeling, new data analysis approaches, and a synthesis of previous studies (e.g. Glass et al., 2001; Keller et al., 2002) aimed at quantifying how the mechanisms controlling dissolution at the pore-scale control the long-term dissolution of source areas at larger scales.
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.
2018-06-01
The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity meaning that a method produces the same sequences of points where the objective function is evaluated independently both of multiplication of the function by a scaling constant and of adding a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small and large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants is introduced. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm allowing one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid ill-conditioning produced by scaling. Numerical experiments illustrating theoretical results are described.
Mechanical vibration compensation method for 3D+t multi-particle tracking in microscopic volumes.
Pimentel, A; Corkidi, G
2009-01-01
The acquisition and analysis of data in microscopic systems with spatiotemporal evolution is a very relevant topic. In this work, we describe a method to optimize an experimental setup for acquiring and processing spatiotemporal (3D+t) data in microscopic systems. The method is applied to a previously developed three-dimensional multi-tracking and analysis system for free-swimming sperm trajectories. The experimental setup uses a piezoelectric device to oscillate a large focal-distance objective mounted on an inverted microscope (along its optical axis) to acquire stacks of images at a high frame rate over a depth on the order of 250 microns. A problem arises when the piezoelectric device oscillates: a vibration is transmitted to the whole microscope, inducing undesirable 3D vibrations in the whole setup. For this reason, as a first step, the biological preparation was isolated from the body of the microscope to avoid modifying the free-swimming pattern of the microorganisms due to the transmission of these vibrations. Nevertheless, as the image-capturing device is mechanically attached to the "vibrating" microscope, the acquired data are contaminated with an undesirable 3D movement that biases the original trajectories of these fast-moving cells. The proposed optimization method determines the functional form of these 3D oscillations and neutralizes them in the acquired data set. Given the spatial scale of the system, the added correction significantly increases the data accuracy. The optimized system may be very useful in a wide variety of 3D+t applications using moving optical devices.
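A hedged sketch of the correction idea described above (fit the oscillation's functional form from the data, then subtract it from the tracked positions). The sinusoidal model, the single-axis treatment and all names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def vibration_model(t, amp, freq, phase, offset):
    """Assumed sinusoidal form of the piezo-induced stage vibration."""
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

def remove_vibration(t, z_track, f0_guess):
    """Fit the oscillation on one tracked coordinate and subtract it."""
    p0 = [np.std(z_track), f0_guess, 0.0, np.mean(z_track)]
    popt, _ = curve_fit(vibration_model, t, z_track, p0=p0)
    # Subtract the fitted oscillation but keep the fitted mean level.
    return z_track - vibration_model(t, *popt) + popt[3], popt

# Synthetic example: a slow drift contaminated by a 30 Hz vibration plus noise.
t = np.linspace(0.0, 1.0, 2000)
true_motion = 0.5 * t
z = true_motion + 2.0 * np.sin(2 * np.pi * 30.0 * t + 0.3) + 0.05 * np.random.randn(t.size)
z_clean, params = remove_vibration(t, z, f0_guess=30.0)
```

In practice the same fit-and-subtract step would be applied to each of the three spatial axes, with the vibration frequency seeded from the known piezo driving frequency.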
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context-based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
The application of CFD to the modelling of fires in complex geometries
NASA Astrophysics Data System (ADS)
Burns, A. D.; Clarke, D. S.; Guilbert, P.; Jones, I. P.; Simcox, S.; Wilkes, N. S.
The application of Computational Fluid Dynamics (CFD) to industrial safety is a challenging activity. In particular it involves the interaction of several different physical processes, including turbulence, combustion, radiation, buoyancy, compressible flow and shock waves in complex three-dimensional geometries. In addition, there may be multi-phase effects arising, for example, from sprinkler systems for extinguishing fires. The FLOW3D software (1-3) from Computational Fluid Dynamics Services (CFDS) is in widespread use in industrial safety problems, both within AEA Technology, and also by CFDS's commercial customers, for example references (4-13). This paper discusses some other applications of FLOW3D to safety problems. These applications illustrate the coupling of the gas flows with radiation models and combustion models, particularly for complex geometries where simpler radiation models are not applicable.
Quantum Barro-Gordon game in monetary economics
NASA Astrophysics Data System (ADS)
Samadi, Ali Hussein; Montakhab, Afshin; Marzban, Hussein; Owjimehr, Sakine
2018-01-01
Classical game theory addresses decision problems in a multi-agent environment where one rational agent's decision affects other agents' payoffs. Game theory has widespread application in the economic, social and biological sciences. In recent years quantum versions of classical games have been proposed and studied. In this paper, we consider a quantum version of the classical Barro-Gordon game which captures the problem of time inconsistency in monetary economics. Such time inconsistency refers to the temptation of a weak policy maker to implement high inflation when the public expects low inflation. The inconsistency arises when the public punishes the weak policy maker in the next cycle. We first present a quantum version of the Barro-Gordon game. Next, we show that in a particular case of the quantum game, a time-consistent Nash equilibrium could be achieved when the public expects low inflation, thus resolving the game.
Ewing, Graham E.
2009-01-01
There is a compelling argument that the occurrence of regressive autism is attributable to genetic and chromosomal abnormalities, arising from the overuse of vaccines, which subsequently affects the stability and function of the autonomic nervous system and physiological systems. That sense perception is linked to the autonomic nervous system and the function of the physiological systems enables us to examine the significance of autistic symptoms from a systemic perspective. Failure of the excretory system influences elimination of heavy metals and facilitates their accumulation and subsequent manifestation as neurotoxins: the long-term consequences of which would lead to neurodegeneration, cognitive and developmental problems. It may also influence regulation of neural hyperthermia. This article explores the issues and concludes that sensory dysfunction and systemic failure, manifested as autism, is the inevitable consequence arising from subtle DNA alteration and consequently from the overuse of vaccines. PMID:22666668
Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling
2016-07-05
developed by NASA Glenn Research Center based on Aboudi's micromechanics theories [5] that provides a wide range of capabilities for modeling ... Moreover, the analyses will give a general ... interface of heterogeneous materials but also help engineers to use appropriate models for related problems based on the capability of corresponding approaches.
Turrini, Enrico; Carnevale, Claudio; Finzi, Giovanna; Volta, Marialuisa
2018-04-15
This paper introduces the MAQ (Multi-dimensional Air Quality) model aimed at defining cost-effective air quality plans at different scales (urban to national) and assessing the co-benefits for GHG emissions. The model implements and solves a non-linear multi-objective, multi-pollutant decision problem where the decision variables are the application levels of emission abatement measures allowing the reduction of energy consumption, end-of-pipe technologies and fuel switch options. The objectives of the decision problem are the minimization of tropospheric secondary pollution exposure and of internal costs. The model assesses CO2-equivalent emissions in order to support decision makers in the selection of win-win policies. The methodology is tested on the Lombardy region, a heavily polluted area in northern Italy. Copyright © 2017 Elsevier B.V. All rights reserved.
Putting the Second Law to Work
NASA Astrophysics Data System (ADS)
Widmer, Thomas F.
2008-08-01
Thermo Electron Corporation was founded in 1956 by Dr. George Hatsopoulos with the goal of applying thermodynamics to the solution of energy problems throughout society. As the company grew from a small research laboratory to a multi-billion dollar Fortune 500 enterprise, the Second Law of thermodynamics played a pivotal role in creating a diversified portfolio of products and services. George and his staff also employed thermodynamics, particularly availability analyses of energy processes, to help guide changes in National policy arising from the 1973 oil embargo. As directors of the company, Professors Joseph Keenan and Elias Gyftopoulos made key contributions to the strategy of applying the Second Law to real-world engineering challenges.
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
NASA Technical Reports Server (NTRS)
Glaese, John R.; McDonald, Emmett J.
2000-01-01
Orbiting space solar power systems are currently being investigated for possible flight in the time frame of 2015-2020 and later. Such space solar power (SSP) satellites are required to be extremely large in order to make practical the process of collection, conversion to microwave radiation, and reconversion to electrical power at earth stations or at remote locations in space. These large structures are expected to be very flexible presenting unique problems associated with their dynamics and control. The purpose of this project is to apply the expanded TREETOPS multi-body dynamics analysis computer simulation program (with expanded capabilities developed in the previous activity) to investigate the control problems associated with the integrated symmetrical concentrator (ISC) conceptual SSP system. SSP satellites are, as noted, large orbital systems having many bodies (perhaps hundreds) with flexible arrays operating in an orbiting environment where the non-uniform gravitational forces may be the major load producers on the structure so that a high fidelity gravity model is required. The current activity arises from our NRA8-23 SERT proposal. Funding, as a supplemental selection, has been provided by NASA with reduced scope from that originally proposed.
Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU
NASA Astrophysics Data System (ADS)
Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.
2016-09-01
The second-generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve difficult synchronization issues that arise in computations of a multi-body potential. Techniques developed for this problem may also be used to achieve efficient solutions of different problems. The performance of the proposed algorithm is assessed using a range of model systems, and is compared to the highly optimized CPU implementation (both single-core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.
NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data.
Pnevmatikakis, Eftychios A; Giovannucci, Andrea
2017-11-01
Motion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. The motion artifacts in two-photon microscopy recordings can be non-rigid, arising from the finite time of raster scanning and non-uniform deformations of the brain medium. We introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view (FOV) into overlapping spatial patches along all directions. The patches are registered at a sub-pixel resolution for rigid translation against a regularly updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid artifacts in a piecewise-rigid manner. Existing approaches either do not scale well in terms of computational performance or are targeted at non-rigid artifacts arising just from the finite speed of raster scanning, and thus cannot correct for the non-rigid motion observable in datasets from a large FOV. NoRMCorre can be run in an online mode, resulting in motion registration of streaming data that is comparable to or even faster than real time. We evaluate its performance with simple yet intuitive metrics and compare against other non-rigid registration methods on simulated data and in vivo two-photon calcium imaging datasets. Open source Matlab and Python code is also made available. The proposed method and accompanying code can be useful for solving large scale image registration problems in calcium imaging, especially in the presence of non-rigid deformations. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
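A hedged sketch of the piecewise-rigid idea described above, using scikit-image's phase correlation rather than the authors' released Matlab/Python code; the patch size, overlap, and upsampling factor are illustrative assumptions.

```python
# Sketch of the piecewise-rigid idea: estimate a sub-pixel rigid shift per
# overlapping patch against a template (illustrative, not the NoRMCorre code).
import numpy as np
from skimage.registration import phase_cross_correlation

def patch_shifts(frame, template, patch=64, overlap=16):
    step = patch - overlap
    shifts = {}
    for i in range(0, frame.shape[0] - patch + 1, step):
        for j in range(0, frame.shape[1] - patch + 1, step):
            shift, _, _ = phase_cross_correlation(
                template[i:i + patch, j:j + patch],
                frame[i:i + patch, j:j + patch],
                upsample_factor=10)          # sub-pixel resolution
            shifts[(i, j)] = shift           # (dy, dx) for this patch
    # In the full method these per-patch shifts are up-sampled into a smooth
    # motion field and the frame is warped accordingly.
    return shifts
```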
2006-08-01
and analytical techniques. Materials with larger grains, such as gamma titanium aluminide, can be instrumented with strain gages on each grain... scale. Materials such as Ti-15Al-33Nb (at.%) have a significantly smaller microstructure than gamma titanium aluminide, therefore strain gages can... contact fatigue problems that arise at the blade-disk interface in aircraft engines. The stress fields can be used to predict the performance of
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
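The JFNK idea mentioned above can be illustrated with a small self-contained sketch (not MOOSE or PETSc code): the Krylov solver only needs Jacobian-vector products, which are approximated by finite-differencing the residual, so the Jacobian matrix is never assembled. The toy residual, perturbation size, and tolerances are assumptions.

```python
# Hedged sketch of the Jacobian-free Newton-Krylov (JFNK) idea:
# J(u) v is approximated as (R(u + eps*v) - R(u)) / eps inside GMRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, u0, tol=1e-8, max_newton=20, eps=1e-7):
    u = u0.copy()
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        Jv = LinearOperator(
            (u.size, u.size),
            matvec=lambda v: (residual(u + eps * v) - r) / eps)
        du, _ = gmres(Jv, -r)    # Krylov solve needs only matvec products
        u = u + du
    return u

# toy nonlinear system: u_i^3 + u_i - 1 = 0 for every unknown
print(jfnk(lambda u: u**3 + u - 1.0, np.zeros(5)))
```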
Extensions of the standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramond, P.
1983-01-01
In these lectures we focus on several issues that arise in theoretical extensions of the standard model. First we describe the kinds of fermions that can be added to the standard model without affecting known phenomenology. We focus in particular on three types: the vector-like completion of the existing fermions as would be predicted by a Kaluza-Klein type theory, which we find cannot be realistically achieved without some chiral symmetry; fermions which are vector-like by themselves, such as do appear in supersymmetric extensions; and finally anomaly-free chiral sets of fermions. We note that a chiral symmetry, such as the Peccei-Quinn symmetry, can be used to produce a vector-like theory which, at scales less than M_W, appears to be chiral. Next, we turn to the analysis of the second hierarchy problem which arises in Grand Unified extensions of the standard model, and plays a crucial role in proton decay of supersymmetric extensions. We review the known mechanisms for avoiding this problem and present a new one which seems to lead to the (family) triplication of the gauge group. Finally, this being a summer school, we present a list of homework problems. 44 references.
The Problems of Diagnosis and Remediation of Dyscalculia.
ERIC Educational Resources Information Center
Price, Nigel; Youe, Simon
2000-01-01
Focuses on the problems of diagnosis and remediation of dyscalculia. Explores whether there is justification for believing that specific difficulty with mathematics arises jointly with a specific language problem, or whether a specific difficulty with mathematics can arise independently of problems with language. Uses a case study to illuminate…
A Multi-Scale Integrated Approach to Representing Watershed Systems: Significance and Challenges
NASA Astrophysics Data System (ADS)
Kim, J.; Ivanov, V. Y.; Katopodes, N.
2013-12-01
A range of processes associated with supplying services and goods to human society originate at the watershed level. Predicting watershed response to forcing conditions is of high interest for many practical societal problems; however, it remains challenging due to two significant properties of watershed systems, i.e., connectivity and non-linearity. Connectivity implies that disturbances arising at any larger scale will necessarily propagate and affect local-scale processes; their local effects consequently influence other processes, often through nonlinear relationships. Physically-based, process-scale modeling is needed to approach the understanding and proper assessment of non-linear effects between the watershed processes. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion and sediment transport, tRIBS-OFM-HRM (Triangulated irregular network-based Real-time Integrated Basin Simulator-Overland Flow Model-Hairsine and Rose Model). This coupled model offers the advantage of exploring the hydrological effects of watershed physical factors such as topography, vegetation, and soil, as well as their feedback mechanisms. Several examples investigating the effects of vegetation on flow movement, the role of the soil's substrate on sediment dynamics, and the driving role of topography on morphological processes are illustrated. We show how this comprehensive modeling tool can help understand interconnections and nonlinearities of the physical system, e.g., how vegetation affects hydraulic resistance depending on slope, vegetation cover fraction, discharge, and bed roughness condition; how the soil's substrate condition impacts erosion processes with a non-unique characteristic at the scale of a zero-order catchment; and how topographic changes affect spatial variations of morphologic variables. Due to the feedback and compensatory nature of mechanisms operating in different watershed compartments, our conclusion is that a key to representing watershed systems lies in an integrated, interdisciplinary approach, whereby a physically-based model is used for assessments/evaluations associated with future changes in landuse, climate, and ecosystems.
Multi-scale and multi-domain computational astrophysics.
van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies
2014-08-06
Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena such a connection is indeed present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production-quality community codes and provides a specialized environment to connect this plethora of solvers into a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Electrical stimulus artifact cancellation and neural spike detection on large multi-electrode arrays
Grosberg, Lauren E.; Madugula, Sasidhar; Litke, Alan; Cunningham, John; Chichilnisky, E. J.; Paninski, Liam
2017-01-01
Simultaneous electrical stimulation and recording using multi-electrode arrays can provide a valuable technique for studying circuit connectivity and engineering neural interfaces. However, interpreting these measurements is challenging because the spike sorting process (identifying and segregating action potentials arising from different neurons) is greatly complicated by electrical stimulation artifacts across the array, which can exhibit complex and nonlinear waveforms, and overlap temporally with evoked spikes. Here we develop a scalable algorithm based on a structured Gaussian Process model to estimate the artifact and identify evoked spikes. The effectiveness of our methods is demonstrated in both real and simulated 512-electrode recordings in the peripheral primate retina with single-electrode and several types of multi-electrode stimulation. We establish small error rates in the identification of evoked spikes, with a computational complexity that is compatible with real-time data analysis. This technology may be helpful in the design of future high-resolution sensory prostheses based on tailored stimulation (e.g., retinal prostheses), and for closed-loop neural stimulation at a much larger scale than currently possible. PMID:29131818
Mena, Gonzalo E; Grosberg, Lauren E; Madugula, Sasidhar; Hottowy, Paweł; Litke, Alan; Cunningham, John; Chichilnisky, E J; Paninski, Liam
2017-11-01
Simultaneous electrical stimulation and recording using multi-electrode arrays can provide a valuable technique for studying circuit connectivity and engineering neural interfaces. However, interpreting these measurements is challenging because the spike sorting process (identifying and segregating action potentials arising from different neurons) is greatly complicated by electrical stimulation artifacts across the array, which can exhibit complex and nonlinear waveforms, and overlap temporally with evoked spikes. Here we develop a scalable algorithm based on a structured Gaussian Process model to estimate the artifact and identify evoked spikes. The effectiveness of our methods is demonstrated in both real and simulated 512-electrode recordings in the peripheral primate retina with single-electrode and several types of multi-electrode stimulation. We establish small error rates in the identification of evoked spikes, with a computational complexity that is compatible with real-time data analysis. This technology may be helpful in the design of future high-resolution sensory prostheses based on tailored stimulation (e.g., retinal prostheses), and for closed-loop neural stimulation at a much larger scale than currently possible.
Liu, Jinjun; Leng, Yonggang; Lai, Zhihui; Fan, Shengbo
2018-04-25
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal with existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second and higher harmonic frequencies, tends to be a large parameter. To solve the problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of the "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method.
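As a rough illustration of the frequency-exchange step (not the authors' FERSR implementation), the analytic signal obtained via a Hilbert transform can be multiplied by a complex exponential so that a high-frequency fault component is shifted down to a small-parameter frequency; the sampling rate, component frequency, and shift amount below are illustrative assumptions.

```python
# Illustrative single-sideband frequency shift: move a 1 kHz component down to
# 5 Hz so it satisfies the small-parameter condition of classical SR.
import numpy as np
from scipy.signal import hilbert

fs = 20_000.0                        # sampling frequency (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1000 * t)     # fault component at 1 kHz (illustrative)

f_shift = 995.0                      # amount to shift the spectrum down
analytic = hilbert(x)                # positive-frequency (single-sideband) signal
x_shifted = np.real(analytic * np.exp(-2j * np.pi * f_shift * t))
# x_shifted now contains the component at 1000 - 995 = 5 Hz.
```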
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
Soft X-ray excess in the Coma cluster from a Cosmic Axion Background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angus, Stephen; Conlon, Joseph P.; Marsh, M.C. David
2014-09-01
We show that the soft X-ray excess in the Coma cluster can be explained by a cosmic background of relativistic axion-like particles (ALPs) converting into photons in the cluster magnetic field. We provide a detailed self-contained review of the cluster soft X-ray excess, the proposed astrophysical explanations and the problems they face, and explain how a 0.1-1 keV axion background naturally arises at reheating in many string theory models of the early universe. We study the morphology of the soft excess by numerically propagating axions through stochastic, multi-scale magnetic field models that are consistent with observations of Faraday rotation measures from Coma. By comparing to ROSAT observations of the 0.2-0.4 keV soft excess, we find that the overall excess luminosity is easily reproduced for g_aγγ ∼ 2 × 10^-13 GeV^-1. The resulting morphology is highly sensitive to the magnetic field power spectrum. For Gaussian magnetic field models, the observed soft excess morphology prefers magnetic field spectra with most power in coherence lengths on O(3 kpc) scales over those with most power on O(12 kpc) scales. Within this scenario, we bound the mean energy of the axion background to 50 eV ≲ ⟨E_a⟩ ≲ 250 eV, the axion mass to m_a ≲ 10^-12 eV, and derive a lower bound on the axion-photon coupling g_aγγ ≳ √(0.5/ΔN_eff) × 1.4 × 10^-13 GeV^-1.
Highly Fluorescent Noble Metal Quantum Dots
Zheng, Jie; Nicovich, Philip R.; Dickson, Robert M.
2009-01-01
Highly fluorescent, water-soluble, few-atom noble metal quantum dots have been created that behave as multi-electron artificial atoms with discrete, size-tunable electronic transitions throughout the visible and near IR. These “molecular metals” exhibit highly polarizable transitions and scale in size according to the simple relation E_Fermi/N^(1/3), predicted by the free electron model of metallic behavior. This simple scaling indicates that fluorescence arises from intraband transitions of free electrons and that these conduction electron transitions are the low number limit of the plasmon – the collective dipole oscillations occurring when a continuous density of states is reached. Providing the “missing link” between atomic and nanoparticle behavior in noble metals, these emissive, water-soluble Au nanoclusters open new opportunities for biological labels, energy transfer pairs, and light emitting sources in nanoscale optoelectronics. PMID:17105412
Globalisation, Mergers and "Inadvertent Multi-Campus Universities": Reflections from Wales
ERIC Educational Resources Information Center
Zeeman, Nadine; Benneworth, Paul
2017-01-01
Multi-site universities face the challenge of integrating campuses that may have different profiles and orientations arising from place-specific attachments. Multi-campus universities created via mergers seeking to ensure long-term financial sustainability, and increasing their attractiveness to students, create a tension in campuses' purposes. We…
NASA Astrophysics Data System (ADS)
Donner, R. V.; Potirakis, S. M.; Barbosa, S. M.; Matos, J. A. O.; Pereira, A. J. S. C.; Neves, L. J. P. F.
2015-05-01
The presence or absence of long-range correlations in the environmental radioactivity fluctuations has recently attracted considerable interest. Among a multiplicity of practically relevant applications, identifying and disentangling the environmental factors controlling the variable concentrations of the radioactive noble gas radon is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we present a critical re-assessment of a multiplicity of complementary methods that have been previously applied for evaluating the presence of long-range correlations and fractal scaling in environmental radon variations, with a particular focus on the specific properties of the underlying time series. As an illustrative case study, we subsequently re-analyze two high-frequency records of indoor radon concentrations from Coimbra, Portugal, each of which spans several weeks of continuous measurements at a high temporal resolution of five minutes. Our results reveal that at the study site, radon concentrations exhibit complex multi-scale dynamics with qualitatively different properties at different time-scales: (i) essentially white noise in the high-frequency part (up to time-scales of about one hour), (ii) spurious indications of a non-stationary, apparently long-range correlated process (at time scales between some hours and one day) arising from marked periodic components, and (iii) low-frequency variability indicating a true long-range dependent process. In the presence of such multi-scale variability, common estimators of long-range memory in time series are prone to fail if applied to the raw data without previous separation of time-scales with qualitatively different dynamics.
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix-vector multiplication (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a Message Passing Interface (MPI) + OpenMP hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
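The benefit of exploiting symmetry can be sketched in a few lines: storing only the upper triangle lets each stored entry a_ij contribute to both y_i and y_j, roughly halving the data that must be moved. The SciPy sketch below is only a serial illustration of that idea, not the MPI+OpenMP implementation discussed above.

```python
# Symmetric SpMV from the upper triangle only: y = (U + strict(U)^T) x = A x.
import numpy as np
import scipy.sparse as sp

def symmetric_spmv(upper_csr, x):
    """y = A x where only the upper triangle (incl. diagonal) of A is stored."""
    y = upper_csr @ x                      # contribution of a_ij to y_i
    strict = sp.triu(upper_csr, k=1)       # strictly upper part
    y += strict.T @ x                      # mirrored contribution to y_j
    return y

A = sp.random(1000, 1000, density=1e-3, format="csr")
A = A + A.T                                # make it symmetric
x = np.random.rand(1000)
assert np.allclose(symmetric_spmv(sp.triu(A, format="csr"), x), A @ x)
```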
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, modeling interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
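Two ingredients named above, the Potts-model energy and the Cantor pairing function, are simple enough to sketch; the edge weights and labels below are illustrative and this is not the authors' implementation.

```python
# Sketch of two building blocks mentioned in the abstract.
import numpy as np

def cantor_pair(k1, k2):
    # Bijective mapping of a pair of non-negative integers to one integer,
    # used to merge identically valued neighbours and shrink the graph.
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def potts_energy(labels, edges, weights):
    # H = - sum over edges (i, j) of J_ij * delta(s_i, s_j); lower is better,
    # i.e. strongly connected pixels prefer to share a label.
    return -sum(w for (i, j), w in zip(edges, weights)
                if labels[i] == labels[j])

edges = [(0, 1), (1, 2), (2, 3)]
weights = [1.0, 0.2, 1.0]
print(potts_energy(np.array([0, 0, 1, 1]), edges, weights))  # -2.0
```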
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Miki, K.; Uzawa, K.
2006-11-30
During the past years the understanding of multi-scale interaction problems has increased significantly. However, at present there exists a flora of different analytical models for investigating multi-scale interactions, and hardly any specific comparisons have been performed among these models. In this work two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results even though the assumption on the spectral difference is used in the WKE approach.
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
Karniadakis, “Resilient algorithms for reconstructing and simulating gappy flow fields in CFD ”, Fluid Dynamic Research, vol. 47, 051402, 2015. 2. Y. Yu, H...simulation, domain decomposition, CFD , gappy data, estimation theory, and gap-tooth algorithm. 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF...objective of this project was to develop a general CFD framework for multifidelity simula- tions to target multiscale problems but also resilience in
Data mining on long-term barometric data within the ARISE2 project
NASA Astrophysics Data System (ADS)
Hupe, Patrick; Ceranna, Lars; Pilger, Christoph
2016-04-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) led to the implementation of an international infrasound array network. The International Monitoring System (IMS) network includes 48 certified stations, each providing data for up to 15 years. As part of work package 3 of the ARISE2 project (Atmospheric dynamics Research InfraStructure in Europe, phase 2), the data sets will be statistically evaluated with regard to atmospheric dynamics. The current study focusses on fluctuations of absolute air pressure. Time series have been analysed for 17 monitoring stations which are located all over the world between Greenland and Antarctica along the latitudes to represent different climate zones and characteristic atmospheric conditions. Hence this enables quantitative comparisons between those regions. Analyses are shown including wavelet power spectra, multi-annual time series of average variances with regard to long-wave scales, and spectral densities to derive characteristics and special events. Evaluations reveal periodicities in average variances on the 2- to 20-day scale, with a maximum in the winter months and a minimum in summer of the respective hemisphere. This basically applies to time series of IMS stations beyond the tropics, where the dominance of cyclones and anticyclones changes with seasons. Furthermore, spectral density analyses illustrate striking signals for several dynamic activities within one day, e.g., the semidiurnal tide.
NASA Astrophysics Data System (ADS)
Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki
2010-12-01
We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose a long data transfer time between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
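The ghost-zone fix described above can be illustrated in NumPy terms: a strided boundary plane is packed into a contiguous buffer before the device-host transfer. The array sizes and the transfer step itself are placeholders, not taken from the paper.

```python
# Illustrative packing of a non-contiguous ghost zone into a contiguous buffer
# before transfer between GPU and host (or between subdomains).
import numpy as np

field = np.zeros((200, 200, 200), dtype=np.float32)  # one subdomain (assumed size)
# A y-z plane (field[-1, :, :]) is contiguous in C order, but an x-z plane
# (field[:, -1, :]) is strided; moving it element by element is slow.
# Packing it first makes the transfer a single contiguous block:
ghost_xz = np.ascontiguousarray(field[:, -1, :])
# ... send ghost_xz to the neighbouring rank / host in one transfer ...
```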
Guidance and control strategies for aerospace vehicles
NASA Technical Reports Server (NTRS)
Naidu, Desineni S.; Hibey, Joseph L.
1989-01-01
The optimal control problem arising in coplanar orbital transfer employing aeroassist technology and the fuel-optimal control problem arising in orbital transfer vehicles employing aeroassist technology are addressed.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation showed that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem is provided establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, if the scaling matrices sequence satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove the O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of a numerical experience on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
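A minimal sketch of a scaled gradient projection iteration for a nonnegatively constrained least-squares problem is given below; the diagonal scaling and the fixed step length are simplifications relative to the scaling and line-search rules analysed in the paper.

```python
# Simplified scaled gradient projection: x_{k+1} = P_{x>=0}(x_k - alpha * D_k * grad).
import numpy as np

def sgp_nonneg(A, b, iters=500, alpha=1e-3):
    x = np.ones(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of 0.5*||Ax - b||^2
        D = 1.0 / (1.0 + np.abs(grad))      # diagonal scaling (illustrative choice)
        x = np.maximum(x - alpha * D * grad, 0.0)   # projection onto x >= 0
    return x

A = np.random.rand(50, 20)
b = A @ np.abs(np.random.rand(20))
print(np.linalg.norm(A @ sgp_nonneg(A, b) - b))     # residual decreases with iterations
```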
Multi-scale diffuse interface modeling of multi-component two-phase flow with partial miscibility
NASA Astrophysics Data System (ADS)
Kou, Jisheng; Sun, Shuyu
2016-08-01
In this paper, we introduce a diffuse interface model to simulate multi-component two-phase flow with partial miscibility based on a realistic equation of state (e.g. Peng-Robinson equation of state). Because of partial miscibility, thermodynamic relations are used to model not only interfacial properties but also bulk properties, including density, composition, pressure, and realistic viscosity. As far as we know, this is the first effort to use diffuse interface modeling based on an equation of state for modeling of multi-component two-phase flow with partial miscibility. In numerical simulation, the key issue is to resolve the high contrast of scales from the microscopic interface composition to macroscale bulk fluid motion, since the interface has only a nanoscale thickness. To efficiently solve this challenging problem, we develop a multi-scale simulation method. At the microscopic scale, we deduce a reduced interfacial equation under reasonable assumptions, and then we propose a formulation of capillary pressure, which is consistent with macroscale flow equations. Moreover, we show that the Young-Laplace equation is an approximation of this capillarity formulation, and this formulation is also consistent with the concept of Tolman length, which is a correction to the Young-Laplace equation. At the macroscopic scale, the interfaces are treated as discontinuous surfaces separating two phases of fluids. Our approach differs from conventional sharp-interface two-phase flow models in that we use the capillary pressure directly instead of a combination of surface tension and the Young-Laplace equation, because capillarity can be calculated from our proposed capillarity formulation. A compatible condition is also derived for the pressure in flow equations. Furthermore, based on the proposed capillarity formulation, we design an efficient numerical method for directly computing the capillary pressure between two fluids composed of multiple components. Finally, numerical tests are carried out to verify the effectiveness of the proposed multi-scale method.
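For reference, the textbook relations alluded to above (not the paper's exact capillarity formulation) are the Young-Laplace equation for a spherical interface of radius R and its leading-order Tolman correction with Tolman length δ:

```latex
\[
  p_c = \frac{2\sigma_\infty}{R}
  \qquad \text{(Young--Laplace, spherical interface)},
\]
\[
  \sigma(R) \simeq \frac{\sigma_\infty}{1 + 2\delta/R}
  \quad\Longrightarrow\quad
  p_c \approx \frac{2\sigma_\infty}{R}\left(1 - \frac{2\delta}{R}\right)
  \qquad \text{(Tolman-corrected form)}.
\]
```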
Human connectome module pattern detection using a new multi-graph MinMax cut model.
De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng
2014-01-01
Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect the consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially associate to specific brain functions shared by all subjects. We validate our method by analyzing the weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named the Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
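Levy-flight steps of the kind incorporated here are commonly generated with Mantegna's algorithm; the sketch below shows that standard recipe with illustrative parameter values, and is not necessarily the exact operator used in LFMVO.

```python
# Levy-flight step via Mantegna's algorithm (beta and step scale are illustrative).
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed step lengths

# Perturb the best universe found so far to escape stagnation:
best = np.zeros(10)
candidate = best + 0.01 * levy_step(best.size)
```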
Weisgerber, D W; Erning, K; Flanagan, C L; Hollister, S J; Harley, B A C
2016-08-01
A particular challenge in biomaterial development for treating orthopedic injuries stems from the need to balance bioactive design criteria with the mechanical and geometric constraints governed by the physiological wound environment. Such trade-offs are of particular importance in large craniofacial bone defects, which arise from both acute trauma and chronic conditions. Ongoing efforts in our laboratory have demonstrated a mineralized collagen biomaterial that can promote human mesenchymal stem cell osteogenesis in the absence of osteogenic media but that possesses suboptimal mechanical properties with regard to use in loaded wound sites. Here we demonstrate a multi-scale composite consisting of a highly bioactive mineralized collagen-glycosaminoglycan scaffold with micron-scale porosity and a polycaprolactone (PCL) support frame with millimeter-scale porosity. Fabrication of the composite was performed by impregnating the PCL support frame with the mineral scaffold precursor suspension prior to lyophilization. Here we evaluate the mechanical properties, permeability, and bioactivity of the resulting composite. Results indicated that the PCL support frame dominates the bulk mechanical response of the composite, resulting in a 6000-fold increase in modulus compared to the mineral scaffold alone. Similarly, the incorporation of the mineral scaffold matrix into the composite resulted in a higher specific surface area compared to the PCL frame alone. The increased specific surface area in the collagen-PCL composite promoted increased initial attachment of porcine adipose derived stem cells versus the PCL construct. Copyright © 2016 Elsevier Ltd. All rights reserved.
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.
NASA Technical Reports Server (NTRS)
Martin, William G.; Cairns, Brian; Bal, Guillaume
2014-01-01
This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
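The "two solves per channel" property can be illustrated on a small linear stand-in for the VRTE: one forward solve and one adjoint solve yield the full misfit gradient regardless of the number of parameters. The matrices below are generic placeholders, not radiative transfer operators.

```python
# Adjoint gradient of J(p) = 0.5*||M u(p) - d||^2 with A(p) u = s and
# A(p) = A0 + sum_i p_i A_i:  dJ/dp_i = -lambda^T A_i u,
# where A^T lambda = M^T (M u - d).  Two solves give the whole gradient.
import numpy as np

n, n_par = 30, 8
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((n, n)) for _ in range(n_par)]   # dA/dp_i (placeholders)
A0 = 10 * np.eye(n)
M = rng.standard_normal((5, n))                                # measurement operator
s = rng.standard_normal(n)                                     # source
d = rng.standard_normal(5)                                     # data

def misfit_and_gradient(p):
    A = A0 + sum(pi * Ai for pi, Ai in zip(p, A_list))
    u = np.linalg.solve(A, s)                 # forward solve
    r = M @ u - d
    lam = np.linalg.solve(A.T, M.T @ r)       # adjoint solve
    grad = np.array([-lam @ (Ai @ u) for Ai in A_list])
    return 0.5 * r @ r, grad

J, g = misfit_and_gradient(np.zeros(n_par))
# g can be checked against finite differences of J; only two solves were needed.
```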
[Continuity and discontinuity of the geomerida: the bionomic and biotic aspects].
Kafanov, A I
2005-01-01
The view of the spatial structure of the geomerida (Earth's life cover) as a continuum that prevails in modern phytocoenology is mostly determined by a physiognomic (landscape-bionomic) discrimination of vegetation components. In this connection, the geography of life forms appears as the subject of landscape-bionomic biogeography. In zoocoenology there is a tendency toward a synthesis of the alternative concepts, based on the assumption that there is no absolute continuum and no absolute discontinuum in organic nature. The problem of continuum and discontinuum of the living cover, being a problem of scale, arises from the fractal structure of the geomerida. The continuum mainly belongs to regularities of topological order. At the regional and subregional scale the continuum of biochores is rather rare. The objective evidence of relative discontinuity of the living cover is provided by significant alterations of species diversity at the regional, subregional and even topological scale. In contrast to units conventionally discriminated within physiognomically continuous vegetation, the same biotic complexes, represented as operational units of biogeographical and biocoenological zoning, are distinguished repeatedly and independently by different researchers. An area occupied by a certain flora (fauna, biota) could be considered as an elementary unit of biotic diversity (elementary biotic complex).
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has hindered such applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement based on the Retinex theory is studied, a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that algorithms based on the Retinex theory have the ability to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
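A minimal sketch of the single- and multi-scale Retinex algorithms compared above (the Gaussian scales are illustrative assumptions, and the color-restoration step of MSRCR is omitted):

```python
# Single-scale Retinex: log(image) - log(Gaussian-blurred image); the
# multi-scale variant averages this output over several blur scales.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    img = image.astype(np.float64) + 1.0          # avoid log(0)
    return np.log(img) - np.log(gaussian_filter(img, sigma))

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    return np.mean([single_scale_retinex(image, s) for s in sigmas], axis=0)
```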
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within a factor n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver
NASA Astrophysics Data System (ADS)
Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.
2011-11-01
FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
Streibelt, M; Gerwinn, H; Hansmeier, T; Thren, K; Müller-Fahrnow, W
2007-10-01
For a number of years, work-related interventions in medical rehabilitation (MBO) have been developed. Basically, these interventions concentrate on vocational problems of rehabilitees whose health disorders are strongly associated with contextual factors of the environment as well as personal factors. Previous studies showed a close relationship between the success of an intervention and identification of a specific demand. In fact, there are several clinical concepts regarding specific demand, but there is still a lack of appropriate instruments for identifying occupational challenges. Therefore SIMBO (Screening Instrument for Identification of a Demand for Medical-Vocational Oriented Rehabilitation) has been developed recently. By using a scale for the intensity of work-related problems as well as a cut-off point, SIMBO is able to identify patients with and without a demand for work-related interventions. Analyses of construct validity and predictive validity were carried out on two different samples--a multi-clinic sample (patients with musculoskeletal disorders) and a sample from the German statutory pension insurance agency DRV Westfalen (successful applications for medical rehabilitation). In this context the cut-off level discussion is very important. In the multi-clinic sample--irrespective of cut-off definition--the SIMBO-based decision and the clinical identification of MBO demand were found to agree in 74-78% of the cases. This corresponds to a maximum adjusted correlation of r=0.59 (phi coefficient). Compared to the external ratings of vocational problems given by DRV staff in handling the applications, however, only little agreement is found (64%, r=0.25). In fact, SIMBO had in 77% (r=0.50) of the cases been able to correctly predict the work-related problems to be expected, a result far better than the prediction of these problems in the external ratings by DRV staff (54%, r=0.21). Also, return to work (RTW) in good health after six months can be predicted correctly by SIMBO in 77% of the cases. This means that the probability of RTW in good health is reduced by 90% (Odds Ratio=0.1) if work-related problems had been identified by SIMBO. Concerning its clinical as well as predictive quality, the validity of SIMBO-based ratings of work-related problems has been proven. Further, it has become obvious that SIMBO is suitable as an easy-to-handle tool for identification of a need for vocationally-focused interventions for use by the social insurance agencies which finance rehabilitation. Further interesting questions arise relative to application in different indications as well as potential uses as an outcome instrument.
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns, and it remains a difficult problem, in the automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description and explore the relationship between an object's shape and the energy distribution of its boundary across scales for shape extraction. MSED offers not only the main energy components, which represent the primary shape information at the lower scales, but also subordinate energy components, which represent local shape information at the higher detail scales. It therefore provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelets and classification by a BP neural network. Here, shape resampling extracts 256 boundary points from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method for selecting a start point through maximal expectation was given, which overcomes the inconvenience of traditional methods and yields rotation invariance. The method discriminates relatively well between normal citrus and seriously abnormal fruit, with a classification rate above 91.2%. The overall correct classification rate is 89.77%, and our method is more effective than the traditional method. The overall result can meet the requirements of fruit grading.
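For concreteness, a minimal sketch of per-scale wavelet energy features computed from a resampled boundary signature follows (Python, using PyWavelets). It is a generic illustration, not the authors' exact MSED definition: linear resampling of a centroid-distance signature stands in for the cubic-spline resampling, the start-point selection is omitted, and the wavelet family and decomposition level are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def boundary_energy_features(boundary_xy, n_samples=256, wavelet="db4", level=5):
    """Multi-scale energy features of a closed object boundary.

    boundary_xy : (N, 2) array of ordered boundary pixel coordinates.
    Returns the energy of the wavelet detail coefficients at each scale,
    normalized so that the features sum to one.
    """
    # centroid-distance signature, resampled to a fixed length
    centroid = boundary_xy.mean(axis=0)
    r = np.linalg.norm(boundary_xy - centroid, axis=1)
    t_old = np.linspace(0.0, 1.0, len(r), endpoint=False)
    t_new = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    r_resampled = np.interp(t_new, t_old, r)
    r_resampled = r_resampled / r_resampled.mean()          # scale normalization

    # multi-level wavelet decomposition and per-scale energy
    coeffs = pywt.wavedec(r_resampled, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail levels only
    return energies / energies.sum()
```

A feature vector of this kind would then be fed, per fruit, into a small classifier (a BP neural network in the study) together with labels from visual grading.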
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
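To make the telescoping-sum idea concrete, a minimal MLMC sketch follows, using the finance-type Euler-discretized SDE example mentioned above (geometric Brownian motion) rather than the study's streamline flow solver. Sample counts, parameters and the level hierarchy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gbm_sample(level, coupled, s0=1.0, r=0.05, sigma=0.2, T=1.0, m0=2):
    """One MLMC sample of S(T) for geometric Brownian motion.

    Level l uses m0 * 2**l Euler steps; the coarse path (level l-1) is driven by
    the *same* Brownian increments, as required for the MLMC correction term.
    """
    n_f = m0 * 2 ** level
    dt_f = T / n_f
    dw = rng.normal(0.0, np.sqrt(dt_f), size=n_f)

    s_f = s0
    for w in dw:
        s_f += r * s_f * dt_f + sigma * s_f * w
    if not coupled:
        return s_f

    s_c, dt_c = s0, 2.0 * dt_f
    for k in range(0, n_f, 2):
        s_c += r * s_c * dt_c + sigma * s_c * (dw[k] + dw[k + 1])
    return s_f, s_c

def mlmc_estimate(n_samples):
    """E[Q_L] ~ E[Q_0] + sum_{l>=1} E[Q_l - Q_{l-1}], with n_samples[l] samples per level."""
    total = 0.0
    for level, n in enumerate(n_samples):
        if level == 0:
            total += np.mean([gbm_sample(0, coupled=False) for _ in range(n)])
        else:
            total += np.mean([np.subtract(*gbm_sample(level, coupled=True)) for _ in range(n)])
    return total

# more samples on cheap coarse levels, fewer on expensive fine levels
print(mlmc_estimate([4000, 1000, 250, 60]))   # E[S(T)] = exp(r*T) ~ 1.051
```

The variance of the level corrections decays with refinement, which is what lets most samples be taken on the cheap coarse levels and yields the speed-up over plain MC.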
A complex multi-notch astronomical filter to suppress the bright infrared sky.
Bland-Hawthorn, J; Ellis, S C; Leon-Saval, S G; Haynes, R; Roth, M M; Löhmannsröben, H-G; Horton, A J; Cuby, J-G; Birks, T A; Lawrence, J S; Gillingham, P; Ryder, S D; Trinh, C
2011-12-06
A long-standing and profound problem in astronomy is the difficulty in obtaining deep near-infrared observations due to the extreme brightness and variability of the night sky at these wavelengths. A solution to this problem is crucial if we are to obtain the deepest possible observations of the early Universe, as redshifted starlight from distant galaxies appears at these wavelengths. The atmospheric emission between 1,000 and 1,800 nm arises almost entirely from a forest of extremely bright, very narrow hydroxyl emission lines that varies on timescales of minutes. The astronomical community has long envisaged the prospect of selectively removing these lines, while retaining high throughput between them. Here we demonstrate such a filter for the first time, presenting results from the first on-sky tests. Its use on current 8 m telescopes and future 30 m telescopes will open up many new research avenues in the years to come.
A New Time-varying Concept of Risk in a Changing Climate.
Sarhadi, Ali; Ausín, María Concepción; Wiper, Michael P
2016-10-20
In a changing climate arising from anthropogenic global warming, the nature of extreme climatic events is changing over time. Existing analytical stationary-based risk methods, however, assume multi-dimensional extreme climate phenomena will not significantly vary over time. To strengthen the reliability of infrastructure designs and the management of water systems in the changing environment, multidimensional stationary risk studies should be replaced with a new adaptive perspective. The results of a comparison indicate that current multi-dimensional stationary risk frameworks are no longer applicable to projecting the changing behaviour of multi-dimensional extreme climate processes. Using static stationary-based multivariate risk methods may lead to undesirable consequences in designing water system infrastructures. The static stationary concept should be replaced with a flexible multi-dimensional time-varying risk framework. The present study introduces a new multi-dimensional time-varying risk concept to be incorporated in updating infrastructure design strategies under changing environments arising from human-induced climate change. The proposed generalized time-varying risk concept can be applied for all stochastic multi-dimensional systems that are under the influence of changing environments.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercial" and/or "public domain" software are also included whenever possible.
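One ingredient mentioned above, an iterative solver with preconditioning, is sketched below in serial form (Jacobi-preconditioned conjugate gradients). It is a stand-in under stated assumptions, not the paper's parallel domain-decomposition framework, and the 1-D Laplacian is only a placeholder for an FE system matrix.

```python
import numpy as np
import scipy.sparse as sp

def jacobi_pcg(A, b, rtol=1e-8, max_iter=1000):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner for SPD A."""
    m_inv = 1.0 / A.diagonal()               # preconditioner M^{-1} = diag(A)^{-1}
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= rtol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# illustrative SPD system: a 1-D Laplacian standing in for an FE stiffness matrix
n = 200
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = jacobi_pcg(A, b)
```

In a domain-decomposition setting, each subdomain would assemble and apply its own block of A, and the matrix-vector products and dot products above become the points where interprocessor communication occurs.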
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
Conflict resolution in multi-agent hybrid systems
DOT National Transportation Integrated Search
1996-12-01
A conflict resolution architecture for multi-agent hybrid systems with emphasis on Air Traffic Management Systems (ATMS) is presented. In such systems, conflicts arise in the form of potential collisions which are resolved locally by inter-agent coor...
Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology
Grimm, Volker; Railsback, Steven F.
2012-01-01
Modern ecology recognizes that modelling systems across scales and at multiple levels—especially to link population and ecosystem dynamics to individual adaptive behaviour—is essential for making the science predictive. ‘Pattern-oriented modelling’ (POM) is a strategy for doing just this. POM is the multi-criteria design, selection and calibration of models of complex systems. POM starts with identifying a set of patterns observed at multiple scales and levels that characterize a system with respect to the particular problem being modelled; a model from which the patterns emerge should contain the right mechanisms to address the problem. These patterns are then used to (i) determine what scales, entities, variables and processes the model needs, (ii) test and select submodels to represent key low-level processes such as adaptive behaviour, and (iii) find useful parameter values during calibration. Patterns are already often used in these ways, but a mini-review of applications of POM confirms that making the selection and use of patterns more explicit and rigorous can facilitate the development of models with the right level of complexity to understand ecological systems and predict their response to novel conditions. PMID:22144392
Designing and Evaluating Participatory Cyber-Infrastructure Systems for Multi-Scale Citizen Science
ERIC Educational Resources Information Center
Newman, Gregory J.
2010-01-01
Widespread and continuous spatial and temporal environmental data is essential for effective environmental monitoring, sustainable natural resource management, and ecologically responsible decisions. Our environmental monitoring, data management and reporting enterprise is not matched to current problems, concerns, and decision-making needs.…
NASA Astrophysics Data System (ADS)
Zhu, H.; Zhao, H. L.; Jiang, Y. Z.; Zang, W. B.
2018-05-01
Soil moisture is one of the important hydrological elements. Obtaining soil moisture accurately and effectively is of great significance for water resource management in irrigation areas. When soil moisture is retrieved from multiple remote sensing sources, multi-spatial-scale problems arise, which lead to inconsistencies between soil moisture contents retrieved at different spatial scales. In addition, agricultural water use management has a suitable spatial scale for soil moisture information, so as to satisfy the demands of dynamic management of water use and water demand in a given unit. We propose to use the land parcel unit as the minimum unit for soil moisture studies in agricultural water-use areas, taking soil characteristics, vegetation coverage of the underlying layer, and hydrological characteristics as the basis for dividing the study units, and we propose a division method for land parcel units. Based on multi-source thermal infrared and near-infrared remote sensing data, we calculate the NDVI and TVDI indices and build a statistical model between the TVDI index and the soil moisture measured at ground monitoring stations. We then study a soil moisture retrieval method at the land parcel unit scale. The method has been applied to the Hetao irrigation area. Results show that, compared with the pixel scale, the soil moisture content retrieved at the land parcel unit scale displays a stronger correlation with the true values. Hence, the remote sensing retrieval method for soil moisture content at the land parcel unit scale shows good applicability in the Hetao irrigation area. We converted the research unit to the land parcel unit scale. Using land parcel units with uniform crops and soil attributes as the research units better matches the characteristics of agricultural water-use areas, avoids problems such as the decomposition of mixed pixels and an excessive dependence on high-resolution data caused by pixel-based research units, and does not involve compromises between spatial scale and simulation precision as in grid-based simulation. When the application needs are met, the production efficiency of the products can also be improved to a certain degree.
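A minimal sketch of the TVDI step follows, using the usual dry-edge/wet-edge construction (per-NDVI-bin maximum and minimum land surface temperature fitted linearly). Bin count and thresholds are assumptions, and the subsequent regression against station soil moisture and the aggregation to land parcel units are not shown.

```python
import numpy as np

def tvdi(ndvi, lst, n_bins=20):
    """Temperature-Vegetation Dryness Index for co-registered NDVI / LST arrays.

    TVDI = (Ts - Ts_wet) / (Ts_dry - Ts_wet), where the dry and wet edges are
    linear fits of the per-NDVI-bin maximum and minimum land surface temperature.
    """
    shape = ndvi.shape
    ndvi, lst = ndvi.ravel(), lst.ravel()
    ok = np.isfinite(ndvi) & np.isfinite(lst)

    edges = np.linspace(ndvi[ok].min(), ndvi[ok].max(), n_bins + 1)
    centres, t_max, t_min = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = ok & (ndvi >= lo) & (ndvi < hi)
        if sel.sum() < 5:                          # skip sparsely populated bins
            continue
        centres.append(0.5 * (lo + hi))
        t_max.append(lst[sel].max())
        t_min.append(lst[sel].min())

    a_dry, b_dry = np.polyfit(centres, t_max, 1)   # dry edge: Ts_dry = a*NDVI + b
    a_wet, b_wet = np.polyfit(centres, t_min, 1)   # wet edge: Ts_wet = a*NDVI + b

    ts_dry = a_dry * ndvi + b_dry
    ts_wet = a_wet * ndvi + b_wet
    return ((lst - ts_wet) / (ts_dry - ts_wet)).reshape(shape)
```

Per-parcel values could then be obtained by averaging the TVDI map over each land parcel polygon before applying the TVDI-soil-moisture regression.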
Homogenization techniques for population dynamics in strongly heterogeneous landscapes.
Yurk, Brian P; Cobbold, Christina A
2018-12-01
An important problem in spatial ecology is to understand how population-scale patterns emerge from individual-level birth, death, and movement processes. These processes, which depend on local landscape characteristics, vary spatially and may exhibit sharp transitions through behavioural responses to habitat edges, leading to discontinuous population densities. Such systems can be modelled using reaction-diffusion equations with interface conditions that capture local behaviour at patch boundaries. In this work we develop a novel homogenization technique to approximate the large-scale dynamics of the system. We illustrate our approach, which also generalizes to multiple species, with an example of logistic growth within a periodic environment. We find that population persistence and the large-scale population carrying capacity is influenced by patch residence times that depend on patch preference, as well as movement rates in adjacent patches. The forms of the homogenized coefficients yield key theoretical insights into how large-scale dynamics arise from the small-scale features.
NASA Astrophysics Data System (ADS)
Illangasekare, T. H.; Sakaki, T.; Smits, K. M.; Limsuwat, A.; Terrés-Nícoli, J. M.
2008-12-01
Understanding the dynamics of soil moisture distribution near the ground surface is of interest in various applications involving land-atmospheric interaction, evaporation from soils, CO2 leakage from carbon sequestration, vapor intrusion into buildings, and land mine detection. Natural soil heterogeneity in combination with water and energy fluxes at the soil surface creates complex spatial and temporal distributions of soil moisture. Even though considerable knowledge exists on how soil moisture conditions change in response to flux and energy boundary conditions, emerging problems involving land atmospheric interactions require the quantification of soil moisture variability both at high spatial and temporal resolutions. The issue of up-scaling becomes critical in all applications, as in general, field measurements are taken at sparsely distributed spatial locations that require assimilation with measurements taken using remote sensing technologies. It is our contention that the knowledge that will contribute to both improving our understanding of the fundamental processes and practical problem solution cannot be obtained easily in the field due to a number of constraints. One of these basic constraints is the inability to make measurements at very fine spatial scales at high temporal resolutions in naturally heterogeneous field systems. Also, as the natural boundary conditions at the land/atmospheric interface are not controllable in the field, even in pilot scale studies, the developed theories and tools cannot be validated for the diversity of conditions that could be expected in the field. Intermediate scale testing using soil tanks packed to represent different heterogeneous test configurations provides an attractive and cost effective alternative to investigate a class of problems involving the shallow unsaturated zone. In this presentation, we will discuss the advantages and limitations of studies conducted in both two and three dimensional intermediate scale test systems together with instrumentation and measuring techniques. The features and capabilities of a new coupled porous media/climate wind tunnel test system that allows for the study of near surface unsaturated soil moisture conditions under climate boundary conditions will also be presented with the goal of exploring opportunities to use such a facility to study some of the multi-scale problems in the near surface unsaturated zone.
Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology
NASA Astrophysics Data System (ADS)
Mohanty, B.; Kathuria, D.; Katzfuss, M.
2016-12-01
Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground based sensors on the other hand provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture is affected by high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point scale Mesonet data and coarse scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM model handles change of support problems easily while performing accurate uncertainty quantification arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising due to the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM model can be considered as a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
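Schematically, a hierarchical fusion model of the kind described above can be written as below. All symbols, the linear bias terms and the low-rank basis expansion are illustrative assumptions consistent with the abstract, not the authors' exact specification.

```latex
\begin{align*}
&\text{Data stage (point-scale and coarse-scale observations of the latent field $\theta$):}\\
&\quad y^{\mathrm{pt}}_j = \theta(s_j) + \varepsilon^{\mathrm{pt}}_j,
  \qquad \varepsilon^{\mathrm{pt}}_j \sim N\!\left(0,\sigma^2_{\mathrm{pt}}\right),\\
&\quad y^{\mathrm{rs}}_i = a + b\,\frac{1}{|B_i|}\int_{B_i}\theta(s)\,\mathrm{d}s
  + \varepsilon^{\mathrm{rs}}_i,
  \qquad \varepsilon^{\mathrm{rs}}_i \sim N\!\left(0,\sigma^2_{\mathrm{rs}}\right),\\
&\text{Process stage (covariate regression plus a low-rank spatial random effect):}\\
&\quad \theta(s) = x(s)^{\top}\beta + \textstyle\sum_{k=1}^{K}\phi_k(s)\,\eta_k,
  \qquad \boldsymbol{\eta} \sim N(0,\Sigma_\eta),\\
&\text{Prior stage: priors on } \beta,\ a,\ b,\ \sigma^2_{\mathrm{pt}},\ \sigma^2_{\mathrm{rs}},\ \Sigma_\eta.
\end{align*}
```

The footprint average over the block $B_i$ is what handles the change of support between 1 km products and point sensors, the bias terms absorb imperfect retrievals, and the basis expansion with $K \ll n$ terms is the kind of approximation that keeps the computation feasible for large datasets.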
Sibole, Scott C.; Erdemir, Ahmet
2012-01-01
Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method’s generalized nature also allows for substitution of any macro-scale and/or micro-scale model providing application for other multi-scale continuum mechanics problems. PMID:22649535
Human-Robot Teaming in a Multi-Agent Space Assembly Task
NASA Technical Reports Server (NTRS)
Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher
2004-01-01
NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.
A detail-preserved and luminance-consistent multi-exposure image fusion algorithm
NASA Astrophysics Data System (ADS)
Wang, Guanquan; Zhou, Yue
2018-04-01
When irradiance across a scene varies greatly, we can hardly obtain an image of the scene without over- or underexposed areas, because of the constraints of cameras. Multi-exposure image fusion (MEF) is an effective way to deal with this problem by fusing multiple exposures of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by adjusting block contributions using the luminance information between blocks; a detail-preserving smoothing filter stitches blocks smoothly without losing details. Experimental results show that the proposed method performs well in preserving luminance consistency and details.
Inequalities, Assessment and Computer Algebra
ERIC Educational Resources Information Center
Sangwin, Christopher J.
2015-01-01
The goal of this paper is to examine single variable real inequalities that arise as tutorial problems and to examine the extent to which current computer algebra systems (CAS) can (1) automatically solve such problems and (2) determine whether students' own answers to such problems are correct. We review how inequalities arise in contemporary…
2015-12-02
simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to ... the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e. ... the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Results on real data show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual quality and accuracy. PMID:29414893
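A minimal sketch of the iterative back-projection ingredient mentioned above follows. The blur kernel, scale factor and step size are assumptions, and this is only the IBP step, not the full AMDE-SR pipeline (no multi-scale decomposition, adaptive weighting or PSNR-constrained iteration).

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def iterative_back_projection(lr, scale=2, n_iter=30, step=1.0, blur_sigma=1.0):
    """Reconstruct a high-resolution image from a low-resolution one by IBP.

    Each iteration degrades the current HR estimate (blur + decimation), compares
    it with the observed LR image, and adds the up-projected residual back.
    """
    lr = np.asarray(lr, dtype=float)
    hr = zoom(lr, scale, order=3)                          # initial HR guess
    for _ in range(n_iter):
        simulated_lr = zoom(gaussian_filter(hr, blur_sigma), 1.0 / scale, order=3)
        residual = lr - simulated_lr
        hr += step * zoom(residual, scale, order=3)
    return hr

# illustrative use on a random test image
lr = np.random.default_rng(0).random((64, 64))
hr = iterative_back_projection(lr)
```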
Muszalik, Marta; Dijkstra, Ate; Kędziora-Kornatowska, Kornelia; Zielińska-Więczkowska, Halina
2012-01-01
The elderly population is characterized by a greater need for social welfare and medical treatment than other age groups. Along with aging, a number of health, nursing, caring, psychological and social problems emerge. The complexity of these problems results from overlapping and advancing involutional changes, multi-morbidity, decreased functional efficiency and other factors. The aim of the study was the assessment of health problems in geriatric patients, as well as of bio-psycho-social need deficiencies, in view of selected parameters of functional efficiency. The research group consisted of 300 persons (186 women and 114 men) from the Chair and Clinic of Geriatrics. The research was carried out using a diagnostic poll method with the application of the Activities of Daily Living (ADL) questionnaire for the assessment of daily efficiency on the basis of the Katz Scale; the Care Dependency Scale (CDS) questionnaire, used to measure the level of care dependency and human needs; Norton's bed-sore risk assessment scale; and the Nursing Care Category (NCC) questionnaire, applied to assess the need for nursing care. In most patients the results revealed manifestations of three or more illnesses. Functional efficiency was at a low or average level. Half of the subjects were at risk of bed sores and showed a high need-fulfillment deficiency. The highest level of deficiency was observed in patients in the oldest age group and in those suffering from multi-morbidity. Material status, education, place of residence and gender showed no significant influence on the level of need fulfillment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Sampling problems: The small scale structure of precipitation
NASA Technical Reports Server (NTRS)
Crane, R. K.
1981-01-01
The quantitative measurement of precipitation characteristics for any area on the surface of the Earth is not an easy task. Precipitation is highly variable in both space and time, and the distribution of surface rainfall for a given location is typically substantially skewed. There are a number of precipitation processes at work in the atmosphere, and few of them are well understood. The formal theory of sampling and estimating precipitation appears considerably deficient. Little systematic attention is given to the non-sampling errors that always arise in utilizing any measurement system. Although the precipitation measurement problem is an old one, it continues to be in need of systematic and careful attention. A brief history of the presently competing measurement technologies should aid in understanding the problems inherent in this measurement task.
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides, with a certain probability, which links to its neighbours to use according to the received data. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using an iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and in the meantime several criteria for consensus are obtained. A numerical example shows the reliability of the proposed methods.
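A minimal simulation of the random-link-selection idea on a scale-free graph is sketched below. It uses a plain averaging update with per-step Bernoulli selection of neighbouring links on a Barabási–Albert graph; the paper's actual protocol (based on the range of the received data) and its convergence proof are not reproduced, and all parameters are illustrative.

```python
import numpy as np
import networkx as nx

def random_link_consensus(n=200, m=3, p_select=0.5, eps=0.2, n_steps=300, seed=0):
    """Consensus on a scale-free network with randomly selected links per step."""
    rng = np.random.default_rng(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)   # power-law degree distribution
    x = rng.uniform(0.0, 1.0, size=n)               # initial agent states

    for _ in range(n_steps):
        x_new = x.copy()
        for i in g.nodes:
            # each agent samples a random subset of its neighbours this step
            nbrs = [j for j in g.neighbors(i) if rng.random() < p_select]
            if nbrs:
                x_new[i] += eps * np.mean([x[j] - x[i] for j in nbrs])
        x = x_new
    return x

states = random_link_consensus()
print("spread after 300 steps:", states.max() - states.min())  # should be near zero
```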
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
NASA Astrophysics Data System (ADS)
Hunt, Peter A.; Segall, Matthew D.; Tyzack, Jonathan D.
2018-02-01
In the development of novel pharmaceuticals, the knowledge of how many, and which, Cytochrome P450 isoforms are involved in the phase I metabolism of a compound is important. Potential problems can arise if a compound is metabolised predominantly by a single isoform in terms of drug-drug interactions or genetic polymorphisms that would lead to variations in exposure in the general population. Combined with models of regioselectivities of metabolism by each isoform, such a model would also aid in the prediction of the metabolites likely to be formed by P450-mediated metabolism. We describe the generation of a multi-class random forest model to predict which, out of a list of the seven leading Cytochrome P450 isoforms, would be the major metabolising isoforms for a novel compound. The model has a 76% success rate with a top-1 criterion and an 88% success rate for a top-2 criterion and shows significant enrichment over randomised models.
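A generic sketch of a multi-class random forest scored with top-1 and top-2 criteria follows (scikit-learn). The random placeholder features stand in for computed molecular descriptors, the dataset is synthetic, and the listed isoforms are simply the seven major ones commonly cited, an assumption as to the exact set used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

ISOFORMS = ["1A2", "2B6", "2C8", "2C9", "2C19", "2D6", "3A4"]  # assumed list of seven P450s

# placeholder data: rows = compounds, columns = molecular descriptors (synthetic here)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.integers(0, len(ISOFORMS), size=500)     # index of the major metabolising isoform

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)

# top-k success rate: the true isoform is among the k highest-probability classes
proba = clf.predict_proba(X_te)
ranked = clf.classes_[np.argsort(proba, axis=1)[:, ::-1]]
for k in (1, 2):
    hits = np.mean([y_true in ranked[i, :k] for i, y_true in enumerate(y_te)])
    print(f"top-{k} success rate: {hits:.2f}")
```

With random labels the top-1 rate sits near 1/7; the 76% and 88% figures quoted above are what the real descriptor set and training data achieve.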
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle
Barriuso, Alberto L.; Villarrubia González, Gabriel; De Paz, Juan F.; Lozano, Álvaro; Bajo, Javier
2018-01-01
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations which allows a new embedded agent model to be deployed in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock has been developed. PMID:29301310
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
NASA Technical Reports Server (NTRS)
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model is improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The response to the multi-scale model alone is, however, limited, in cases where the vortical structures are small and scattered thus requiring prohibitively expensive grids in order to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly varying in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to the eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt model both in mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
2015-09-30
Meneveau, C., and L. Shen (2014), Large-eddy simulation of offshore wind farm, Physics of Fluids, 26, 025101. Zhang, Z., Fringer, O.B., and S.R. ... being centimeter scale, surface mixed layer processes arising from the combined actions of tides, winds and mesoscale currents. Issues related to ... the internal wave field and how it impacts the surface waves. APPROACH We are focusing on the problem of modification of the wind-wave field
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
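As a toy illustration of the sparsity-regularized inversion idea, the sketch below runs iterative soft-thresholding (ISTA) on a random linear operator and a synthetic sparse signal. It is not the Glimpse algorithm (which uses a wavelet-sparsity prior, shear/flexion forward models, reduced-shear corrections and per-galaxy redshifts); operator, noise level and regularization weight are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, n_iter=200):
    """Solve  min_x  0.5*||A x - y||^2 + lam*||x||_1  by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
    return x

# toy inverse problem: a few noisy linear measurements of a sparse signal
rng = np.random.default_rng(1)
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 3.0, k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

x_rec = ista(A, y, lam=0.02)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In a mass-mapping context the sparse coefficients would live in a multi-scale wavelet dictionary rather than the pixel basis, which is how small-scale substructure is preserved without binning the input catalogue.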
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
In this thesis, we study two interdisciplinary problems in the framework of statistical physics, which show the broad applicability of physics on problems with various origins. The first problem corresponds to an optimization problem in allocating resources on random regular networks. Frustrations arise from competition for resources. When the initial resources are uniform, different regimes with discrete fractions of satisfied nodes are observed, resembling the Devil's staircase. We apply the spin glass theory in analyses and demonstrate how functional recursions are converted to simple recursions of probabilities. Equilibrium properties such as the average energy and the fraction of free nodes are derived. When the initial resources are bimodally distributed, increases in the fraction of rich nodes induce a glassy transition, entering a glassy phase described by the existence of multiple metastable states, in which we employ the replica symmetry breaking ansatz for analysis. The second problem corresponds to the study of multi-agent systems modeling financial markets. Agents in the system trade among themselves, and self-organize to produce macroscopic trading behaviors resembling the real financial markets. These behaviors include the arbitraging activities, the setting up and the following of price trends. A phase diagram of these behaviors is obtained, as a function of the sensitivity of price and the market impact factor. We finally test the applicability of the models with real financial data including the Hang Seng Index, the Nasdaq Composite and the Dow Jones Industrial Average. A substantial fraction of agents gains faster than the inflation rate of the indices, suggesting the possibility of using multi-agent systems as a tool for real trading.
Simulations of Turbulent Flows with Strong Shocks and Density Variations: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanjiva Lele
2012-10-01
The target of this SciDAC Science Application was to develop a new capability based on high-order and high-resolution schemes to simulate shock-turbulence interactions and multi-material mixing in planar and spherical geometries, and to study Rayleigh-Taylor and Richtmyer-Meshkov turbulent mixing. These fundamental problems have direct application in high-speed engineering flows, such as inertial confinement fusion (ICF) capsule implosions and scramjet combustion, and also in the natural occurrence of supernova explosions. Another component of this project was the development of subgrid-scale (SGS) models for large-eddy simulations of flows involving shock-turbulence interaction and multi-material mixing, which were to be validated with the DNS databases generated during the program. The numerical codes developed are designed for massively parallel computer architectures, ensuring good scaling performance. Their algorithms were validated by means of a sequence of benchmark problems. The original multi-stage plan for this five-year project included the following milestones: 1) refinement of numerical algorithms for application to the shock-turbulence interaction problem and multi-material mixing (years 1-2); 2) direct numerical simulations (DNS) of canonical shock-turbulence interaction (years 2-3), targeted at improving our understanding of the physics behind the two combined phenomena and also at guiding the development of SGS models; 3) large-eddy simulations (LES) of shock-turbulence interaction (years 3-5), improving SGS models based on the DNS obtained in the previous phase; 4) DNS of planar/spherical RM multi-material mixing (years 3-5), also with the two-fold objective of gaining insight into the relevant physics of this instability and aiding in devising new modeling strategies for multi-material mixing; 5) LES of planar/spherical RM mixing (years 4-5), integrating the improved SGS and multi-material models developed in stages 3 and 5. This final report is outlined as follows. Section 2 presents an assessment of the numerical algorithms that are best suited for the numerical simulation of compressible flows involving turbulence and shock phenomena. Sections 3 and 4 deal with the canonical shock-turbulence interaction problem, from the DNS and LES perspectives, respectively. Section 5 considers the shock-turbulence interaction in spherical geometry, in particular the interaction of a converging shock with isotropic turbulence as well as the problem of the blast wave. Section 6 describes the study of shock-accelerated mixing through planar and spherical Richtmyer-Meshkov mixing as well as the shock-curtain interaction problem. In Section 7 we acknowledge the different interactions between Stanford and other institutions participating in this SciDAC project, as well as several external collaborations made possible through it. Section 8 presents a list of publications and presentations that have been generated during the course of this SciDAC project. Finally, Section 9 concludes this report with the list of personnel at Stanford University funded by this SciDAC project.
NASA Astrophysics Data System (ADS)
Bocian, M.; Brownjohn, J. M. W.; Racic, V.; Hester, D.; Quattrone, A.; Gilbert, L.; Beasley, R.
2018-05-01
Multi-scale and multi-object interaction phenomena can arise when a group of walking pedestrians crosses a structure capable of exhibiting dynamic response. This is because each pedestrian is an autonomous dynamic system capable of displaying intricate behaviour affected by social, psychological, biomechanical and environmental factors, including adaptations to the structural motion. Despite a wealth of mathematical models attempting to describe and simulate the coupled crowd-structure system, their applicability can generally be considered uncertain. This can be attributed to a number of assumptions made in their development and to the scarcity or unavailability of data suitable for their validation, in particular data associated with pedestrian-pedestrian and pedestrian-structure interaction. To alleviate this problem, data on the behaviour of individual pedestrians within groups of six walkers with different spatial arrangements were gathered simultaneously with data on the dynamic structural response of a footbridge, in a series of measurements utilising wireless motion monitors. Unlike in previous studies on the coordination of pedestrian behaviour, the collected data can serve as a proxy for the pedestrian vertical force, which is of critical importance from the point of view of structural stability. A bivariate analysis framework is proposed and applied to these data, encompassing the wavelet transform, synchronisation measures based on Shannon entropy, and circular statistics. A topological pedestrian map is contrived showing the strength and directionality of between-subject interactions. It is found that the coordination of pedestrians' vertical force depends on the spatial collocation within a group, but it is generally weak. The relationship between the bridge and pedestrian behaviour is also analysed, revealing a stronger propensity for pedestrians to coordinate their force with the structural motion rather than with each other.
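One widely used entropy-based synchronisation index between two signals (for example two pedestrians' vertical accelerations, or a pedestrian and the bridge deck) is sketched below using Hilbert-transform phases. It is only a simplified stand-in for the wavelet-based bivariate framework described above; signal definitions and the bin count are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def entropy_sync_index(x, y, n_bins=24):
    """Shannon-entropy synchronisation index of the 1:1 phase difference.

    Returns rho in [0, 1]: 0 for a uniform phase-difference distribution
    (no coordination), 1 for perfect phase locking.
    """
    phi_x = np.angle(hilbert(x - np.mean(x)))
    phi_y = np.angle(hilbert(y - np.mean(y)))
    dphi = np.mod(phi_x - phi_y, 2.0 * np.pi)

    counts, _ = np.histogram(dphi, bins=n_bins, range=(0.0, 2.0 * np.pi))
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))
    return (np.log(n_bins) - entropy) / np.log(n_bins)

# illustrative use with two noisy, partially coordinated "footfall" signals
t = np.linspace(0.0, 30.0, 3000)
a = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
b = np.sin(2 * np.pi * 2.0 * t + 0.8) + 0.3 * np.random.default_rng(1).normal(size=t.size)
print("sync index:", entropy_sync_index(a, b))
```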
Autonomous mobile robot research using the HERMIES-III robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Beckerman, M.; Spelt, P.F.
1989-01-01
This paper reports on the status and future directions in the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III comprises a battery-powered, omni-directional wheeled platform with a seven-degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner and a dual computer system containing up to 128 NCUBE nodes in hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focussing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations that necessarily arise from the set-up of an experimental program involving human-scale, multiple autonomous mobile robots. 10 refs., 3 figs.
Leng, Yonggang; Fan, Shengbo
2018-01-01
Mechanical fault diagnosis usually requires not only identification of the fault characteristic frequency, but also detection of its second and/or higher harmonics. However, it is difficult to detect a multi-frequency fault signal with existing Stochastic Resonance (SR) methods, because the characteristic frequency of the fault signal, as well as its second and higher harmonic frequencies, tend to be large parameters. To solve this problem, this paper proposes a multi-frequency signal detection method based on Frequency Exchange and Re-scaling Stochastic Resonance (FERSR). In the method, frequency exchange is implemented using a filtering technique and Single SideBand (SSB) modulation. This new method can overcome the limitation of the "sampling ratio", which is the ratio of the sampling frequency to the frequency of the target signal. It also ensures that the multi-frequency target signals can be processed to meet the small-parameter conditions. Simulation results demonstrate that the method shows good performance for detecting a multi-frequency signal with a low sampling ratio. Two practical cases are employed to further validate the effectiveness and applicability of this method. PMID:29693577
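For background, a minimal Euler-Maruyama simulation of the classical bistable SR system driven by a weak two-frequency signal in the small-parameter regime is sketched below. It illustrates the constraint that FERSR is designed to relax; the frequency-exchange (filtering plus SSB modulation) and re-scaling steps themselves are not reproduced, and all amplitudes, frequencies and noise levels are assumptions.

```python
import numpy as np

def bistable_sr(signal, dt, a=1.0, b=1.0, noise_intensity=0.4, seed=0):
    """Euler-Maruyama integration of  dx/dt = a*x - b*x**3 + s(t) + sqrt(2*D)*xi(t)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(signal.size)
    for k in range(1, signal.size):
        drift = a * x[k - 1] - b * x[k - 1] ** 3 + signal[k - 1]
        x[k] = x[k - 1] + drift * dt + np.sqrt(2.0 * noise_intensity * dt) * rng.normal()
    return x

# small-parameter regime: weak drive at f0 and its second harmonic, with f0 << 1
fs, f0 = 20.0, 0.05                        # samples per unit time, drive frequency
t = np.arange(0.0, 400.0, 1.0 / fs)
s = 0.25 * np.sin(2 * np.pi * f0 * t) + 0.12 * np.sin(2 * np.pi * 2 * f0 * t)
x = bistable_sr(s, dt=1.0 / fs)

# the periodic components emerge in the output spectrum at f0 and 2*f0
spectrum = np.abs(np.fft.rfft(x - x.mean())) / x.size
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
```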
NASA Astrophysics Data System (ADS)
Ijjas, Anna; Steinhardt, Paul J.
2015-10-01
We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Optimization of the coherence function estimation for multi-core central processing unit
NASA Astrophysics Data System (ADS)
Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.
2017-02-01
The paper considers the use of parallel processing on a multi-core central processing unit for optimization of the coherence function evaluation arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism of the constructed calculating functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
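For reference, the magnitude-squared coherence as typically estimated by Welch averaging is sketched below using SciPy for clarity; the paper's own optimized multi-core implementation is not reproduced, and the test signals are illustrative.

```python
import numpy as np
from scipy.signal import coherence

# two vibration-like signals sharing a 120 Hz component plus independent noise
fs = 5000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 120.0 * t)
x = common + 0.8 * rng.normal(size=t.size)
y = 0.7 * np.roll(common, 25) + 0.8 * rng.normal(size=t.size)   # delayed copy + noise

# magnitude-squared coherence C_xy(f) = |P_xy|^2 / (P_xx * P_yy), Welch-averaged
f, cxy = coherence(x, y, fs=fs, nperseg=1024)
print("peak coherence near 120 Hz:", cxy[np.argmin(np.abs(f - 120.0))])
```

The same estimate evaluated at a grid of trial delays is what underlies acoustic correlation methods for leak location, which is why an efficient multi-core implementation matters in practice.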
NASA Astrophysics Data System (ADS)
Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.
2011-06-01
We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets of interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios, including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show that our data fusion architecture maintains high sensitivity while suppressing most false alarms for a variety of environments and targets.
Applying multi-resolution numerical methods to geodynamics
NASA Astrophysics Data System (ADS)
Davies, David Rhodri
Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection-dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU-time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle.
The simulations presented are therefore at a vigor superior to the highest attained, to date, in 3-D spherical geometry, achieving Rayleigh numbers of order 10^9. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigor convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher vigors. However, such configurations have recently been observed. With hot-spots widely-regarded as the surface expression of deep mantle plumes, it is of great importance to ascertain whether or not these conclusions are valid at the dynamical regime of Earth's mantle. Results demonstrate that at these high vigors, steady plumes do arise. However, they do not dominate the planform as in lower vigor cases: they coexist with mobile and ephemeral plumes and display a range of characteristics, which are consistent with hot-spot observations on Earth. Those plumes that do remain steady alter in intensity throughout the simulation, strengthening and weakening over time. Such behavior is caused by an irregular supply of cold material to the core-mantle boundary region, suggesting that subducting slabs are partially responsible for episodic plume magmatism on Earth. With this in mind, the influence of the upper boundary condition upon the planform of mantle convection is further examined. With the modified code, the CPU-time needed to solve a given problem is reduced and, hence, several simulations can be run efficiently, allowing a relatively rapid parameter space mapping of various upper boundary conditions. Results, in accordance with the investigations on upwelling structures, demonstrate that the surface exerts a profound control upon internal dynamics, manifesting itself not only in convective structures, but also in thermal profiles, Nusselt numbers and velocity patterns. Since the majority of geodynamical simulations incorporate a surface condition that is not at all representative of Earth, this is a worrying, yet important conclusion. By failing to address the surface appropriately, geodynamical models, regardless of their sophistication, cannot be truly applicable to Earth. In summary, the techniques developed herein, in both 2- and 3-D, are extremely practical and highly efficient, yielding significant advantages for geodynamical simulations. Indeed, they allow one to solve problems that would otherwise be unfeasible.
NASA Astrophysics Data System (ADS)
Vo, Kiet T.; Sowmya, Arcot
A directional multi-scale modeling scheme based on wavelet and contourlet transforms is employed to describe HRCT lung image textures for classifying four diffuse lung disease patterns: normal, emphysema, ground glass opacity (GGO) and honeycombing. Generalized Gaussian density parameters are used to represent the detail sub-band features obtained by the wavelet and contourlet transforms. In addition, support vector machines (SVMs), which show excellent performance in a variety of pattern classification problems, are used as the classifier. The method is tested on a collection of 89 slices from 38 patients, each slice of size 512x512, 16 bits/pixel in DICOM format. The dataset contains 70,000 ROIs from those slices, marked by experienced radiologists. We employ this technique at different wavelet and contourlet transform scales for diffuse lung disease classification. The technique presented here achieves a best overall sensitivity of 93.40% and specificity of 98.40%.
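A rough sketch of this kind of feature-extraction and classification pipeline is given below. It is an assumed workflow, not the authors' code: pywt and scikit-learn stand in for whatever tooling was actually used, and simple sub-band statistics replace a full generalized Gaussian density fit.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def subband_features(roi, wavelet="db4", levels=3):
        # Multi-scale wavelet decomposition of one ROI; for each detail
        # sub-band keep simple statistics as a stand-in for GGD parameters.
        coeffs = pywt.wavedec2(roi, wavelet, level=levels)
        feats = []
        for detail in coeffs[1:]:               # skip the approximation band
            for band in detail:                 # horizontal, vertical, diagonal
                feats += [np.mean(np.abs(band)), np.std(band)]
        return np.array(feats)

    def train(rois, labels):
        # rois: list of 2-D arrays cut from HRCT slices; labels: 0-3 for the
        # four disease patterns (normal, emphysema, GGO, honeycombing).
        X = np.vstack([subband_features(r) for r in rois])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        return clf.fit(X, labels)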
Nonlinear and Stochastic Dynamics in the Heart
Qu, Zhilin; Hu, Gang; Garfinkel, Alan; Weiss, James N.
2014-01-01
In a normal human life span, the heart beats about 2 to 3 billion times. Under diseased conditions, a heart may lose its normal rhythm and degenerate suddenly into much faster and irregular rhythms, called arrhythmias, which may lead to sudden death. The transition from a normal rhythm to an arrhythmia is a transition from regular electrical wave conduction to irregular or turbulent wave conduction in the heart, and thus this medical problem is also a problem of physics and mathematics. In the last century, clinical, experimental, and theoretical studies have shown that dynamical theories play fundamental roles in understanding the mechanisms of the genesis of the normal heart rhythm as well as lethal arrhythmias. In this article, we summarize in detail the nonlinear and stochastic dynamics occurring in the heart and their links to normal cardiac functions and arrhythmias, providing a holistic view through integrating dynamics from the molecular (microscopic) scale, to the organelle (mesoscopic) scale, to the cellular, tissue, and organ (macroscopic) scales. We discuss what existing problems and challenges are waiting to be solved and how multi-scale mathematical modeling and nonlinear dynamics may be helpful for solving these problems. PMID:25267872
NASA Astrophysics Data System (ADS)
Price, D. J.; Laibe, G.
2015-10-01
Dust-gas mixtures are the simplest example of a two-fluid mixture. We show that when simulating such mixtures with particles, or with particles coupled to grids, a problem arises from the need to resolve a very small length scale when the coupling is strong. Since this occurs in the limit where the fluids are well coupled, we show how the dust-gas equations can be reformulated to describe a single-fluid mixture. The equations are similar to the usual fluid equations, supplemented by a diffusion equation for the dust-to-gas ratio or, alternatively, the dust fraction. This solves a number of numerical problems as well as making the physics clear.
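For orientation, in the strong-drag (terminal velocity) limit the single-fluid reformulation leads, schematically, to an evolution equation for the dust fraction \epsilon = \rho_{\mathrm d}/\rho of the form

    \frac{\mathrm{d}\epsilon}{\mathrm{d}t} \approx -\frac{1}{\rho}\,\nabla\cdot\left(\epsilon\, t_{\mathrm s}\,\nabla P\right),

where t_s is the stopping time and P the gas pressure. This expression is a sketch for orientation only; the precise equations, including the general case away from the terminal velocity approximation, are those derived in the paper.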
Fully implicit adaptive mesh refinement solver for 2D MHD
NASA Astrophysics Data System (ADS)
Philip, B.; Chacon, L.; Pernice, M.
2008-11-01
Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
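To illustrate the Jacobian-free Newton-Krylov building block in isolation (not the resistive-MHD system, the AMR machinery, or the physics-based preconditioner of the paper), a minimal sketch using scipy's newton_krylov for one backward-Euler step of a scalar reaction-diffusion equation follows; grid size, time step and the model equation are arbitrary choices.

    import numpy as np
    from scipy.optimize import newton_krylov

    # One implicit time step: solve F(u_new) = 0, where F encodes backward
    # Euler for u_t = u_xx + u(1 - u) on a periodic 1-D grid.
    N = 128
    dx = 1.0 / N
    dt = 1.0e-3
    u_old = 0.5 + 0.1 * np.sin(2 * np.pi * np.arange(N) * dx)

    def residual(u):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        return u - u_old - dt * (lap + u * (1.0 - u))

    # newton_krylov approximates Jacobian-vector products by finite
    # differences of `residual`, so the Jacobian is never formed explicitly.
    u_new = newton_krylov(residual, u_old, method="lgmres", f_tol=1e-8)
    print(np.abs(residual(u_new)).max())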
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, resistance to trapping in false minima, and long-term optimization.
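A heavily simplified sketch of the product-distribution idea is given below: each agent holds an independent distribution over its own coordinate and re-weights it with a Boltzmann factor of an estimated cost. The update rule, sampling scheme and annealing schedule are illustrative simplifications, not the exact Probability Collectives functional of the paper.

    import numpy as np

    def pc_minimize(f, grids, iters=200, samples=64, T=1.0, cool=0.98):
        # One independent distribution p_i per agent/coordinate.
        ps = [np.full(len(g), 1.0 / len(g)) for g in grids]
        for _ in range(iters):
            # Draw joint samples from the product distribution.
            idx = np.array([np.random.choice(len(g), size=samples, p=p)
                            for g, p in zip(grids, ps)])
            costs = np.array([f([g[idx[i, s]] for i, g in enumerate(grids)])
                              for s in range(samples)])
            for i, (g, p) in enumerate(zip(grids, ps)):
                # Estimate the expected cost of each choice of agent i,
                # then re-weight with a Boltzmann factor at temperature T.
                est = np.full(len(g), costs.mean())
                for k in range(len(g)):
                    mask = idx[i] == k
                    if mask.any():
                        est[k] = costs[mask].mean()
                w = p * np.exp(-(est - est.min()) / T)
                ps[i] = w / w.sum()
            T *= cool
        return [g[p.argmax()] for g, p in zip(grids, ps)]

    best = pc_minimize(lambda x: sum(v * v for v in x),
                       [np.linspace(-5, 5, 41)] * 3)
    print(best)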
Moss, Becky; Roberts, Celia
2005-08-01
The gap is widening between understanding the subtle ways patients and GPs manage their talk, and superficial discussion of the 'language barrier' among linguistic minority patients. All patients have to explain themselves, not just those for whom English is their first or main language. Patients' explanations reflect how they want the doctor to perceive them as a patient and as a person: they reveal patients' identities. Yet interpretations are not easy when patients' style of talking English is influenced by their first language and cultural background. To explore in detail how patients with limited English and GPs jointly overcome misunderstandings in explanations. Using discourse analysis and conversation analysis, we examine how GPs and their patients with limited English negotiate explanations and collaborate to manage, repair or prevent understanding problems. Thirty-one per cent of patients said English was not their first language. Misunderstandings arise owing to a range of linguistic and cultural factors, including stress and intonation patterns, vocabulary, the way a patient sequences their narrative, and patient and GP pursuing different agendas. When talk itself is the problem, patients' explanations can lead to misunderstandings, which GPs have to repair if they cannot prevent them. Careful interpretation by skillful GPs can reveal patients' knowledge, experience and perspective.
About the bears and the bees: Adaptive responses to asymmetric warfare
NASA Astrophysics Data System (ADS)
Ryan, Alex
Conventional military forces are organised to generate large scale effects against similarly structured adversaries. Asymmetric warfare is a 'game' between a conventional military force and a weaker adversary that is unable to match the scale of effects of the conventional force. In asymmetric warfare, an insurgent's strategy can be understood using a multi-scale perspective: by generating and exploiting fine scale complexity, insurgents prevent the conventional force from acting at the scale it is designed for. This paper presents a complex systems approach to the problem of asymmetric warfare, which shows how future force structures can be designed to adapt to environmental complexity at multiple scales and achieve full spectrum dominance.
NASA Astrophysics Data System (ADS)
Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio
2010-11-01
The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well-known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, and we show how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar onto the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (L_Vel/L_Tracer ≡ k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k·Pe is derived for the short scale periodic structure of the scalar's initial data, for both exactly solvable cases and in general with WKBJ analysis. In particular, the exactly solvable sawtooth flow is especially important in that it provides a short cut to the exact solution to the eigenvalue problem for the physically relevant vanishing Neumann boundary conditions in linear-shear channel flow. We show that the life of the corresponding modes at large Pe for this case is shorter than the ones arising from shear free zones in the fluid's interior. A WKBJ study of the latter modes provides a longer intermediate time evolution. This part of the analysis is technical, as the corresponding spectrum is dominated by asymptotically coalescing turning points in the limit of large Pe numbers. When large scale initial data components are present, the transient regime of the WKBJ (anomalous) modes evolves into one governed by Taylor dispersion. This is studied by a regular perturbation expansion of the spectrum in the small wavenumber regimes.
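For reference, the long-time Taylor regime mentioned above corresponds, for a tube of radius a with mean flow speed U and molecular diffusivity \kappa, to an effective axial diffusivity

    \kappa_{\mathrm{eff}} = \kappa + \frac{a^{2} U^{2}}{48\,\kappa},

the classical Taylor-Aris result (the numerical constant differs for plane channel flow). The intermediate-time transients analysed in the paper are departures from this asymptotic behaviour; the formula is included only as orientation and is not taken from the paper itself.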
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
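Schematically, replacing the single diffusivity d of the ball and stick model by a Gamma distribution with shape \alpha and rate \beta turns each mono-exponential attenuation term into its Laplace transform,

    \int_{0}^{\infty} \mathrm{Gamma}(d;\alpha,\beta)\, e^{-b d}\, \mathrm{d}d = \left(\frac{\beta}{\beta + b}\right)^{\alpha},

so the isotropic (ball) term e^{-bd} becomes (\beta/(\beta+b))^{\alpha} and each stick term e^{-bd(\mathbf{g}\cdot\mathbf{v}_i)^{2}} becomes (\beta/(\beta + b(\mathbf{g}\cdot\mathbf{v}_i)^{2}))^{\alpha}. This is the generic form of such an extension, written out here for orientation; the exact parameterization used by the authors may differ in detail.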
NASA Astrophysics Data System (ADS)
Singh, Sunny; Kaur, Harsimran; Sharma, Shivalika; Aggarwal, Priyanka; Hazra, Ram Kuntal
2017-04-01
The understanding of the physics of excitons, bi-excitons and tri-excitons, and the subsequent insight into controlling the properties of mesoscopic systems, holds the key to various exotic optical, electrical and magnetic phenomena such as superconductivity, Mott insulation, the Quantum Hall effect, etc. Many exciton properties are similar to those of atomic hydrogen, which attracts researchers to explore the electronic structure of excitons in quantum dots, but nontriviality arises due to Coulombic interactions among electrons and holes. We propose an exact integral of the Coulomb (exchange) correlation in terms of finitely summed Lauricella functions to examine 3-D excitons in harmonic dots confined in zero and arbitrary non-zero magnetic fields. The highlight of our work is the use of an exact variational solution for the Coulombic interaction between the hole and the electron, and the evaluation of the cross terms arising out of the coupling between centre-of-mass and relative coordinates. We have also extended the size of the system to the generalized N-body problem with N=3,4 for tri-excitons (e-e-h/e-h-h).
A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu
Today, the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Reverse logistics is therefore gaining power and great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. For solving this problem, we then propose a Genetic Algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of the m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.
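To make the encoding idea concrete, the sketch below decodes a priority vector into a transportation plan for a single stage, in the spirit of priority-based representations commonly used with genetic algorithms for logistics networks. It is a generic illustration under assumed conventions, not the paper's exact decoding procedure or its WMX operator.

    import numpy as np

    def decode_priority(priority, supply, demand, cost):
        # priority: length (m + n) vector, one entry per source and per sink
        # (all inputs are numpy arrays). Highest-priority node is served
        # first; flow goes along the cheapest feasible arc.
        m, n = len(supply), len(demand)
        supply, demand, prio = supply.copy(), demand.copy(), priority.copy()
        plan = np.zeros((m, n))
        while supply.sum() > 1e-9 and demand.sum() > 1e-9:
            k = int(np.argmax(prio))
            if k < m:                      # highest priority is a source
                i = k
                js = np.where(demand > 1e-9)[0]
                j = js[np.argmin(cost[i, js])]
            else:                          # highest priority is a sink
                j = k - m
                is_ = np.where(supply > 1e-9)[0]
                i = is_[np.argmin(cost[is_, j])]
            q = min(supply[i], demand[j])
            plan[i, j] += q
            supply[i] -= q
            demand[j] -= q
            if supply[i] <= 1e-9:          # exhausted nodes drop out
                prio[i] = -np.inf
            if demand[j] <= 1e-9:
                prio[m + j] = -np.inf
        return plan

    supply = np.array([20.0, 30.0])
    demand = np.array([10.0, 25.0, 15.0])
    cost = np.array([[4.0, 6.0, 9.0],
                     [5.0, 3.0, 7.0]])
    priority = np.random.rand(len(supply) + len(demand))   # one GA chromosome
    print(decode_priority(priority, supply, demand, cost))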
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated with the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electro-magnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver for a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
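As a toy illustration of the ingredients named above (overlapping subdomain solves, a coarse solve, and the conjugate gradient method), the following sketch preconditions a 1-D Poisson problem with a one-level overlapping additive Schwarz method plus a piecewise-constant coarse space. The sizes, overlap and coarse space are arbitrary choices, and the sketch is not code from the project.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, nsub, overlap = 1024, 16, 8
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    size = n // nsub
    subdomains = [np.arange(max(0, i * size - overlap),
                            min(n, (i + 1) * size + overlap))
                  for i in range(nsub)]
    local_solvers = [spla.splu(A[np.ix_(d, d)].tocsc()) for d in subdomains]

    # Piecewise-constant coarse space: one basis function per subdomain.
    R0 = sp.lil_matrix((nsub, n))
    for i in range(nsub):
        R0[i, i * size:(i + 1) * size] = 1.0
    R0 = R0.tocsr()
    A0 = spla.splu((R0 @ A @ R0.T).tocsc())

    def precond(r):
        z = R0.T @ A0.solve(R0 @ r)          # coarse correction
        for d, lu in zip(subdomains, local_solvers):
            z[d] += lu.solve(r[d])           # local subdomain solves
        return z

    M = spla.LinearOperator((n, n), matvec=precond)
    x, info = spla.cg(A, b, M=M)
    print(info, np.abs(A @ x - b).max())

The coarse solve is what keeps the iteration count bounded as the number of subdomains (and hence processors) grows, which is the property emphasized in the report.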
Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.
2015-12-01
Bush encroachment describes a global problem which especially affects the savanna ecosystem in Africa. Livestock is directly affected by decreasing grasslands and the inedible invasive species that define the process of bush encroachment. For many small-scale farmers in developing countries, livestock represents a type of insurance in times of crop failure or drought. Beyond that, bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment have so far focused on small scales using high-resolution data and rarely provide information beyond the national level. Therefore, a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information on different levels of remote sensing imagery - 30 m with Landsat images and 250 m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to map bush encroachment at the regional scale. A classification approach links location data based on GPS information from experts to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment information serving as the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working on multiple scales and with the help of field data, an innovative approach can be presented showing areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and identify those regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
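A minimal sketch of the supervised classification step follows. It is illustrative only: random arrays stand in for the real per-pixel features (e.g. multi-temporal spectral statistics from Landsat/MODIS) and for the labels derived from GPS-referenced expert observations, none of which are reproduced here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 12))     # (pixels, spectral/temporal features)
    y_train = rng.integers(0, 2, size=5000)   # 1 = bush encroachment, 0 = other

    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    clf.fit(X_train, y_train)

    X_new = rng.normal(size=(1000, 12))       # pixels to be mapped at scale
    encroachment_map = clf.predict(X_new)     # up-scaled encroachment labels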
Autonomous quantum to classical transitions and the generalized imaging theorem
NASA Astrophysics Data System (ADS)
Briggs, John S.; Feagin, James M.
2016-03-01
The mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. Practically it is of importance for the interpretation of multi-particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. Here we prove the generalized imaging theorem which shows that the spatial wave function of any multi-particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. Currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. The imaging theorem arises from unitary Schrödinger propagation and so is valid without any environmental interaction. It implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
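As a point of orientation, the familiar free-particle special case of such a result reads

    \Psi(\mathbf{r},t) \;\longrightarrow\; \left(\frac{m}{i\hbar t}\right)^{3/2} \exp\!\left(\frac{i m r^{2}}{2\hbar t}\right)\, \tilde{\Psi}_{0}\!\left(\frac{m\mathbf{r}}{\hbar t}\right) \quad (t\ \text{large}),

so that |\Psi(\mathbf{r},t)|^{2} \propto |\tilde{\Psi}_{0}(m\mathbf{r}/\hbar t)|^{2}: the position distribution images the initial momentum distribution along the classical trajectories \mathbf{r} = \mathbf{p}t/m. The generalized theorem proved in the paper extends this to deterministic external fields and particle interactions; the free-particle formula above is included only as a reminder and is not quoted from the paper.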
An instant multi-responsive porous polymer actuator driven by solvent molecule sorption.
Zhao, Qiang; Dunlop, John W C; Qiu, Xunlin; Huang, Feihe; Zhang, Zibin; Heyda, Jan; Dzubiella, Joachim; Antonietti, Markus; Yuan, Jiayin
2014-07-01
Fast actuation speed, large shape deformation and robust responsiveness are critical to synthetic soft actuators. A simultaneous optimization of all these aspects without trade-offs remains unresolved. Here we describe porous polymer actuators that bend in response to acetone vapour (24 kPa, 20 °C) at a speed an order of magnitude faster than the state of the art, coupled with large-scale locomotion. They are meanwhile multi-responsive towards a variety of organic vapours in both the dry and wet states, thus distinctive from the traditional gel actuation systems that become inactive when dried. The actuator is easy to make and survives even after hydrothermal processing (200 °C, 24 h) and pressing-pressure (100 MPa) treatments. In addition, the beneficial responsiveness is transferable, being able to turn 'inert' objects into actuators through surface coating. This advanced actuator arises from the unique combination of porous morphology, gradient structure and the interaction between solvent molecules and actuator materials.
2016-01-01
The Cancer Target Discovery and Development (CTD2) Network was established to accelerate the transformation of “Big Data” into novel pharmacological targets, lead compounds, and biomarkers for rapid translation into improved patient outcomes. It rapidly became clear in this collaborative network that a key central issue was to define what constitutes sufficient computational or experimental evidence to support a biologically or clinically relevant finding. This manuscript represents a first attempt to delineate the challenges of supporting and confirming discoveries arising from the systematic analysis of large-scale data resources in a collaborative work environment and to provide a framework that would begin a community discussion to resolve these challenges. The Network implemented a multi-Tier framework designed to substantiate the biological and biomedical relevance as well as the reproducibility of data and insights resulting from its collaborative activities. The same approach can be used by the broad scientific community to drive development of novel therapeutic and biomarker strategies for cancer. PMID:27401613
Linking Executive Function and Peer Problems from Early Childhood Through Middle Adolescence.
Holmes, Christopher J; Kim-Spoon, Jungmeen; Deater-Deckard, Kirby
2016-01-01
Peer interactions and executive function play central roles in the development of healthy children, as peer problems have been indicative of lower cognitive competencies such as self-regulatory behavior, and poor executive function has been indicative of problem behaviors and social dysfunction. However, few studies have focused on the relation between peer interactions and executive function and the underlying mechanisms that may create this link. Using a national sample (n = 1164, 48.6% female) from the Study of Early Child Care and Youth Development (SECCYD), we analyzed executive function and peer problems (including victimization and rejection) across three waves within each domain (executive function or peer problems), beginning in early childhood and ending in middle adolescence. Executive function was measured as a multi-method, multi-informant composite including reports from parents on the Children's Behavior Questionnaire and Child Behavior Checklist and the child's performance on behavioral tasks including the Continuous Performance Task, Woodcock-Johnson, Tower of Hanoi, Operation Span Task, Stroop, and Tower of London. Peer problems were measured as a multi-informant composite including self, teacher, and afterschool caregiver reports on multiple peer-relationship scales. Using a cross-lagged design, our Structural Equation Modeling findings suggested that experiencing peer problems contributed to lower executive function later in childhood and better executive function reduced the likelihood of experiencing peer problems later in childhood and middle adolescence, although these relations weakened as children moved into adolescence. The results highlight that peer relationships are involved in the development of strengths and deficits in executive function and vice versa.
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information were combined with local flow data in a catchment-scale conceptual model application, i.e. using small-scale physical properties, regionalised signatures of flow and available flow measurements.
NASA Astrophysics Data System (ADS)
Kobylkin, Konstantin
2016-10-01
Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with the smallest cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Despite their interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Everaers, Ralf; Rosa, Angelo
2012-01-07
The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.
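For orientation (not a result of the paper), the longest Rouse relaxation time of a discretized chain of N beads with monomer friction \zeta, bond length b and temperature T scales as

    \tau_{1} \simeq \frac{\zeta N^{2} b^{2}}{3\pi^{2} k_{\mathrm{B}} T},

and diffusion-controlled (Wilemski-Fixman type) cyclization times are typically discussed relative to this time scale. The point of the paper is that the reactivity parameters entering such estimates can be renormalised consistently when the same chain is described at different resolutions.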
Principles and practical implementation for high resolution multi-sensor QPE
NASA Astrophysics Data System (ADS)
Chandra, C. V.; Lim, S.; Cifelli, R.
2011-12-01
Multi-sensor Quantitative Precipitation Estimation (MPE) is both a principle and a practical concept, and is becoming a well-known term in the scientific circles of hydrology and atmospheric science. The main challenge in QPE is that precipitation is a highly variable quantity with extensive spatial and temporal variability at multiple scales. There are MPE products produced from satellites, radars, models and ground sensors. There are MPE products at global scale (Heinemann et al. 2002), continental scale (Seo et al. 2010; Zhang et al. 2011) and regional scale (Kitzmiller et al. 2011). Many of the MPE products are used to alleviate the problems of one type of sensor with another. Some multi-sensor products are used to move across scales. This paper takes a comprehensive view of the "concept of multi-sensor precipitation estimation" from different perspectives. It delineates the MPE problem into three categories, namely a) scale-based MPE, b) MPE for accuracy enhancement and coverage, and c) MPE integrative across scales. For example, by introducing dual polarization radar data to the MPE system, QPE can be improved significantly. In the last decade, dual polarization radars have become an important tool for QPE in operational networks. Dual polarization radars offer an advantage in building more accurate physical models by providing information on the size, shape, phase and orientation of hydrometeors (Bringi and Chandrasekar 2001). In addition, these systems have the ability to provide measurements that are immune to absolute radar calibration and partial beam blockage, as well as help in data quality enhancement. By integrating these characteristics of dual polarization radar, QPE performance can be improved in comparison with single-polarization-radar-based QPE (Cifelli and Chandrasekar 2010). Dual-polarization techniques have been applied to S and C band radar systems for several decades, and higher frequency systems such as X band are now widely available to the radar community. One solution to the dilemma of precipitation variability across scales can be to supplement existing long-range radar networks with short-range higher frequency systems (X band). The smaller X band systems provide more portability and higher data resolution, and networks of these systems may be a cost-effective option for improved rainfall estimation for radar networks with large separation distances (McLaughlin et al. 2009). This paper will describe the principles of the MPE concept and implementation issues within the context of the classification described above.
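For readers unfamiliar with why dual polarization helps QPE, rainfall estimators of the general forms

    R(K_{dp}) = a\,K_{dp}^{\,b}, \qquad R(Z_{h}, Z_{dr}) = c\,Z_{h}^{\,\alpha}\,Z_{dr}^{\,\beta},

are typical, where K_dp is the specific differential phase (deg km^-1), Z_h the horizontal reflectivity and Z_dr the differential reflectivity. The coefficients depend on radar wavelength and assumed drop-size distributions, so no specific values are implied here; K_dp-based estimators are attractive precisely because, as noted above, they are immune to absolute calibration errors and partial beam blockage.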
Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches
NASA Astrophysics Data System (ADS)
Egging, Rudolf Gerardus
This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum (www.gecforum.org), a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
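As a reminder of the "FFT-based fast Poisson solver" building block mentioned above, a minimal periodic spectral solver on a regular Cartesian grid is sketched below; the CFM/BIM coupling, interface jumps and boundary treatment of the paper are not reproduced.

    import numpy as np

    def solve_poisson_periodic(f, L=1.0):
        # Spectral solution of u_xx + u_yy = f on a periodic [0, L)^2 grid.
        n = f.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        f_hat = np.fft.fft2(f)
        u_hat = np.zeros_like(f_hat)
        mask = k2 > 0
        u_hat[mask] = -f_hat[mask] / k2[mask]   # -(kx^2 + ky^2) u_hat = f_hat
        return np.real(np.fft.ifft2(u_hat))     # zero-mean solution

    # Check against a manufactured solution u = sin(2*pi*x) cos(4*pi*y).
    n = 128
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u_exact = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
    f = -(4 * np.pi**2 + 16 * np.pi**2) * u_exact   # Laplacian of u_exact
    print(np.abs(solve_poisson_periodic(f) - u_exact).max())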
Ethnicity and American Group Life. A Bibliography.
ERIC Educational Resources Information Center
Weed, Perry L., Comp.
This bibliography grew out of a broad scale effort by the American Jewish Committee, especially its National Project on Ethnic America, to focus new attention on the positive aspects of multi-ethnicity in our society, and also to encourage deeper study and programming for solving the problems of polarization, fragmentation, and white ethnic…
FAST: A multi-processed environment for visualization of computational fluid dynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin
1991-01-01
Three-dimensional, unsteady, multi-zoned fluid dynamics simulations over full scale aircraft are typical of the problems being investigated at NASA Ames' Numerical Aerodynamic Simulation (NAS) facility on CRAY2 and CRAY-YMP supercomputers. With multiple processor workstations available in the 10-30 Mflop range, we feel that these new developments in scientific computing warrant a new approach to the design and implementation of analysis tools. These larger, more complex problems create a need for new visualization techniques not possible with the existing software or systems available as of this writing. The visualization techniques will change as the supercomputing environment, and hence the scientific methods employed, evolves even further. The Flow Analysis Software Toolkit (FAST), an implementation of a software system for fluid mechanics analysis, is discussed.
Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1999-01-01
The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.
The design and implementation of hydrographical information management system (HIMS)
NASA Astrophysics Data System (ADS)
Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming
2005-10-01
With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents and other materials, is widely used, and the traditional management mode and techniques are unsuitable for the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. A number of advanced techniques, including GIS, RS, spatial database management and VR techniques, are introduced to solve these problems. Some design principles and key techniques of the HIMS, including the mixed mode based on B/S, C/S and stand-alone computer modes, multi-source and multi-scale data organization and management, multi-source data integration, diverse visualization of digital charts, and efficient security control strategies, are illustrated in detail. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has obtained good evaluations.
Trajectory optimization for lunar soft landing with complex constraints
NASA Astrophysics Data System (ADS)
Chu, Huiping; Ma, Lin; Wang, Kexin; Shao, Zhijiang; Song, Zhengyu
2017-11-01
A unified trajectory optimization framework with initialization strategies is proposed in this paper for lunar soft landing for various missions with specific requirements. Two main missions of interest are Apollo-like Landing from low lunar orbit and Vertical Takeoff Vertical Landing (a promising mobility method) on the lunar surface. The trajectory optimization is characterized by difficulties arising from discontinuous thrust, multi-phase connections, jumps in attitude angle, and obstacle avoidance. Here, an R-function is applied to deal with the discontinuities of thrust, checkpoint constraints are introduced to connect multiple landing phases, the attitude angular rate is designed to avoid radical changes, and safeguards are imposed to avoid collision with obstacles. The resulting dynamic optimization problems generally involve complex constraints. The unified framework, based on the Gauss Pseudospectral Method (GPM) and a Nonlinear Programming (NLP) solver, is designed to solve the problems efficiently. Advanced initialization strategies are developed to enhance both the convergence and the computational efficiency. Numerical results demonstrate the adaptability of the framework for various landing missions and its ability to successfully solve difficult dynamic problems.
P-Hint-Hunt: a deep parallelized whole genome DNA methylation detection tool.
Peng, Shaoliang; Yang, Shunyun; Gao, Ming; Liao, Xiangke; Liu, Jie; Yang, Canqun; Wu, Chengkun; Yu, Wenqiang
2017-03-14
An increasing number of studies have used whole genome DNA methylation detection, one of the most important parts of epigenetics research, to find significant relationships between DNA methylation and several typical diseases, such as cancers and diabetes. In many of those studies, mapping bisulfite-treated sequences to the whole genome has been the main method to study DNA cytosine methylation. However, today's related tools often suffer from inaccuracy and long running times. In our study, we designed a new DNA methylation prediction tool ("Hint-Hunt") to solve this problem. By having an optimal complex alignment computation and Smith-Waterman matrix dynamic programming, Hint-Hunt can analyze and predict DNA methylation status. But when Hint-Hunt tries to predict DNA methylation status on large-scale datasets, slow speed and low temporal-spatial efficiency remain problems. In order to solve the problems of Smith-Waterman dynamic programming and low temporal-spatial efficiency, we further designed a deeply parallelized whole genome DNA methylation detection tool ("P-Hint-Hunt") for the Tianhe-2 (TH-2) supercomputer. To the best of our knowledge, P-Hint-Hunt is the first parallel DNA methylation detection tool with a high speed-up able to process large-scale datasets, and it can run both on CPUs and Intel Xeon Phi coprocessors. Moreover, we deploy and evaluate Hint-Hunt and P-Hint-Hunt on the TH-2 supercomputer at different scales. The experimental results show that our tools eliminate the deviation caused by bisulfite treatment in the mapping procedure and that the multi-level parallel program yields a 48-fold speed-up with 64 threads. P-Hint-Hunt gains a deep acceleration on the CPU and Intel Xeon Phi heterogeneous platform, which gives full play to the advantages of multi-core (CPU) and many-core (Phi) processors.
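For readers unfamiliar with the dynamic-programming kernel being parallelized, a textbook Smith-Waterman local alignment score computation is sketched below; the scoring values are arbitrary, and bisulfite-aware matching (e.g. treating T as a possible converted C) is not shown.

    import numpy as np

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        # H[i, j] is the best local alignment score ending at a[i-1], b[j-1].
        H = np.zeros((len(a) + 1, len(b) + 1))
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i, j] = max(0,
                              H[i - 1, j - 1] + s,   # (mis)match
                              H[i - 1, j] + gap,     # gap in b
                              H[i, j - 1] + gap)     # gap in a
        return H.max()                               # best local score

    print(smith_waterman("ACACACTA", "AGCACACA"))

The anti-diagonals of H are independent given the previous two, which is the property parallel implementations typically exploit.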
NASA Astrophysics Data System (ADS)
Haavisto, Sanna; Cardona, Maria J.; Salmela, Juha; Powell, Robert L.; McCarthy, Michael J.; Kataja, Markku; Koponen, Antti I.
2017-11-01
A hybrid multi-scale velocimetry method utilizing Doppler optical coherence tomography in combination with either magnetic resonance imaging or ultrasound velocity profiling is used to investigate pipe flow of four rheologically different working fluids under varying flow regimes. These fluids include water, an aqueous xanthan gum solution, a softwood fiber suspension, and a microfibrillated cellulose suspension. The measurement setup not only enables analysis of the rheological (bulk) behavior of a studied fluid but also simultaneously provides information on its wall-layer dynamics, both of which are needed for analyzing and solving practical fluid flow-related problems. Preliminary novel results on rheological and boundary layer flow properties of the working fluids are reported and the potential of the hybrid measurement setup is demonstrated.
Multi-time scale control of demand flexibility in smart distribution networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattarai, Bishnu; Myers, Kurt; Bak-Jensen, Birgitte
This study presents a multi-timescale control strategy to deploy the demand flexibility of electric vehicles (EVs) for providing system balancing and local congestion management while simultaneously ensuring economic benefits to the participating actors. First, the EV charging problem is investigated from the consumer, aggregator, and grid operator perspectives. A hierarchical control architecture (HCA) comprising scheduling, coordinative, and adaptive layers is then designed to realize their coordinated goal. This is achieved by integrating a multi-time scale control that works from day-ahead scheduling down to real-time adaptive control. The performance of the developed method is investigated with high EV penetration in a typical distribution network. The simulation results demonstrate that HCA exploits EV flexibility to resolve grid unbalance and congestion while simultaneously maximizing economic benefits by ensuring EV participation in the day-ahead, balancing, and regulation markets. For the given network configuration and pricing structure, HCA enables EV owners to be paid up to 5 times the cost they would pay without control.
Multi-time scale control of demand flexibility in smart distribution networks
Bhattarai, Bishnu; Myers, Kurt; Bak-Jensen, Birgitte; ...
2017-01-01
This study presents a multi-timescale control strategy to deploy the demand flexibility of electric vehicles (EVs) for providing system balancing and local congestion management while simultaneously ensuring economic benefits to the participating actors. First, the EV charging problem is investigated from the consumer, aggregator, and grid operator perspectives. A hierarchical control architecture (HCA) comprising scheduling, coordinative, and adaptive layers is then designed to realize their coordinated goal. This is achieved by integrating a multi-time scale control that works from day-ahead scheduling down to real-time adaptive control. The performance of the developed method is investigated with high EV penetration in a typical distribution network. The simulation results demonstrate that HCA exploits EV flexibility to resolve grid unbalance and congestion while simultaneously maximizing economic benefits by ensuring EV participation in the day-ahead, balancing, and regulation markets. For the given network configuration and pricing structure, HCA enables EV owners to be paid up to 5 times the cost they would pay without control.
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
Distributed-Memory Fast Maximal Independent Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew
The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
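For background, a serial sketch of one common statement of Luby's randomized MIS round (close to what is usually labelled Luby(A)) follows; the distributed-memory extensions and the Filtered MIS comparison are not represented, and the example graph is invented.

```python
import random

def luby_mis(adj):
    """Luby-style randomized Maximal Independent Set on an undirected graph
    given as {vertex: set(neighbours)}.  Serial sketch of the round structure
    that the paper maps onto distributed memory; not their implementation."""
    remaining = set(adj)
    mis = set()
    while remaining:
        r = {v: random.random() for v in remaining}
        # a vertex enters the MIS if it beats every remaining neighbour
        winners = {v for v in remaining
                   if all(r[v] < r[u] for u in adj[v] if u in remaining)}
        mis |= winners
        # remove winners and their neighbourhoods from the residual graph
        removed = set(winners)
        for v in winners:
            removed |= adj[v]
        remaining -= removed
    return mis

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(luby_mis(graph))
```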
Coal conversion: description of technologies and necessary biomedical and environmental research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-08-01
This document contains a description of the biomedical and environmental research necessary to ensure the timely attainment of coal conversion technologies amenable to man and his environment. The document is divided into three sections. The first deals with the types of processes currently being considered for development; the data currently available on the composition of product, process, and product streams, and their potential effects; and problems that might arise from transportation and use of products. Section II is concerned with a description of the necessary research in each of the King-Muir categories, while the third section presents the research strategies necessary to assess the potential problems at the conversion plant (site specific) and those problems that might affect the general public and environment as a result of the operation of large-scale coal conversion plants.
Balanced program plan. Volume IV. Coal conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, C. R.; Reichle, D. E.; Gehrs, C. W.
1976-05-01
This document contains a description of the biomedical and environmental research necessary to ensure the timely attainment of coal conversion technologies amenable to man and his environment. The document is divided into three sections. The first deals with the types of processes currently being considered for development; the data currently available on the composition of product, process, and product streams, and their potential effects; and problems that might arise from transportation and use of products. Section II is concerned with a description of the necessary research in each of the King-Muir categories, while the third section presents the research strategies necessary to assess the potential problems at the conversion plant (site specific) and those problems that might affect the general public and environment as a result of the operation of large-scale coal conversion plants.
Balanced program plan. Volume 4. Coal conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-05-01
This document contains a description of the biomedical and environmental research necessary to ensure the timely attainment of coal conversion technologies amenable to man and his environment. The document is divided into three sections. The first deals with the types of processes currently being considered for development; the data currently available on the composition of product, process, and product streams, and their potential effects; and problems that might arise from transportation and use of products. Section II is concerned with a description of the necessary research in each of the King-Muir categories, while the third section presents the research strategies necessary to assess the potential problems at the conversion plant (site specific) and those problems that might affect the general public and environment as a result of the operation of large-scale coal conversion plants.
NASA Astrophysics Data System (ADS)
Harvey, J. W.; Packman, A. I.
2010-12-01
Surface water and groundwater flow interact with the channel geomorphology and sediments in ways that determine how material is transported, stored, and transformed in stream corridors. Solute and sediment transport affect important ecological processes such as carbon and nutrient dynamics and stream metabolism, processes that are fundamental to stream health and function. Many individual mechanisms of transport and storage of solute and sediment have been studied, including surface water exchange between the main channel and side pools, hyporheic flow through shallow and deep subsurface flow paths, and sediment transport during both baseflow and floods. A significant challenge arises from non-linear and scale-dependent transport resulting from natural, fractal fluvial topography and associated broad, multi-scale hydrologic interactions. Connections between processes and linkages across scales are not well understood, imposing significant limitations on system predictability. The whole-stream tracer experimental approach is popular because of the spatial averaging of heterogeneous processes; however, the tracer results, implemented alone and analyzed using typical models, cannot usually predict transport beyond the very specific conditions of the experiment. Furthermore, the results of whole stream tracer experiments tend to be biased due to unavoidable limitations associated with sampling frequency, measurement sensitivity, and experiment duration. We recommend that whole-stream tracer additions be augmented with hydraulic and topographic measurements and also with additional tracer measurements made directly in storage zones. We present examples of measurements that encompass interactions across spatial and temporal scales and models that are transferable to a wide range of flow and geomorphic conditions. These results show how the competitive effects between the different forces driving hyporheic flow, operating at different spatial scales, create a situation where hyporheic fluxes cannot be accurately estimated without considering multi-scale effects. Our modeling captures the dominance of small-scale features such as bedforms that drive the majority of hyporheic flow, but it also captures how hyporheic flow is substantially modified by relatively small changes in streamflow or groundwater flow. The additional field measurements add sensitivity and power to whole stream tracer additions by improving resolution of the relative importance of storage at different scales (e.g. bar-scale versus bedform-scale). This information is critical in identifying hot spots where important biogeochemical reactions occur. In summary, interpreting multi-scale interactions in streams requires models that are physically based and that incorporate non-linear process dynamics. Such models can take advantage of increasingly comprehensive field data to integrate transport processes across spatially variable flow and geomorphic conditions. The most useful field and modeling approaches will be those that are simple enough to be easily implemented by users from various disciplines but comprehensive enough to produce meaningful predictions for a wide range of flow and geomorphic scenarios. This capability is needed to support improved strategies for protecting stream ecological health in the face of accelerating land use and climate change.
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in the present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over the traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at sub-optimal results. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system having complex constraints.
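To make the nature-inspired approach concrete, the sketch below applies a plain global-best PSO to a toy three-unit dispatch with quadratic fuel costs and a power-balance penalty. The unit data, penalty weight, and PSO coefficients are assumptions; the paper's modified PSO variants with parameter automation and the 140-unit multi-area system are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 3-unit system: cost a*P^2 + b*P + c, limits [Pmin, Pmax], demand (all assumed)
a = np.array([0.008, 0.009, 0.007]); b = np.array([7.0, 6.3, 6.8]); c = np.array([200., 180., 140.])
pmin = np.array([50., 50., 40.]); pmax = np.array([300., 250., 200.])
demand = 550.0

def cost(P):                        # fuel cost plus penalty for power-balance violation
    fuel = np.sum(a * P**2 + b * P + c, axis=-1)
    return fuel + 1e4 * np.abs(P.sum(axis=-1) - demand)

n_particles, n_iter = 30, 300
w, c1, c2 = 0.72, 1.49, 1.49        # standard PSO coefficients
X = rng.uniform(pmin, pmax, size=(n_particles, 3))
V = np.zeros_like(X)
pbest, pbest_val = X.copy(), cost(X)
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, pmin, pmax)  # respect generator limits
    val = cost(X)
    better = val < pbest_val
    pbest[better], pbest_val[better] = X[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print("dispatch [MW]:", gbest.round(1), "cost:", cost(gbest).round(1))
```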
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Dual pricing algorithm in ISO markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Neill, Richard P.; Castillo, Anya; Eldridge, Brent
The challenge to create efficient market clearing prices in centralized day-ahead electricity markets arises from inherent non-convexities in unit commitment problems. When this aspect is ignored, marginal prices may result in economic losses to market participants who are part of the welfare maximizing solution. In this essay, we present an axiomatic approach to efficient prices and cost allocation for a revenue neutral and non-confiscatory day-ahead market. Current cost allocation practices do not adequately attribute costs based on transparent cost causation criteria. Instead we propose an ex post multi-part pricing scheme, which we refer to as the Dual Pricing Algorithm. Lastly, our approach can be incorporated into current day-ahead markets without altering the market equilibrium.
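A standard textbook illustration (not an example from the paper) of the non-convexity issue described above: with a start-up cost, the marginal energy price can leave a committed generator with negative profit, which is why some multi-part pricing or uplift mechanism is needed. All numbers below are invented.

```python
# Toy two-generator example (all numbers invented) of why energy-only marginal
# prices can be confiscatory when commitment costs make the problem non-convex.
demand = 100.0                                            # MW for a single hour
units = {
    "A": {"mc": 40.0, "start": 0.0,    "cap": 80.0},      # cheap, capacity-limited
    "B": {"mc": 50.0, "start": 2000.0, "cap": 100.0},     # needed for the last 20 MW
}
dispatch = {"A": 80.0, "B": 20.0}   # welfare-maximising commitment and dispatch
price = 50.0                        # marginal unit B sets the energy-only price

for name, g in units.items():
    profit = (price - g["mc"]) * dispatch[name] - g["start"]
    print(name, "energy-only profit ($):", profit)
# Unit B recovers none of its 2000 start-up cost even though it belongs to the
# optimal solution, so a multi-part settlement (e.g. an uplift or start-up
# payment) is needed to keep the market revenue-adequate and non-confiscatory.
```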
First principles cable braid electromagnetic penetration model
Warne, Larry Kevin; Langston, William L.; Basilio, Lorena I.; ...
2016-01-01
The model for penetration of a wire braid is rigorously formulated. Integral formulas are developed from energy principles for both self and transfer immittances in terms of potentials for the fields. The detailed boundary value problem for the wire braid is also set up in a very efficient manner; the braid wires act as sources for the potentials in the form of a sequence of line multi-poles with unknown coefficients that are determined by means of conditions arising from the wire surface boundary conditions. Approximations are introduced to relate the local properties of the braid wires to a simplified infinite periodic planar geometry. Furthermore, this is used to treat nonuniform coaxial geometries including eccentric interior coaxial arrangements and an exterior ground plane.
Hydrodynamically induced oscillations and traffic dynamics in 1D microfluidic networks
NASA Astrophysics Data System (ADS)
Bartolo, Denis; Jeanneret, Raphael
2011-03-01
We report on the traffic dynamics of particles driven through a minimal microfluidic network. Even in the minimal network consisting of a single loop, the traffic dynamics has proven to yield complex temporal patterns, including periodic, multi-periodic or chaotic sequences. This complex dynamics arises from the strongly nonlinear hydrodynamic interactions between the particles that take place at a junction. To better understand the consequences of this nontrivial coupling, we combined theoretical, numerical and experimental efforts and solved the 3-body problem in a 1D loop network. This apparently simple dynamical system revealed a rich and unexpected dynamics, including coherent spontaneous oscillations along closed orbits. Striking similarities between Hamiltonian systems and this driven dissipative system will be explained.
Dual pricing algorithm in ISO markets
O'Neill, Richard P.; Castillo, Anya; Eldridge, Brent; ...
2016-10-10
The challenge to create efficient market clearing prices in centralized day-ahead electricity markets arises from inherent non-convexities in unit commitment problems. When this aspect is ignored, marginal prices may result in economic losses to market participants who are part of the welfare maximizing solution. In this essay, we present an axiomatic approach to efficient prices and cost allocation for a revenue neutral and non-confiscatory day-ahead market. Current cost allocation practices do not adequately attribute costs based on transparent cost causation criteria. Instead we propose an ex post multi-part pricing scheme, which we refer to as the Dual Pricing Algorithm. Lastly, our approach can be incorporated into current day-ahead markets without altering the market equilibrium.
Maximal clique enumeration with data-parallel primitives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lessley, Brenton; Perciano, Talita; Mathai, Manish
The enumeration of all maximal cliques in an undirected graph is a fundamental problem arising in several research areas. We consider maximal clique enumeration on shared-memory, multi-core architectures and introduce an approach consisting entirely of data-parallel operations, in an effort to achieve efficient and portable performance across different architectures. We study the performance of the algorithm via experiments varying over benchmark graphs and architectures. Overall, we observe that our algorithm achieves up to a 33-fold speedup and a 9-fold speedup over state-of-the-art distributed and serial algorithms, respectively, for graphs with higher ratios of maximal cliques to total cliques. Further, we attain additional speedups on a GPU architecture, demonstrating the portable performance of our data-parallel design.
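For reference, the standard serial Bron-Kerbosch recursion with pivoting that enumerates all maximal cliques is sketched below; the paper's contribution is re-expressing this enumeration entirely in data-parallel primitives, which is not shown here, and the example graph is invented.

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate maximal cliques of an undirected graph {v: set(neighbours)}.
    Serial textbook algorithm with pivoting; the paper instead re-expresses
    the enumeration in terms of data-parallel primitives."""
    if not P and not X:
        out.append(set(R))          # R can no longer be extended: maximal clique
        return
    pivot = max(P | X, key=lambda u: len(adj[u] & P))   # pivot cuts branching
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)   # expected: [{0, 1, 2}, {1, 2, 3}]
```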
SVM-Fold: a tool for discriminative multi-class protein fold and superfamily recognition
Melvin, Iain; Ie, Eugene; Kuang, Rui; Weston, Jason; Stafford, William Noble; Leslie, Christina
2007-01-01
Background Predicting a protein's structural class from its amino acid sequence is a fundamental problem in computational biology. Much recent work has focused on developing new representations for protein sequences, called string kernels, for use with support vector machine (SVM) classifiers. However, while some of these approaches exhibit state-of-the-art performance at the binary protein classification problem, i.e. discriminating between a particular protein class and all other classes, few of these studies have addressed the real problem of multi-class superfamily or fold recognition. Moreover, there are only limited software tools and systems for SVM-based protein classification available to the bioinformatics community. Results We present a new multi-class SVM-based protein fold and superfamily recognition system and web server called SVM-Fold, which can be found at . Our system uses an efficient implementation of a state-of-the-art string kernel for sequence profiles, called the profile kernel, where the underlying feature representation is a histogram of inexact matching k-mer frequencies. We also employ a novel machine learning approach to solve the difficult multi-class problem of classifying a sequence of amino acids into one of many known protein structural classes. Binary one-vs-the-rest SVM classifiers that are trained to recognize individual structural classes yield prediction scores that are not comparable, so that standard "one-vs-all" classification fails to perform well. Moreover, SVMs for classes at different levels of the protein structural hierarchy may make useful predictions, but one-vs-all does not try to combine these multiple predictions. To deal with these problems, our method learns relative weights between one-vs-the-rest classifiers and encodes information about the protein structural hierarchy for multi-class prediction. In large-scale benchmark results based on the SCOP database, our code weighting approach significantly improves on the standard one-vs-all method for both the superfamily and fold prediction in the remote homology setting and on the fold recognition problem. Moreover, our code weight learning algorithm strongly outperforms nearest-neighbor methods based on PSI-BLAST in terms of prediction accuracy on every structure classification problem we consider. Conclusion By combining state-of-the-art SVM kernel methods with a novel multi-class algorithm, the SVM-Fold system delivers efficient and accurate protein fold and superfamily recognition. PMID:17570145
Braithwaite, Jeffrey; Westbrook, Johanna; Pawsey, Marjorie; Greenfield, David; Naylor, Justine; Iedema, Rick; Runciman, Bill; Redman, Sally; Jorm, Christine; Robinson, Maureen; Nathan, Sally; Gibberd, Robert
2006-01-01
Background Accreditation has become ubiquitous across the international health care landscape. Award of full accreditation status in health care is viewed, as it is in other sectors, as a valid indicator of high quality organisational performance. However, few studies have empirically demonstrated this assertion. The value of accreditation, therefore, remains uncertain, and this persists as a central legitimacy problem for accreditation providers, policymakers and researchers. The question arises as to how best to research the validity, impact and value of accreditation processes in health care. Most health care organisations participate in some sort of accreditation process and thus it is not possible to study its merits using a randomised controlled strategy. Further, tools and processes for accreditation and organisational performance are multifaceted. Methods/design To understand the relationship between them a multi-method research approach is required which incorporates both quantitative and qualitative data. The generic nature of accreditation standard development and inspection within different sectors enhances the extent to which the findings of in-depth study of accreditation process in one industry can be generalised to other industries. This paper presents a research design which comprises a prospective, multi-method, multi-level, multi-disciplinary approach to assess the validity, impact and value of accreditation. Discussion The accreditation program which assesses over 1,000 health services in Australia is used as an exemplar for testing this design. The paper proposes this design as a framework suitable for application to future international research into accreditation. Our aim is to stimulate debate on the role of accreditation and how to research it. PMID:16968552
Erhart, Michael; Wetzel, Ralf M; Krügel, André; Ravens-Sieberer, Ulrike
2009-12-30
Telephone interviews have become established as an alternative to traditional mail surveys for collecting epidemiological data in public health research. However, the use of telephone and mail surveys raises the question of to what extent the results of different data collection methods deviate from one another. We therefore set out to study possible differences in using telephone and mail survey methods to measure health-related quality of life and emotional and behavioural problems in children and adolescents. A total of 1700 German children aged 8-18 years and their parents were interviewed randomly either by telephone or by mail. Health-related Quality of Life (HRQoL) and mental health problems (MHP) were assessed using the KINDL-R Quality of Life instrument and the Strengths and Difficulties Questionnaire (SDQ) children's self-report and parent proxy report versions. Mean Differences ("d" effect size) and differences in Cronbach alpha were examined across modes of administration. Pearson correlation between children's and parents' scores was calculated within a multi-trait-multi-method (MTMM) analysis and compared across survey modes using Fisher-Z transformation. Telephone and mail survey methods resulted in similar completion rates and similar socio-demographic and socio-economic makeups of the samples. Telephone methods resulted in more positive self- and parent proxy reports of children's HRQoL (SMD < or = 0.27) and MHP (SMD < or = 0.32) on many scales. For the phone administered KINDL, lower Cronbach alpha values (self/proxy Total: 0.79/0.84) were observed (mail survey self/proxy Total: 0.84/0.87). KINDL MTMM results were weaker for the phone surveys: mono-trait-multi-method mean r = 0.31 (mail: r = 0.45); multi-trait-mono-method mean (self/parents) r = 0.29/0.36 (mail: r = 0.34/0.40); multi-trait-multi-method mean r = 0.14 (mail: r = 0.21). Weaker MTMM results were also observed for the phone administered SDQ: mono-trait-multi-method mean r = 0.32 (mail: r = 0.40); multi-trait-mono-method mean (self/parents) r = 0.24/0.30 (mail: r = 0.20/0.32); multi-trait-multi-method mean r = 0.14 (mail = 0.14). The SDQ classification into borderline and abnormal for some scales was affected by the method (OR = 0.36-1.55). The observed differences between phone and mail surveys are small but should be regarded as relevant in certain settings. Therefore, while both methods are valid, some changes are necessary. The weaker reliability and MTMM validity associated with phone methods necessitates improved phone adaptations of paper and pencil questionnaires. The effects of phone versus mail survey modes are partly different across constructs/measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu
We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Development and Applications of Advanced Electronic Structure Methods
NASA Astrophysics Data System (ADS)
Bell, Franziska
This dissertation contributes to three different areas in electronic structure theory. The first part of this thesis advances the fundamentals of orbital active spaces. Orbital active spaces are not only essential in multi-reference approaches, but have also become of interest in single-reference methods as they allow otherwise intractably large systems to be studied. However, despite their great importance, the optimal choice and, more importantly, their physical significance are still not fully understood. In order to address this problem, we studied the higher-order singular value decomposition (HOSVD) in the context of electronic structure methods. We were able to gain a physical understanding of the resulting orbitals and proved a connection to unrelaxed natural orbitals in the case of Moller-Plesset perturbation theory to second order (MP2). In the quest to find the optimal choice of the active space, we proposed a HOSVD for energy-weighted integrals, which yielded the fastest convergence in MP2 correlation energy for small- to medium-sized active spaces to date, and is also potentially transferable to coupled-cluster theory. In the second part, we studied monomeric and dimeric glycerol radical cations and their photo-induced dissociation in collaboration with Prof. Leone and his group. Understanding the mechanistic details involved in these processes are essential for further studies on the combustion of glycerol and carbohydrates. To our surprise, we found that in most cases, the experimentally observed appearance energies arise from the separation of product fragments from one another rather than rearrangement to products. The final chapters of this work focus on the development, assessment, and application of the spin-flip method, which is a single-reference approach, but capable of describing multi-reference problems. Systems exhibiting multi-reference character, which arises from the (near-) degeneracy of orbital energies, are amongst the most interesting in chemistry, biology and materials science, yet amongst the most challenging to study with electronic structure methods. In particular, we explored a substituted dimeric BPBP molecule with potential tetraradical character, which gained attention as one of the most promising candidates for an organic conductor. Furthermore, we extended the spin-flip approach to include variable orbital active spaces and multiple spin-flips. This allowed us to perform wave-function-based studies of ground- and excited-states of polynuclear metal complexes, polyradicals, and bond-dissociation processes involving three or more bonds.
EPR-dosimetry of ionizing radiation
NASA Astrophysics Data System (ADS)
Popova, Mariia; Vakhnin, Dmitrii; Tyshchenko, Igor
2017-09-01
This article discusses the problems that arise during the radiation sterilization of medical products. A solution based on alanine EPR dosimetry is proposed. The spectrometer parameters and the methods of absorbed-dose calculation are given. In addition, the problems that arise during irradiation with heavy particles are investigated.
Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design
NASA Astrophysics Data System (ADS)
Singh, A.; Minsker, B.
2003-12-01
Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems comes from the hydraulic conductivity values for the aquifer, upon which the prediction of flow and transport of contaminants are dependent. For a remediation solution to be reliable in practice it is important that it is robust over the potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to get reliable as well as Pareto optimal solutions. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. However with multiple objectives the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers. These include a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case-study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
A variable-gain output feedback control design methodology
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.
1989-01-01
A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.
NASA Astrophysics Data System (ADS)
Jiang, Zeyun; Couples, Gary D.; Lewis, Helen; Mangione, Alessandro
2018-07-01
Limestones containing abundant disc-shaped fossil Nummulites can form significant hydrocarbon reservoirs but they have a distinctly heterogeneous distribution of pore shapes, sizes and connectivities, which makes it particularly difficult to calculate petrophysical properties and consequent flow outcomes. The severity of the problem rests on the wide length-scale range from the millimetre scale of the fossil's pore space to the micron scale of rock matrix pores. This work develops a technique to incorporate multi-scale void systems into a pore network, which is used to calculate the petrophysical properties for subsequent flow simulations at different stages in the limestone's petrophysical evolution. While rock pore size, shape and connectivity can be determined, with varying levels of fidelity, using techniques such as X-ray computed tomography (CT) or scanning electron microscopy (SEM), this work represents a more challenging class where the rock of interest is insufficiently sampled or, as here, has been overprinted by extensive chemical diagenesis. The main challenge is integrating multi-scale void structures derived from both SEM and CT images into a single model or a pore-scale network while still honouring the nature of the connections across these length scales. Pore network flow simulations are used to illustrate the technique but, of equal importance, to demonstrate how supportable earlier-stage petrophysical property distributions can be used to assess the viability of several potential geological event sequences. The results of our flow simulations on generated models highlight the requirement for correct determination of the dominant pore scales (one or more of nm, μm, mm, cm), the spatial correlation and the cross-scale connections.
Understanding hydraulic fracturing: a multi-scale problem.
Hyman, J D; Jiménez-Martínez, J; Viswanathan, H S; Carey, J W; Porter, M L; Rougier, E; Karra, S; Kang, Q; Frash, L; Chen, L; Lei, Z; O'Malley, D; Makedonska, N
2016-10-13
Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages.This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).
Understanding hydraulic fracturing: a multi-scale problem
Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.
2016-01-01
Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental efforts/methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597789
Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0
NASA Astrophysics Data System (ADS)
Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun
2013-02-01
We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summaryProgram title: SecDec 2.0 Catalogue identifier: AEIR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 156829 No. of bytes in distributed program, including test data, etc.: 2137907 Distribution format: tar.gz Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182(2011)1566 Does the new version supersede the previous version?: Yes Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way. Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization. Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters. Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0. Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.
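A textbook illustration (not taken from the SecDec distribution) of the sector decomposition step the program automates is given below: an overlapping singularity is factorised so that the pole is extracted analytically while the remaining finite integral is left for numerical evaluation, order by order in the Laurent expansion in ɛ.

```latex
% Textbook iterated sector decomposition (illustrative, not from SecDec itself):
% the overlapping endpoint singularity of
%   I(\epsilon) = \int_0^1\!dx \int_0^1\!dy \,(x+y)^{-2+\epsilon}
% is factorised by splitting into the sectors x>y and y>x and remapping each
% to the unit cube (y = x t in the first sector, x = y t in the second):
\begin{align}
  I(\epsilon) &= 2\int_0^1\!dx\int_0^1\!dt\; x\,\bigl[x(1+t)\bigr]^{-2+\epsilon}
               = 2\int_0^1\!dx\, x^{-1+\epsilon}\int_0^1\!dt\,(1+t)^{-2+\epsilon} \\
              &= \frac{2}{\epsilon}\int_0^1\!dt\,(1+t)^{-2+\epsilon}
               = \frac{1}{\epsilon} + \mathcal{O}(1) ,
\end{align}
% so the pole is isolated analytically in the factor x^{-1+\epsilon}, and the
% remaining t-integral (finite at \epsilon = 0) is the piece that is integrated
% numerically, here by Monte Carlo, for each coefficient of the Laurent series.
```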
NASA Astrophysics Data System (ADS)
Donner, Reik V.; Potirakis, Stelios M.; Barbosa, Susana M.; Matos, Jose A. O.
2015-04-01
The presence or absence of long-range correlations in environmental radioactivity fluctuations has recently attracted considerable interest. Among a multiplicity of practically relevant applications, identifying and disentangling the environmental factors controlling the variable concentrations of the radioactive noble gas Radon is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposition. In this work, we present a critical re-assessment of a multiplicity of complementary methods that have been previously applied for evaluating the presence of long-range correlations and fractal scaling in environmental Radon variations with a particular focus on the specific properties of the underlying time series. As an illustrative case study, we subsequently re-analyze two high-frequency records of indoor Radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements at a high temporal resolution of five minutes. Our results reveal that at the study site, Radon concentrations exhibit complex multi-scale dynamics with qualitatively different properties at different time-scales: (i) essentially white noise in the high-frequency part (up to time-scales of about one hour), (ii) spurious indications of a non-stationary, apparently long-range correlated process (at time scales between hours and one day) arising from marked periodic components probably related to tidal frequencies, and (iii) low-frequency variability indicating a true long-range dependent process, which might be dominated by a response to meteorological drivers. In the presence of such multi-scale variability, common estimators of long-range memory in time series are necessarily prone to fail if applied to the raw data without previous separation of time-scales with qualitatively different dynamics. We emphasize that similar properties can be found in other types of geophysical time series (for example, tide gauge records), calling for a careful application of time series analysis tools when studying such data.
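One of the standard long-range-memory estimators alluded to above is detrended fluctuation analysis (DFA). A minimal DFA-1 sketch follows purely to make the scale-by-scale estimation concrete; it is not the authors' analysis pipeline, and the white-noise test signal is synthetic.

```python
import numpy as np

def dfa(x, scales):
    """Minimal detrended fluctuation analysis (DFA-1): returns F(s) for each
    window size s.  A straight line of slope alpha in log F vs log s indicates
    power-law (long-range) correlations; alpha ~ 0.5 corresponds to white noise."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                        # detrend each window linearly
            coef = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(20000)                  # uncorrelated test signal
scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("estimated DFA exponent:", round(alpha, 2))   # ~0.5 for white noise
```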
Parasol: An Architecture for Cross-Cloud Federated Graph Querying
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieberman, Michael; Choudhury, Sutanay; Hughes, Marisa
2014-06-22
Large scale data fusion of multiple datasets can often provide insights that examining datasets individually cannot. However, when these datasets reside in different data centers and cannot be collocated due to technical, administrative, or policy barriers, a unique set of problems arise that hamper querying and data fusion. To address these problems, a system and architecture named Parasol is presented that enables federated queries over graph databases residing in multiple clouds. Parasol’s design is flexible and requires only minimal assumptions for participant clouds. Query optimization techniques are also described that are compatible with Parasol’s lightweight architecture. Experiments on a prototype implementation of Parasol indicate its suitability for cross-cloud federated graph queries.
NASA Technical Reports Server (NTRS)
Rabitz, Herschel
1987-01-01
The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominant, strongly coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self-similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.
NASA Astrophysics Data System (ADS)
Harfst, S.; Portegies Zwart, S.; McMillan, S.
2008-12-01
We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
Lenarda, P; Paggi, M
A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature and crack opening dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity freeze tests with a temperature dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through Silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.
NASA Astrophysics Data System (ADS)
Heath, J. E.; Dewers, T. A.; McPherson, B. J.; Wilson, T. H.; Flach, T.
2009-12-01
Understanding and characterizing transport properties of fine-grained rocks is critical in development of shale gas plays or assessing retention of CO2 at geologic storage sites. Difficulties arise in that both small scale (i.e., ~ nm) properties of the rock matrix and much larger scale fractures, faults, and sedimentological architecture govern migration of multiphase fluids. We present a multi-scale investigation of sealing and transport properties of the Kirtland Formation, which is a regional aquitard and reservoir seal in the San Juan Basin, USA. Sub-micron dual FIB/SEM imaging and reconstruction of 3D pore networks in core samples reveal a variety of pore types, including slit-shaped pores that are co-located with sedimentary structures and variations in mineralogy. Micron-scale chemical analysis and XRD reveal a mixture of mixed-layer smectite/illite, chlorite, quartz, and feldspar with little organic matter. Analysis of sub-micron digital reconstructions, mercury capillary injection pressure, and gas breakthrough measurements indicate a high quality sealing matrix. Natural full and partially mineralized fractures observed in core and in FMI logs include those formed from early soil-forming processes, differential compaction, and tectonic events. The potential impact of both fracture and matrix properties on large-scale transport is investigated through an analysis of natural helium from core samples, 3D seismic data and poro-elastic modeling. While seismic interpretations suggest considerable fracturing of the Kirtland, large continuous fracture zones and faults extending through the seal to the surface cannot be inferred from the data. Observed Kirtland Formation multi-scale transport properties are included as part of a risk assessment methodology for CO2 storage. Acknowledgements: The authors gratefully acknowledge the U.S. Department of Energy’s (DOE) National Energy Technology Laboratory for sponsoring this project. The DOE’s Basic Energy Science Office funded the dual FIB/SEM analysis. The Kirtland Formation overlies the coal seams of the Fruitland into which CO2 has been injected as a Phase II demonstration of the Southwest Regional Partnership on Carbon Sequestration. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the U.S. Department of Energy under contract DE-ACOC4-94AL85000.
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photo- acoustic tomography require reconstructing initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists in solving back in time on the interval [0, T] the initial/boundary value problem for the wave equation, with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
Montage: Improvising in the Land of Action Research
ERIC Educational Resources Information Center
Windle, Sheila; Sefton, Terry
2011-01-01
This paper and its appended multi-media production describe the rationale and process of creating and presenting a "digitally saturated" (Lankshear & Knobel, 2003), multi-layered, synchronous "montage" (Denzin & Lincoln, 2003) of educational Action Research findings. The authors contend that this type of presentation, arising from the fusion of…
Mind the Costs: Rescaling and Multi-Level Environmental Governance in Venice Lagoon
Fritsch, Oliver
2010-01-01
Competences over environmental matters are distributed across agencies at different scales on a national-to-local continuum. This article adopts a transaction costs economics perspective in order to explore the question whether, in the light of a particular problem, the scale at which a certain competence is attributed can be reconsidered. Specifically, it tests whether a presumption of least-cost operation concerning an agency at a given scale can hold. By doing so, it investigates whether the rescaling of certain tasks, aiming at solving a scale-related problem, is likely to produce an increase in costs for day-to-day agency operations as compared to the status quo. The article explores such a perspective for the case of Venice Lagoon. The negative aspects of the present arrangement concerning fishery management and morphological remediation are directly linked to the scale of the agencies involved. The analysis suggests that scales have been chosen correctly, at least from the point of view of the costs incurred to the agencies involved. Consequently, a rescaling of those agencies does not represent a viable option. PMID:20162274
Mind the Costs: Rescaling and Multi-Level Environmental Governance in Venice Lagoon
NASA Astrophysics Data System (ADS)
Roggero, Matteo; Fritsch, Oliver
2010-07-01
Competences over environmental matters are distributed across agencies at different scales on a national-to-local continuum. This article adopts a transaction costs economics perspective in order to explore whether, in the light of a particular problem, the scale at which a certain competence is attributed can be reconsidered. Specifically, it tests whether a presumption of least-cost operation concerning an agency at a given scale can hold. In doing so, it investigates whether the rescaling of certain tasks, aimed at solving a scale-related problem, is likely to produce an increase in costs for day-to-day agency operations as compared to the status quo. The article explores such a perspective for the case of Venice Lagoon. The negative aspects of the present arrangement concerning fishery management and morphological remediation are directly linked to the scale of the agencies involved. The analysis suggests that scales have been chosen correctly, at least from the point of view of the costs incurred by the agencies involved. Consequently, a rescaling of those agencies does not represent a viable option.
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a significant challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
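A minimal sketch of a GIST-style iteration follows, shown with the soft-thresholding proximal map of the L1 penalty for concreteness; non-convex penalties such as capped-L1 or MCP would enter only through their own closed-form proximal operators. The function name, the least-squares loss, and the sufficient-decrease constant sigma are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def gist_sketch(X, y, lam, eta=2.0, sigma=0.1, max_iter=200, tol=1e-8):
    """Proximal gradient loop with Barzilai-Borwein step initialization
    and a backtracking line search enforcing sufficient decrease."""
    n, d = X.shape
    w = np.zeros(d)
    obj = lambda v: 0.5 * np.sum((X @ v - y) ** 2) + lam * np.sum(np.abs(v))
    grad = lambda v: X.T @ (X @ v - y)
    prox = lambda z, thr: np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

    t = 1.0                                   # inverse step size
    w_prev, g_prev = w.copy(), grad(w)
    for it in range(max_iter):
        g = grad(w)
        if it > 0:                            # BB initialization of t
            dw, dg = w - w_prev, g - g_prev
            if dw @ dw > 0:
                t = max(abs(dw @ dg) / (dw @ dw), 1e-10)
        for _ in range(60):                   # backtracking line search
            w_new = prox(w - g / t, lam / t)
            if obj(w_new) <= obj(w) - 0.5 * sigma * t * np.sum((w_new - w) ** 2):
                break
            t *= eta
        w_prev, g_prev = w, g
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```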
Seeing in the dark - I. Multi-epoch alchemy
NASA Astrophysics Data System (ADS)
Huff, Eric M.; Hirata, Christopher M.; Mandelbaum, Rachel; Schlegel, David; Seljak, Uroš; Lupton, Robert H.
2014-05-01
Weak lensing by large-scale structure is an invaluable cosmological tool given that most of the energy density of the concordance cosmology is invisible. Several large ground-based imaging surveys will attempt to measure this effect over the coming decade, but reliable control of the spurious lensing signal introduced by atmospheric turbulence and telescope optics remains a challenging problem. We address this challenge with a demonstration that point spread function (PSF) effects on measured galaxy shapes in the Sloan Digital Sky Survey (SDSS) can be corrected with existing analysis techniques. In this work, we co-add existing SDSS imaging on the equatorial stripe in order to build a data set with the statistical power to measure cosmic shear, while using a rounding kernel method to null out the effects of the anisotropic PSF. We build a galaxy catalogue from the combined imaging, characterize its photometric properties and show that the spurious shear remaining in this catalogue after the PSF correction is negligible compared to the expected cosmic shear signal. We identify a new source of systematic error in the shear-shear autocorrelations arising from selection biases related to masking. Finally, we discuss the circumstances in which this method is expected to be useful for upcoming ground-based surveys that have lensing as one of the science goals, and identify the systematic errors that can reduce its efficacy.
Impact of the inherent separation of scales in the Navier-Stokes-αβ equations.
Kim, Tae-Yeon; Cassiani, Massimo; Albertson, John D; Dolbow, John E; Fried, Eliot; Gurtin, Morton E
2009-04-01
We study the effect of the length scales α and β in the Navier-Stokes-αβ equations on the energy spectrum and the alignment between the vorticity and the eigenvectors of the stretching tensor in three-dimensional homogeneous and isotropic turbulent flows in a periodic cubic domain, including the limiting cases of the Navier-Stokes-α and Navier-Stokes equations. A significant increase in the accuracy of the energy spectrum at large wave numbers arises for β
Ultra-Parameterized CAM: Progress Towards Low-Cloud Permitting Superparameterization
NASA Astrophysics Data System (ADS)
Parishani, H.; Pritchard, M. S.; Bretherton, C. S.; Khairoutdinov, M.; Wyant, M. C.; Singh, B.
2016-12-01
A leading source of uncertainty in climate feedback arises from the representation of low clouds, which are not resolved but depend on small-scale physical processes (e.g. entrainment, boundary layer turbulence) that are heavily parameterized. We show results from recent attempts to achieve an explicit representation of low clouds by pushing the computational limits of cloud superparameterization to resolve boundary-layer eddy scales relevant to marine stratocumulus (250m horizontal and 20m vertical length scales). This extreme configuration is called "ultraparameterization". Effects of varying horizontal vs. vertical resolution are analyzed in the context of altered constraints on the turbulent kinetic energy statistics of the marine boundary layer. We show that 250m embedded horizontal resolution leads to a more realistic boundary layer vertical structure, but also to an unrealistic cloud pulsation that cannibalizes the time-mean liquid water path (LWP). We explore the hypothesis that feedbacks involving horizontal advection (not typically encountered in offline LES that neglect this degree of freedom) may conspire to produce such effects and present strategies to compensate. The results are relevant to understanding the emergent behavior of quasi-resolved low cloud decks in a multi-scale modeling framework within a previously unencountered grey zone of better resolved boundary-layer turbulence.
NASA Astrophysics Data System (ADS)
Kim, S. C.; Hayter, E. J.; Pruhs, R.; Luong, P.; Lackey, T. C.
2016-12-01
The geophysical-scale circulation of the Mid-Atlantic Bight and hydrologic inputs from adjacent Chesapeake Bay watersheds and tributaries influence the hydrodynamics and transport of the James River estuary. Both barotropic and baroclinic transport govern the hydrodynamics of this partially stratified estuary. Modeling the placement of dredged sediment requires accommodating this wide spectrum of atmospheric and hydrodynamic scales. The Geophysical Scale Multi-Block (GSMB) Transport Modeling System is a collection of multiple well-established and USACE-approved process models. Taking advantage of the parallel computing capability of multi-block modeling, we performed a one-year, three-dimensional simulation of hydrodynamics to support modeling of dredged sediment placement, transport, and morphology change. Model forcing includes spatially and temporally varying meteorological conditions and hydrological inputs from the watershed. Surface heat flux estimates were derived from the National Solar Radiation Database (NSRDB). The open-water boundary condition for water level was obtained from an ADCIRC model application of the U.S. East Coast. Temperature-salinity boundary conditions were obtained from the Environmental Protection Agency (EPA) Chesapeake Bay Program (CBP) long-term monitoring stations database. Simulated water levels were calibrated and verified by comparison with National Oceanic and Atmospheric Administration (NOAA) tide gauge locations. A harmonic analysis of the modeled tides was performed and compared with NOAA tide prediction data. In addition, project-specific circulation was verified using US Army Corps of Engineers (USACE) drogue data. Salinity and temperature transport was verified at seven CBP long-term monitoring stations along the navigation channel. Simulation and analysis of model results suggest that GSMB is capable of resolving the long-duration, multi-scale processes inherent to practical engineering problems such as dredged material placement stability.
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser technology is widely used to collect surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply a perceptual metric, Just-Noticeable-Difference, to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction of objects.
NASA Astrophysics Data System (ADS)
Razguli, A. V.; Iroshnikov, N. G.; Larichev, A. V.; Romanenko, T. E.; Goncharov, A. S.
2017-05-01
In this paper we deal with the problem of optical sectioning. This is a post-processing step in the investigation of 3D translucent medical objects based on rapid refocusing of the imaging system by adaptive optics techniques. Each image captured in the focal plane can be represented as the sum of the in-focus true section and out-of-focus images of the neighboring sections in depth, which are undesirable in the subsequent reconstruction of the 3D object. The problem of optical sectioning under consideration is to develop a robust approach capable of obtaining a stack of cross-section images free of such distortions. For a typical sectioning problem arising in ophthalmology we propose a local iterative method in the Fourier spectral plane. Compared to non-local constant parameter selection for the whole spectral domain, the method demonstrates both improved sectioning results and a good level of scalability when implemented on multi-core CPUs.
Kirkengen, Anna Luise; Ekeland, Tor-Johan; Getz, Linn; Hetlevik, Irene; Schei, Edvin; Ulvestad, Elling; Vetlesen, Arne Johan
2016-08-01
Escalating costs, increasing multi-morbidity, medically unexplained health problems, complex risk, poly-pharmacy and antibiotic resistance can be regarded as artefacts of the traditional knowledge production in Western medicine, arising from its particular worldview. Our paper presents a historically grounded critical analysis of this view. The materialistic shift of Enlightenment philosophy, separating subjectivity from bodily matter, became normative for modern medicine and yielded astonishing results. The traditional dichotomies of mind/body and subjective/objective are, however, incompatible with modern biological theory. Medical knowledge ignores central tenets of human existence, notably the physiological impact of subjective experience, relationships, history and sociocultural contexts. Biomedicine will not succeed in resolving today's poorly understood health problems by doing 'more of the same'. We must acknowledge that health, sickness and bodily functioning are interwoven with human meaning-production, fundamentally personal and biographical. This implies that the biomedical framework, although having engendered 'success stories' like the era of antibiotics, needs to be radically revised. © 2015 John Wiley & Sons, Ltd.
Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach
NASA Astrophysics Data System (ADS)
Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo
2002-12-01
We focus on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero-crossing rate and Bayesian classification. It is very simple from a computational point of view and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically, a multi-layer perceptron). In this case we obtain better performance, at the expense of a limited growth in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even when low-cost embedded systems are used.
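The following is a rough sketch of the feature-extraction stage: zero-crossing-rate statistics computed per frame, which could then feed either a Bayesian classifier or a small multi-layer perceptron. The frame length, the particular statistics, and the scikit-learn classifier shown in the comments are assumptions, not the paper's exact configuration.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive samples within a frame whose sign differs."""
    return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

def zcr_features(signal, frame_len=1024):
    """Per-frame ZCR values; simple statistics over the frames form the
    feature vector for one audio segment."""
    usable = len(signal) // frame_len * frame_len
    frames = signal[:usable].reshape(-1, frame_len)
    zcr = np.apply_along_axis(zero_crossing_rate, 1, frames)
    return np.array([zcr.mean(), zcr.std(), np.percentile(zcr, 90)])

# A small multi-layer perceptron classifier could then be trained, e.g.:
# from sklearn.neural_network import MLPClassifier
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)
# clf.fit(np.vstack([zcr_features(s) for s in train_signals]), train_labels)
```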
Fessner, Wolf-Dieter
2015-12-25
Systems Biocatalysis is an emerging concept of organizing enzymes in vitro to construct complex reaction cascades for an efficient, sustainable synthesis of valuable chemical products. The strategy merges the synthetic focus of chemistry with the modular design of biological systems, which is similar to metabolic engineering of cellular production systems but can be realized at a far lower level of complexity from a true reductionist approach. Such operations are free from material erosion by competing metabolic pathways, from kinetic restrictions by physical barriers and regulating circuits, and from toxicity problems with reactive foreign substrates, which are notorious problems in whole-cell systems. A particular advantage of cell-free concepts arises from the inherent opportunity to construct novel biocatalytic reaction systems for the efficient synthesis of non-natural products ("artificial metabolisms") by using enzymes specifically chosen or engineered for non-natural substrate promiscuity. Examples illustrating the technology from our laboratory are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.
Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration
NASA Astrophysics Data System (ADS)
Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel
2017-11-01
In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
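As a minimal illustration of the timescale-separation strategy, the sketch below performs one macro-step of a two-rate Adams-Bashforth-2 scheme: slow variables take a single large step while fast variables sub-cycle with smaller steps. Freezing the slow state over the macro-step (rather than interpolating it) and the function signature are simplifying assumptions, not the scheme used in the talk.

```python
def mrab2_step(y_slow, y_fast, f_slow, f_fast, H, m, hist):
    """One macro-step of a two-rate Adams-Bashforth-2 sketch.
    y_slow / y_fast: current slow and fast states; f_slow / f_fast: their
    right-hand sides f(y_slow, y_fast); H: macro-step; m: number of fast
    substeps; hist = (fs_prev, ff_prev): derivatives from the previous
    (sub)steps, bootstrapped by the caller on the first step."""
    h = H / m
    fs_prev, ff_prev = hist
    fs = f_slow(y_slow, y_fast)
    y_slow_new = y_slow + H * (1.5 * fs - 0.5 * fs_prev)   # slow AB2 step
    yf, ff = y_fast, ff_prev
    for _ in range(m):                                     # fast sub-cycling
        ff_new = f_fast(y_slow, yf)                        # slow state frozen
        yf = yf + h * (1.5 * ff_new - 0.5 * ff)
        ff = ff_new
    return y_slow_new, yf, (fs, ff)
```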
NASA Astrophysics Data System (ADS)
Penta, Raimondo; Gerisch, Alf
2017-01-01
The classical asymptotic homogenization approach for linear elastic composites with discontinuous material properties is considered as a starting point. The sharp length scale separation between the fine periodic structure and the whole material formally leads to anisotropic elastic-type balance equations on the coarse scale, where the arising fourth rank operator is to be computed solving single periodic cell problems on the fine scale. After revisiting the derivation of the problem, which here explicitly points out how the discontinuity in the individual constituents' elastic coefficients translates into stress jump interface conditions for the cell problems, we prove that the gradient of the cell problem solution is minor symmetric and that its cell average is zero. This property holds for perfect interfaces only (i.e., when the elastic displacement is continuous across the composite's interface) and can be used to assess the accuracy of the computed numerical solutions. These facts are further exploited, together with the individual constituents' elastic coefficients and the specific form of the cell problems, to prove a theorem that characterizes the fourth rank operator appearing in the coarse-scale elastic-type balance equations as a composite material effective elasticity tensor. We both recover known facts, such as minor and major symmetries and positive definiteness, and establish new facts concerning the Voigt and Reuss bounds. The latter are shown for the first time without assuming any equivalence between coarse and fine-scale energies (Hill's condition), which, in contrast to the case of representative volume elements, does not identically hold in the context of asymptotic homogenization. We conclude with instructive three-dimensional numerical simulations of a soft elastic matrix with an embedded cubic stiffer inclusion to show the profile of the physically relevant elastic moduli (Young's and shear moduli) and Poisson's ratio at inclusion volume fractions increasing up to 100%, thus providing a proxy for the design of artificial elastic composites.
a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
NASA Astrophysics Data System (ADS)
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitoes, which is a major problem for the people living in the surroundings. Chemical pesticides are currently used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance of the resulting policies was evaluated by simulating the online operating process for historical inflow scenarios and synthetic inflow forecasts. The simulations are based on a combined mid- and short-term planning model in which the value function derived in the mid-term planning phase provides the value of the policy at the end of the short-term operating horizon. While a purely deterministic linear analysis provided rather optimistic results, the stochastic model allowed for a more accurate evaluation of trade-offs and limitations of alternative operating strategies for the Dalälven reservoir network.
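As a toy stand-in for the ADP scheme, the sketch below performs fitted value iteration for a single reservoir with sampled stochastic inflows. The storage and release grids, the reward signature, and the function name are illustrative assumptions; the study itself uses policy iteration with function approximators over a 13-reservoir network.

```python
import numpy as np

def fitted_value_iteration(inflow_samples, release_grid, storage_grid,
                           reward, n_iter=50, gamma=0.98):
    """Approximate the value function on an increasing storage grid by
    iterating the Bellman operator, with expectations over sampled
    stochastic inflows (the off-line learning step described above).
    reward(storage, release) is an assumed scalar objective."""
    V = np.zeros(len(storage_grid))
    s_max = storage_grid[-1]
    for _ in range(n_iter):
        V_new = np.empty_like(V)
        for i, s in enumerate(storage_grid):
            best = -np.inf
            for r in release_grid:
                r_eff = min(r, s)           # cannot release more than stored
                s_next = np.clip(s - r_eff + inflow_samples, 0.0, s_max)
                v_next = np.interp(s_next, storage_grid, V).mean()
                best = max(best, reward(s, r_eff) + gamma * v_next)
            V_new[i] = best
        V = V_new
    return V   # greedy policies can then be read off this value function
```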
Hallock, Michael J; Stone, John E; Roberts, Elijah; Fry, Corey; Luthey-Schulten, Zaida
2014-05-01
Simulation of in vivo cellular processes with the reaction-diffusion master equation (RDME) is a computationally expensive task. Our previous software enabled simulation of inhomogeneous biochemical systems for small bacteria over long time scales using the MPD-RDME method on a single GPU. Simulations of larger eukaryotic systems exceed the on-board memory capacity of individual GPUs, and long time simulations of modest-sized cells such as yeast are impractical on a single GPU. We present a new multi-GPU parallel implementation of the MPD-RDME method based on a spatial decomposition approach that supports dynamic load balancing for workstations containing GPUs of varying performance and memory capacity. We take advantage of high-performance features of CUDA for peer-to-peer GPU memory transfers and evaluate the performance of our algorithms on state-of-the-art GPU devices. We present parallel efficiency and performance results for simulations using multiple GPUs as system size, particle counts, and number of reactions grow. We also demonstrate multi-GPU performance in simulations of the Min protein system in E. coli. Moreover, our multi-GPU decomposition and load balancing approach can be generalized to other lattice-based problems.
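A small sketch of the load-balancing idea: lattice planes are split among devices in proportion to a per-GPU performance weight, so slower or smaller-memory devices receive a correspondingly smaller slab. The plane-wise 1-D decomposition and the function name are assumptions for illustration, not the MPD-RDME implementation.

```python
import numpy as np

def partition_lattice(nz, gpu_weights):
    """Split nz lattice planes among GPUs in proportion to a per-device
    weight (e.g. measured throughput or memory capacity)."""
    w = np.asarray(gpu_weights, dtype=float)
    ideal = np.cumsum(w / w.sum()) * nz        # ideal cumulative plane counts
    bounds = np.rint(ideal).astype(int)
    starts = np.concatenate(([0], bounds[:-1]))
    return list(zip(starts, bounds))           # (start, stop) plane range per GPU

# Example: partition_lattice(128, [1.0, 1.0, 0.5]) assigns the slower third
# device roughly half as many planes as each of the two faster ones.
```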
Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number
NASA Astrophysics Data System (ADS)
Smith, W. R.; Wang, Q. X.
2017-08-01
The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution of the bubble radius history is obtained for the Rayleigh-Plesset equation in closed form, including both viscous and surface tension effects. Some important formulae are derived, including the following: the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale at leading order, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formulae for the energy of the bubble system, the frequency of oscillation, and the minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions through large changes in the phase shift.
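For comparison with the asymptotic envelope, one can integrate a dimensionless Rayleigh-Plesset equation numerically; the nondimensionalization, parameter values, and polytropic gas law below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rayleigh_plesset(t, y, Re, We, p_g0, kappa=1.4):
    """Dimensionless Rayleigh-Plesset right-hand side (assumed form):
    R R'' + 1.5 R'^2 = p_g0 R^(-3*kappa) - 1 - 4 R'/(Re R) - 2/(We R)."""
    R, Rdot = y
    p_gas = p_g0 * R ** (-3 * kappa)
    Rddot = (p_gas - 1.0 - 4.0 * Rdot / (Re * R) - 2.0 / (We * R)
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

# Long-time decay of a large-amplitude oscillation at large Reynolds number,
# to be compared against the multi-scale amplitude envelope:
sol = solve_ivp(rayleigh_plesset, (0.0, 200.0), [2.0, 0.0],
                args=(100.0, 50.0, 1.0), max_step=0.01, rtol=1e-8)
```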
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we show a number of simulations done for large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Markidis, S., Lapenta, G., Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D," Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
Multiscale recurrence analysis of spatio-temporal data
NASA Astrophysics Data System (ADS)
Riedl, M.; Marwan, N.; Kurths, J.
2015-12-01
The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales that can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as more complicated examples, pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. The more complicated examples especially show the usefulness of the multi-scale consideration in taking spatial patterns of different scales and with different rhythms into account. This mapogram approach therefore promises new insights into problems of climatology, ecology, or medicine.
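A bare-bones sketch of the construction: pairwise similarities between spatial snapshots are thresholded into a recurrence matrix, from which standard recurrence quantification measures follow. The stand-in similarity in the final comment is an assumption; the paper's mapogram would take its place.

```python
import numpy as np

def recurrence_plot(snapshots, similarity, threshold):
    """Build a recurrence matrix from spatially distributed snapshots:
    snapshots[i] is the field at time i, similarity(a, b) returns a scalar,
    and two times recur if their similarity exceeds the threshold."""
    n = len(snapshots)
    R = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i, n):
            R[i, j] = R[j, i] = int(similarity(snapshots[i], snapshots[j]) >= threshold)
    return R

def recurrence_rate(R):
    """Simplest recurrence quantification measure: fraction of recurrent pairs."""
    return R.mean()

# Crude spatial-histogram similarity as a mapogram stand-in (an assumption):
# similarity = lambda a, b: -np.abs(np.histogram(a, 16)[0]
#                                   - np.histogram(b, 16)[0]).sum()
```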
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution be consistent across multiple scales. Unfortunately, the preferred mathematical construct offering this behavior in classical image/signal processing, wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to a uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive non-Euclidean wavelet-based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with the state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
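A minimal sketch of spectral wavelets on a mesh graph follows, assuming a dense eigendecomposition of the combinatorial Laplacian and a simple band-pass kernel; the paper's construction and kernel choice may differ, and fast Chebyshev approximations are used at scale rather than a full eigendecomposition.

```python
import numpy as np

def graph_wavelet_descriptors(W, scales, signal):
    """Multi-scale descriptors from spectral graph wavelets on a mesh graph.
    W: symmetric adjacency (weight) matrix; scales: list of wavelet scales;
    signal: a per-vertex function (e.g. a coordinate or curvature).
    Suitable only for small meshes because of the dense eigensolve."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                          # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)                  # eigenvalues / eigenvectors
    coeffs = []
    for s in scales:
        g = (s * lam) * np.exp(-s * lam)        # band-pass kernel g(s * lambda)
        Psi = U @ np.diag(g) @ U.T              # wavelet operator at scale s
        coeffs.append(Psi @ signal)             # per-vertex wavelet coefficients
    return np.stack(coeffs, axis=1)             # one multi-scale descriptor per vertex
```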
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks
NASA Astrophysics Data System (ADS)
Langner, Tobias; Schindelhauer, Christian; Souza, Alexander
We consider an optimisation problem which is motivated by storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises of how it should be fragmented and distributed among the servers to grant "optimal" access to its contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
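The sketch below illustrates the scaling idea with a deliberately simplified transfer model (one parallel upload phase followed by n equal download rounds); this model and the binary search are assumptions for illustration only, whereas the paper's FlowScaling algorithm solves the exact problem in O(m log m).

```python
def feasible(T, F, ub, db, n, grid=1000):
    """Can a file of size F be distributed so that one parallel upload plus
    n parallel download rounds finish within time T?  Simplified model: the
    upload phase lasts T_u and each download round lasts (T - T_u)/n, so
    server i can hold at most min(ub[i]*T_u, db[i]*(T - T_u)/n) of the file."""
    for k in range(grid + 1):
        T_u = T * k / grid
        T_d = (T - T_u) / n
        if sum(min(u * T_u, d * T_d) for u, d in zip(ub, db)) >= F:
            return True
    return False

def min_transfer_time(F, ub, db, n, lo=0.0, hi=1e6, iters=60):
    """Binary search on the transfer-time bound using the feasibility test."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid, F, ub, db, n) else (mid, hi)
    return hi
```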
Effect of thematic map misclassification on landscape multi-metric assessment.
Kleindl, William J; Powell, Scott L; Hauer, F Richard
2015-06-01
Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions at these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate the floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimate the floodplain condition of sites with mixed land use.
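A hedged sketch of the Monte Carlo propagation step: pixel labels are resampled from a confusion matrix and the metric scores recomputed, so the spread of the index reflects map misclassification. The confusion-matrix convention, the equal metric weighting, and the function names are assumptions, not the study's exact procedure.

```python
import numpy as np

def mmi_uncertainty(land_cover, confusion, metric_fns, n_sims=1000, rng=None):
    """Propagate thematic misclassification error into a multi-metric index.
    land_cover: array of mapped class labels for one assessment site;
    confusion: row-stochastic matrix P(true class | mapped class) from an
    accuracy assessment; metric_fns: functions mapping a label array to a
    0-1 metric score."""
    rng = np.random.default_rng(rng)
    n_classes = confusion.shape[0]
    scores = np.empty((n_sims, len(metric_fns)))
    for s in range(n_sims):
        # resample each pixel's "true" class given its mapped class
        simulated = np.array([rng.choice(n_classes, p=confusion[c])
                              for c in land_cover])
        scores[s] = [m(simulated) for m in metric_fns]
    mmi = scores.mean(axis=1)                  # equal-weight multi-metric index
    return mmi.mean(), mmi.std()               # error-aware score and its spread
```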
Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien
2015-08-01
Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significantly associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
Neutrino masses from neutral top partners
NASA Astrophysics Data System (ADS)
Batell, Brian; McCullough, Matthew
2015-10-01
We present theories of "natural neutrinos" in which neutral fermionic top partner fields are simultaneously the right-handed neutrinos (RHN), linking seemingly disparate aspects of the Standard Model structure: (a) The RHN top partners are responsible for the observed small neutrino masses, (b) they help ameliorate the tuning in the weak scale and address the little hierarchy problem, and (c) the factor of 3 arising from Nc in the top-loop Higgs mass corrections is countered by a factor of 3 from the number of vectorlike generations of RHN. The RHN top partners may arise in pseudo-Nambu-Goldstone-Boson Higgs models such as the twin Higgs, as well as more general composite, little, and orbifold Higgs scenarios, and three simple example models are presented. This framework firmly predicts a TeV-scale seesaw, as the RHN masses are bounded to be below the TeV scale by naturalness. The generation of light neutrino masses relies on a collective breaking of the lepton number, allowing for comparatively large neutrino Yukawa couplings and a rich associated phenomenology. The structure of the neutrino mass mechanism realizes in certain limits the inverse or linear classes of seesaw. Natural neutrino models are testable at a variety of current and future experiments, particularly in tests of lepton universality, searches for lepton flavor violation, and precision electroweak and Higgs coupling measurements possible at high energy e+e- and hadron colliders.
Error due to unresolved scales in estimation problems for atmospheric data assimilation
NASA Astrophysics Data System (ADS)
Janjic, Tijana
The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the estimate of the projection of the state that is being estimated. It is demonstrated that the new method successfully overcomes previously encountered difficulties.
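A minimal sketch of the "traditional" filter described above: a standard Kalman update in which the observation-error covariance is augmented by a modeled covariance of the unresolved scales evaluated at the observation points. The matrix names are generic assumptions.

```python
import numpy as np

def traditional_filter_step(x, P, y, H, R_obs, R_unres, M, Q):
    """One forecast/analysis cycle.  x, P: analysis state and covariance;
    y: observations; H: observation operator; R_obs: instrument error
    covariance; R_unres: modeled covariance of the unresolved scales at the
    observation points; M, Q: linear model and model-error covariance."""
    # forecast
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # analysis with inflated observation error R = R_obs + R_unres
    R = R_obs + R_unres
    S = H @ P_f @ H.T + R
    K = np.linalg.solve(S, H @ P_f).T          # Kalman gain P_f H^T S^{-1}
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a
```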
A coupled theory for chemically active and deformable solids with mass diffusion and heat conduction
NASA Astrophysics Data System (ADS)
Zhang, Xiaolong; Zhong, Zheng
2017-10-01
To analyse the frequently encountered thermo-chemo-mechanical problems in chemically active material applications, we develop a thermodynamically-consistent continuum theory of coupled deformation, mass diffusion, heat conduction and chemical reaction. Basic balance equations of force, mass and energy are presented first, and then fully coupled constitutive laws interpreting multi-field interactions and evolution equations governing irreversible fluxes are constructed according to the energy dissipation inequality and the chemical kinetics. To account for the essential distinction between mass diffusion and chemical reactions in affecting the free energy and dissipation of a highly coupled system, we regard both the concentrations of diffusive species and the extent of reaction as independent state variables. This new formulation then distinguishes between the energy contribution from the diffusive species entering the solid and that from the subsequent chemical reactions occurring among these species and the host solid, which not only interact with stresses or strains in different manners and on different time scales, but also induce different variations of solid microstructures and material properties. Taking advantage of this new description, we further establish a specialized isothermal model to predict precisely the transient chemo-mechanical response of a swelling solid with a proposed volumetric constraint that accounts for material incompressibility. Coupled kinetics is incorporated to capture the volumetric swelling of the solid caused by imbibition of external species and the simultaneous dilation arising from chemical reactions between the diffusing species and the solid. The model is then exemplified with two numerical examples of transient swelling accompanied by chemical reaction. Various ratios of the characteristic times of diffusion and chemical reaction are taken into account to shed light on how the evolution patterns of a diffusion-reaction-controlled deformable solid depend on the kinetic time scales.
Satellite Imagery Analysis for Automated Global Food Security Forecasting
NASA Astrophysics Data System (ADS)
Moody, D.; Brumby, S. P.; Chartrand, R.; Keisler, R.; Mathis, M.; Beneke, C. M.; Nicholaeff, D.; Skillman, S.; Warren, M. S.; Poehnelt, J.
2017-12-01
The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. Cloud computing and storage, combined with recent advances in machine learning, are enabling understanding of the world at a scale and at a level of detail never before feasible. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and that can scale with the high-rate and dimensionality of imagery being collected. We focus on the problem of monitoring food crop productivity across the Middle East and North Africa, and show how an analysis-ready, multi-sensor data platform enables quick prototyping of satellite imagery analysis algorithms, from land use/land cover classification and natural resource mapping, to yearly and monthly vegetative health change trends at the structural field level.
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
NASA Astrophysics Data System (ADS)
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε⁻³)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ε⁻²) for the Milstein discretization, and to O(ε⁻²(log ε)²) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
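A bare-bones sketch of the MLMC estimator and of the coupled coarse/fine Euler-Maruyama pair that typically defines each correction level; the drift and diffusion functions, the two-to-one step ratio, and the function names are assumptions for illustration.

```python
import numpy as np

def mlmc_estimate(level_sampler, L, n_samples):
    """Sum of per-level sample means.  level_sampler(l, n) is assumed to
    return a numpy array of n samples of the correction Y_l = P_l - P_{l-1}
    (and of P_0 itself when l = 0)."""
    return sum(level_sampler(l, n_samples[l]).mean() for l in range(L + 1))

def euler_maruyama_pair(drift, diff, x0, T, nf, rng):
    """Coupled coarse/fine Euler-Maruyama paths sharing the same Brownian
    increments (nf fine steps, nf // 2 coarse steps), the usual building
    block of one MLMC correction level for a Langevin collision model."""
    dt_f = T / nf
    xf = xc = x0
    dW_c = 0.0
    for k in range(nf):
        dW = rng.normal(0.0, np.sqrt(dt_f))
        xf += drift(xf) * dt_f + diff(xf) * dW      # fine path
        dW_c += dW
        if k % 2 == 1:                              # coarse step every two fine steps
            xc += drift(xc) * 2 * dt_f + diff(xc) * dW_c
            dW_c = 0.0
    return xf, xc
```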
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission best achieved via a large collection of interacting simple systems, or relatively few highly capable, complex air vehicles). The vastly unexplored area of optimization in evolving design spaces will be studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors and the SoS architect.
A scalable multi-photon coincidence detector based on superconducting nanowires.
Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K
2018-06-04
Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on a superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
Quantifying urban river-aquifer fluid exchange processes: a multi-scale problem.
Ellis, Paul A; Mackay, Rae; Rivett, Michael O
2007-04-01
Groundwater-river exchanges in an urban setting have been investigated through long-term field monitoring and detailed modelling of a 7 km reach of the Tame river as it traverses the unconfined Triassic Sandstone aquifer that lies beneath the City of Birmingham, UK. Field investigations and numerical modelling have been completed at a range of spatial and temporal scales, from the metre to the kilometre scale and from event (hourly) to multi-annual time scales. The objective has been to quantify the spatial and temporal flow distributions governing mixing processes at the aquifer-river interface that can affect the chemical activity in the hyporheic zone of this urbanised river. The hyporheic zone is defined to be the zone of physical mixing of river and aquifer water. The results highlight the multi-scale controls that govern the fluid exchange distributions that influence the thickness of the mixing zone between urban rivers and groundwater and the patterns of groundwater flow through the bed of the river. The morphologies of the urban river bed and the adjacent river bank sediments are found to be particularly influential in developing the mixing zone at the interface between river and groundwater. Pressure transients in the river are also found to exert an influence on the velocity distribution in the bed material. Areas of significant mixing do not appear to be related to the areas of greatest groundwater discharge, and therefore this relationship requires further investigation to quantify the actual remedial capacity of the physical hyporheic zone.
Single-user MIMO system, Painlevé transcendents, and double scaling
NASA Astrophysics Data System (ADS)
Chen, Hongmei; Chen, Min; Blower, Gordon; Chen, Yang
2017-12-01
In this paper, we study a particular Painlevé V (denoted PV) that arises from multi-input multi-output wireless communication systems. Such a PV appears through its intimate relation with the Hankel determinant that describes the moment generating function (MGF) of the Shannon capacity. This originates through the multiplication of the Laguerre weight or the gamma density x^α e^{-x}, x > 0, for α > -1, by (1 + x/t)^λ with t > 0 a scaling parameter. Here the λ parameter "generates" the Shannon capacity; see Chen, Y. and McKay, M. R. [IEEE Trans. Inf. Theory 58, 4594-4634 (2012)]. It was found that the MGF has an integral representation as a functional of y(t) and y'(t), where y(t) satisfies the "classical form" of PV. In this paper, we consider the situation where n, the number of transmit antennas (or the size of the random matrix), tends to infinity and the signal-to-noise ratio, P, tends to infinity such that s = 4n²/P is finite. Under such double scaling, the MGF, effectively an infinite determinant, has an integral representation in terms of a "lesser" PIII. We also consider the situations where α = k + 1/2, k ∈ ℕ, and α ∈ {0, 1, 2, …}, λ ∈ {1, 2, …}, linking the relevant quantity to a solution of the two-dimensional sine-Gordon equation in radial coordinates and a certain discrete Painlevé II. From the large-n asymptotics of the orthogonal polynomials, which appear naturally, we obtain the double-scaled MGF for small and large s, together with the constant term in the large-s expansion. With the aid of these, we derive a number of cumulants and find that the capacity distribution function is non-Gaussian.
NASA Astrophysics Data System (ADS)
Trujillo, N. A.; Heath, J. E.; Mozley, P.; Dewers, T. A.; Cather, M.
2016-12-01
Assessment of caprock sealing behavior for secure CO2 storage is a multiscale endeavor. Sealing behavior arises from the nano-scale capillarity of pore throats, but sealing lithologies alone do not guarantee an effective seal since bypass systems, such as connected, conductive fractures, can compromise the integrity of the seal. We apply pore-to-formation-scale data to characterize the multiscale caprock sealing behavior of the Morrow shale and Thirteen Finger Limestone. This work is part of the Southwest Regional Partnership on Carbon Sequestration's Phase III project at the Farnsworth Unit, Texas. The caprock formations overlie the Morrow sandstone, the target for enhanced oil recovery and injection of over one million metric tons of anthropogenically-sourced CO2. Methods include: focused ion beam-scanning electron microscopy; laser scanning confocal microscopy; electron and optical petrography; multi-stress path mechanical testing and constitutive modeling; core examinations of sedimentary structures and fractures; and a noble gas profile for formation-scale transport of the sealing lithologies and the reservoir. We develop relationships between diagenetic characteristics of lithofacies and mechanical and petrophysical measurements of the caprocks. The results are applied as part of a caprock sealing behavior performance assessment. Funding for this project is provided by the U.S. Department of Energy's National Energy Technology Laboratory through the Southwest Regional Partnership on Carbon Sequestration (SWP) under Award No. DE-FC26-05NT42591. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Multi-provider architecture for cloud outsourcing of medical imaging repositories.
Godinho, Tiago Marques; Bastião Silva, Luís A; Costa, Carlos; Oliveira, José Luís
2014-01-01
Over the last few years, the extended usage of medical imaging procedures has drawn the medical community's attention to the optimization of their workflows. More recently, the federation of multiple institutions into a seamless distribution network has brought hope of higher-quality healthcare services along with more efficient resource management. As a result, medical institutions are constantly looking for the best infrastructure to deploy their imaging archives. In this scenario, public cloud infrastructures arise as major candidates, as they offer elastic storage space and high data availability in a pay-as-you-go model, without great maintenance costs or IT personnel requirements. However, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. This document proposes a multi-provider architecture for the integration of outsourced archives with in-house PACS resources, taking advantage of external providers to store medical imaging studies without disregarding security. It enables the retrieval of images from multiple archives simultaneously, improving performance and data availability and avoiding the vendor lock-in problem. Moreover, it enables load balancing and caching techniques.
Missing Modality Transfer Learning via Latent Low-Rank Constraint.
Ding, Zhengming; Shao, Ming; Fu, Yun
2015-11-01
Transfer learning is usually exploited to leverage previously well-learned source domain for evaluating the unknown target domain; however, it may fail if no target data are available in the training stage. This problem arises when the data are multi-modal. For example, the target domain is in one modality, while the source domain is in another. To overcome this, we first borrow an auxiliary database with complete modalities, then consider knowledge transfer across databases and across modalities within databases simultaneously in a unified framework. The contributions are threefold: 1) a latent factor is introduced to uncover the underlying structure of the missing modality from the known data; 2) transfer learning in two directions allows the data alignment between both modalities and databases, giving rise to a very promising recovery; and 3) an efficient solution with theoretical guarantees to the proposed latent low-rank transfer learning algorithm. Comprehensive experiments on multi-modal knowledge transfer with missing target modality verify that our method can successfully inherit knowledge from both auxiliary database and source modality, and therefore significantly improve the recognition performance even when test modality is inaccessible in the training stage.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fishman, S., E-mail: fishman@physics.technion.ac.il; Soffer, A., E-mail: soffer@math.rutgers.edu
2016-07-15
We employ the recently developed multi-time scale averaging method to study the large time behavior of slowly changing (in time) Hamiltonians. We treat some known cases in a new way, such as the Zener problem, and we give another proof of the adiabatic theorem in the gapless case. We prove a new uniform ergodic theorem for slowly changing unitary operators. This theorem is then used to derive the adiabatic theorem, do the scattering theory for such Hamiltonians, and prove some classical propagation estimates and asymptotic completeness.
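To make the adiabatic regime discussed above concrete, the following minimal Python sketch (not the paper's averaging method) integrates the Schrödinger equation for a two-level Hamiltonian with a slowly swept diagonal term, of the kind appearing in the Zener problem, and checks that a slower sweep keeps the state close to the instantaneous ground state; the sweep rates, gap and integration window are illustrative values.

    import numpy as np
    from scipy.integrate import solve_ivp

    def ground_state_population(v, delta=1.0, T=40.0):
        # H(t) = [[v*t, delta], [delta, -v*t]], diagonal term swept from -v*T to +v*T
        def rhs(t, psi):
            H = np.array([[v * t, delta], [delta, -v * t]], dtype=complex)
            return -1j * H @ psi
        # start in the instantaneous ground state at t = -T
        _, U0 = np.linalg.eigh(np.array([[-v * T, delta], [delta, v * T]], dtype=complex))
        sol = solve_ivp(rhs, (-T, T), U0[:, 0], rtol=1e-8, atol=1e-10)
        # overlap of the final state with the instantaneous ground state at t = +T
        _, UT = np.linalg.eigh(np.array([[v * T, delta], [delta, -v * T]], dtype=complex))
        return abs(np.vdot(UT[:, 0], sol.y[:, -1])) ** 2

    for v in (5.0, 0.05):   # fast sweep vs slow (nearly adiabatic) sweep
        print(f"sweep rate {v}: final ground-state population {ground_state_population(v):.3f}")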
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits some peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified with satellite imagery.
Predicting Upscaled Behavior of Aqueous Reactants in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Wright, E. E.; Hansen, S. K.; Bolster, D.; Richter, D. H.; Vesselinov, V. V.
2017-12-01
When modeling reactive transport, reaction rates are often overestimated due to the improper assumption of perfect mixing at the support scale of the transport model. In reality, fronts tend to form between participants in thermodynamically favorable reactions, leading to segregation of reactants into islands or fingers. When such a configuration arises, reactions are limited to the interface between the reactive solutes. Closure methods for estimating control-volume-effective reaction rates in terms of quantities defined at the control volume scale do not presently exist, but their development is crucial for effective field-scale modeling. We attack this problem through a combination of analytical and numerical means. Specifically, we numerically study reactive transport through an ensemble of realizations of two-dimensional heterogeneous porous media. We then employ regression analysis to calibrate an analytically-derived relationship between reaction rate and various dimensionless quantities representing conductivity-field heterogeneity and the respective strengths of diffusion, reaction and advection.
NASA Astrophysics Data System (ADS)
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors- GNAT) is favoured to obtain an optimal projection and a robust reduced model.
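As a concrete illustration of the first ingredient mentioned above, the following NumPy sketch builds a POD basis from a snapshot matrix by singular value decomposition and truncates it by an energy criterion; the snapshot data, sizes and tolerance are illustrative, and the hyper-reduction step (Gappy-POD or GNAT) is not shown.

    import numpy as np

    # Columns of the snapshot matrix are micro-scale solutions collected for different
    # macroscopic loadings (random data here, purely illustrative).
    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((2000, 50))          # n_dof x n_snapshots

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.9999)) + 1         # retain 99.99% of snapshot energy
    basis = U[:, :k]                                      # POD basis, n_dof x k

    # A Galerkin-type reduced model then seeks u ≈ basis @ q and solves the projected
    # residual basis.T @ r(basis @ q) = 0; hyper-reduction would additionally approximate r.
    print("reduced dimension:", k, "of", snapshots.shape[0])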
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be
At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid (dust), non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in a radiation-dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch the possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.
Neural decoding of collective wisdom with multi-brain computing.
Eckstein, Miguel P; Das, Koel; Pham, Binh T; Peterson, Matthew F; Abbey, Craig K; Sy, Jocelyn L; Giesbrecht, Barry
2012-01-02
Group decisions and even aggregation of multiple opinions lead to greater decision accuracy, a phenomenon known as collective wisdom. Little is known about the neural basis of collective wisdom and whether its benefits arise in late decision stages or in early sensory coding. Here, we use electroencephalography and multi-brain computing with twenty humans making perceptual decisions to show that combining neural activity across brains increases decision accuracy paralleling the improvements shown by aggregating the observers' opinions. Although the largest gains result from an optimal linear combination of neural decision variables across brains, a simpler neural majority decision rule, ubiquitous in human behavior, results in substantial benefits. In contrast, an extreme neural response rule, akin to a group following the most extreme opinion, results in the least improvement with group size. Analyses controlling for number of electrodes and time-points while increasing number of brains demonstrate unique benefits arising from integrating neural activity across different brains. The benefits of multi-brain integration are present in neural activity as early as 200 ms after stimulus presentation in lateral occipital sites and no additional benefits arise in decision related neural activity. Sensory-related neural activity can predict collective choices reached by aggregating individual opinions, voting results, and decision confidence as accurately as neural activity related to decision components. Estimation of the potential for the collective to execute fast decisions by combining information across numerous brains, a strategy prevalent in many animals, shows large time-savings. Together, the findings suggest that for perceptual decisions the neural activity supporting collective wisdom and decisions arises in early sensory stages and that many properties of collective cognition are explainable by the neural coding of information across multiple brains. Finally, our methods highlight the potential of multi-brain computing as a technique to rapidly and in parallel gather increased information about the environment as well as to access collective perceptual/cognitive choices and mental states. Copyright © 2011 Elsevier Inc. All rights reserved.
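The contrast between a neural majority rule and a linear combination of decision variables can be illustrated with a small simulation; the sketch below (illustrative sensitivities and trial counts, not the authors' EEG pipeline) generates one noisy decision variable per "brain" and compares single-observer, majority-vote and averaged-readout accuracies.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_brains, d_prime = 10000, 20, 0.5
    truth = rng.integers(0, 2, n_trials).astype(bool)            # stimulus class per trial
    # one noisy decision variable per brain, shifted by d_prime on signal trials
    dv = d_prime * truth[:, None] + rng.standard_normal((n_trials, n_brains))

    single = ((dv[:, 0] > d_prime / 2) == truth).mean()                    # one observer
    majority = (((dv > d_prime / 2).mean(axis=1) > 0.5) == truth).mean()   # vote across brains
    averaged = ((dv.mean(axis=1) > d_prime / 2) == truth).mean()           # equal-weight combination

    print("single brain accuracy  :", round(single, 3))
    print("neural majority rule   :", round(majority, 3))
    print("averaged decision var. :", round(averaged, 3))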
Fujisaki, Keisuke; Ikeda, Tomoyuki
2013-01-01
To connect models at different scales in the multi-scale problem of microwave use, equivalent material constants were investigated numerically using three-dimensional electromagnetic field analysis, taking into account eddy currents and displacement currents. A volume-averaged method and a standing-wave method were used to derive the equivalent material constants; water particles and aluminum particles were used as composite materials. The consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for the two methods; different electrical power is obtained for the two models. The difference in electromagnetic phenomena arises from the expression of the eddy current. For a small electrical conductivity, such as that of water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for a large electrical conductivity, such as that of aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constants derived from the volume-averaged method and the standing-wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395
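A minimal sketch of the volume-averaged step is given below; the phase fractions, permittivities and conductivities are illustrative, and the standing-wave method and the eddy-current modelling discussed in the abstract are not reproduced.

    # Volume-averaged (macro-model) constants for a two-phase mixture; values are illustrative.
    phases = [
        {"name": "water", "fraction": 0.3, "eps_r": 78.0, "sigma": 5.5e-6},   # conductivity in S/m
        {"name": "air",   "fraction": 0.7, "eps_r": 1.0,  "sigma": 0.0},
    ]
    eps_eq = sum(p["fraction"] * p["eps_r"] for p in phases)
    sigma_eq = sum(p["fraction"] * p["sigma"] for p in phases)
    print(f"equivalent relative permittivity: {eps_eq:.2f}")
    print(f"equivalent conductivity: {sigma_eq:.2e} S/m")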
Toward multiscale modelings of grain-fluid systems
NASA Astrophysics Data System (ADS)
Chareyre, Bruno; Yuan, Chao; Montella, Eduard P.; Salager, Simon
2017-06-01
Computationally efficient methods have been developed for simulating partially saturated granular materials in the pendular regime. In contrast, one can hardly avoid expensive direct resolution of the two-phase fluid dynamics problem for mixed pendular-funicular situations or even saturated regimes. Following previous developments for single-phase flow, a pore-network approach to the coupling problems is described. The geometry and movements of phases and interfaces are described on the basis of a tetrahedrization of the pore space, introducing elementary objects such as bridges, menisci, pore bodies and pore throats, together with local rules of evolution. As firmly established local rules are still missing on some aspects (entry capillary pressure and pore-scale pressure-saturation relations, forces on the grains, or kinetics of transfers in mixed situations), a multi-scale numerical framework is introduced, enhancing the pore-network approach with the help of direct simulations. Small subsets of a granular system are extracted, in which multiphase scenarios are solved using the Lattice-Boltzmann method (LBM). In turn, a global problem is assembled and solved at the network scale, as illustrated by a simulated primary drainage.
ERIC Educational Resources Information Center
Schoel, Jim; Butler, Steve; Murray, Mark; Gass, Mike; Carrick, Moe
2001-01-01
Presents five group problem-solving initiatives for use in adventure and experiential settings, focusing on conflict resolution, corporate workplace issues, or adjustment to change. Includes target group, group size, time and space needs, activity level, overview, goals, props, instructions, and suggestions for framing and debriefing the…
ERIC Educational Resources Information Center
Gamst-Klaussen, Thor; Rasmussen, Lene-Mari P.; Svartdal, Frode; Strømgren, Børge
2016-01-01
The Social Skills Improvement System-Rating Scales (SSIS-RS) is a multi-informant instrument assessing social skills and problem behavior in children and adolescents. It is a revised version of the Social Skills Rating System (SSRS). A Norwegian translation of the SSRS has been validated, but this has not yet been done for the Norwegian…
A blended continuous–discontinuous finite element method for solving the multi-fluid plasma model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousa, E.M., E-mail: sousae@uw.edu; Shumlak, U., E-mail: shumlak@uw.edu
The multi-fluid plasma model represents electrons, multiple ion species, and multiple neutral species as separate fluids that interact through short-range collisions and long-range electromagnetic fields. The model spans a large range of temporal and spatial scales, which renders the model stiff and presents numerical challenges. To address the large range of timescales, a blended continuous and discontinuous Galerkin method is proposed, where the massive ion and neutral species are modeled using an explicit discontinuous Galerkin method while the electrons and electromagnetic fields are modeled using an implicit continuous Galerkin method. This approach is able to capture large-gradient ion and neutral physics like shock formation, while resolving high-frequency electron dynamics in a computationally efficient manner. The details of the Blended Finite Element Method (BFEM) are presented. The numerical method is benchmarked for accuracy and tested using a two-fluid one-dimensional soliton problem and an electromagnetic shock problem. The results are compared to conventional finite volume and finite element methods, and demonstrate that the BFEM is particularly effective in resolving physics in stiff problems involving realistic physical parameters, including realistic electron mass and speed of light. The benefit is illustrated by computing a three-fluid plasma application that demonstrates species separation in multi-component plasmas.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes, and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.
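A minimal sketch of the idea, using SciPy's GMRES with an ILU preconditioner on a sparse stand-in for the linearized implicit operator at one time step (the matrix and sizes are illustrative, not a Navier-Stokes discretization), is given below.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    n = 500
    # sparse stand-in for the linearized implicit operator (I - dt*J) at one time step
    A = sp.diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # incomplete-LU preconditioner, a common companion of GMRES for stiff implicit systems
    ilu = spilu(A)
    M = LinearOperator((n, n), matvec=ilu.solve)

    x, info = gmres(A, b, M=M, restart=30, maxiter=200)
    print("gmres info:", info, " residual norm:", np.linalg.norm(b - A @ x))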
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
Evaluating multi-level models to test occupancy state responses of Plethodontid salamanders
Kroll, Andrew J.; Garcia, Tiffany S.; Jones, Jay E.; Dugger, Catherine; Murden, Blake; Johnson, Josh; Peerman, Summer; Brintz, Ben; Rochelle, Michael
2015-01-01
Plethodontid salamanders are diverse and widely distributed taxa and play critical roles in ecosystem processes. Due to salamander use of structurally complex habitats, and because only a portion of a population is available for sampling, evaluation of sampling designs and estimators is critical to provide strong inference about Plethodontid ecology and responses to conservation and management activities. We conducted a simulation study to evaluate the effectiveness of multi-scale and hierarchical single-scale occupancy models in the context of a Before-After Control-Impact (BACI) experimental design with multiple levels of sampling. Also, we fit the hierarchical single-scale model to empirical data collected for Oregon slender and Ensatina salamanders across two years on 66 forest stands in the Cascade Range, Oregon, USA. All models were fit within a Bayesian framework. Estimator precision in both models improved with increasing numbers of primary and secondary sampling units, underscoring the potential gains accrued when adding secondary sampling units. Both models showed evidence of estimator bias at low detection probabilities and low sample sizes; this problem was particularly acute for the multi-scale model. Our results suggested that sufficient sample sizes at both the primary and secondary sampling levels could ameliorate this issue. Empirical data indicated Oregon slender salamander occupancy was associated strongly with the amount of coarse woody debris (posterior mean = 0.74; SD = 0.24); Ensatina occupancy was not associated with amount of coarse woody debris (posterior mean = -0.01; SD = 0.29). Our simulation results indicate that either model is suitable for use in an experimental study of Plethodontid salamanders provided that sample sizes are sufficiently large. However, hierarchical single-scale and multi-scale models describe different processes and estimate different parameters. As a result, we recommend careful consideration of study questions and objectives prior to sampling data and fitting models.
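The detection-bias problem that motivates occupancy modelling can be illustrated with a short simulation; the sketch below (illustrative occupancy and detection probabilities, not the authors' Bayesian multi-scale models) shows how the naive occupancy estimate drops below the true value as per-visit detection probability falls.

    import numpy as np

    rng = np.random.default_rng(2)
    n_stands, n_subplots, psi = 66, 5, 0.6          # primary units, secondary units, true occupancy

    for p in (0.8, 0.4, 0.1):                       # per-subplot detection probability
        occupied = rng.random(n_stands) < psi
        detections = rng.binomial(n_subplots, p, n_stands) * occupied
        naive = np.mean(detections > 0)             # occupancy estimate ignoring imperfect detection
        print(f"detection p={p:.1f}  true psi={psi:.2f}  naive estimate={naive:.2f}")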
Emissivity measurements of shocked tin using a multi-wavelength integrating sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifter, A; Holtkamp, D B; Iverson, A J
Pyrometric measurements of radiance to determine temperature have been performed on shock physics experiments for decades. However, multi-wavelength pyrometry schemes sometimes fail to provide credible temperatures in experiments, which incur unknown changes in sample emissivity, because an emissivity change also affects the spectral radiance. Hence, for shock physics experiments using pyrometry to measure temperatures, it is essential to determine the dynamic sample emissivity. The most robust way to determine the normal spectral emissivity is to measure the spectral normal-hemispherical reflectance using an integrating sphere. In this paper we describe a multi-wavelength (1.6–5.0 μm) integrating sphere system that utilizes a “reversed” scheme, which we use for shock physics experiments. The sample to be shocked is illuminated uniformly by scattering broadband light from inside a sphere onto the sample. A portion of the light reflected from the sample is detected at a point 12° from normal to the sample surface. For this experiment, we used the system to measure emissivity of shocked tin at four wavelengths for shock stress values between 17 and 33 GPa. The results indicate a large increase in effective emissivity upon shock release from tin when the shock is above 24–25 GPa, a shock stress that partially melts the sample. We also recorded an IR image of one of the shocked samples through the integrating sphere, and the emissivity inferred from the image agreed well with the integrating-sphere, pyrometer-detector data. Here, we discuss experimental data, uncertainties, and a data analysis process. We also describe unique emissivity-measurement problems arising from shock experiments and methods to overcome such problems.
An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH
NASA Astrophysics Data System (ADS)
Lee, D.; Gopal, S.; Mohapatra, P.
2012-07-01
We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
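The JFNK idea, advancing a stiff system with an implicit step whose nonlinear residual is solved matrix-free by a Newton-Krylov method, can be sketched with SciPy's newton_krylov; the two-species reaction rates and the time step below are illustrative and the sketch is unrelated to FLASH's actual implementation.

    import numpy as np
    from scipy.optimize import newton_krylov

    # One backward-Euler step of a stiff two-species reaction system, solved matrix-free.
    dt = 0.1
    u_old = np.array([1.0, 0.0])

    def f(u):                                # stiff right-hand side with illustrative rates
        return np.array([-1000.0 * u[0] + u[1],
                          1000.0 * u[0] - 2.0 * u[1]])

    def residual(u_new):                     # implicit residual R(u) = u - u_old - dt*f(u)
        return u_new - u_old - dt * f(u_new)

    u_new = newton_krylov(residual, u_old, f_tol=1e-10)
    print("state after one implicit step:", u_new)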
Approximate registration of point clouds with large scale differences
NASA Astrophysics Data System (ADS)
Novak, D.; Schindler, K.
2013-10-01
3D reconstruction of objects is a basic task in many fields, including surveying, engineering, entertainment and cultural heritage. The task is nowadays often accomplished with a laser scanner, which produces dense point clouds, but lacks accurate colour information, and lacks per-point accuracy measures. An obvious solution is to combine laser scanning with photogrammetric recording. In that context, the problem arises to register the two datasets, which feature large scale, translation and rotation differences. The absence of approximate registration parameters (3D translation, 3D rotation and scale) precludes the use of fine-registration methods such as ICP. Here, we present a method to register realistic photogrammetric and laser point clouds in a fully automated fashion. The proposed method decomposes the registration into a sequence of simpler steps: first, two rotation angles are determined by finding dominant surface normal directions, then the remaining parameters are found with RANSAC followed by ICP and scale refinement. These two steps are carried out at low resolution, before computing a precise final registration at higher resolution.
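A hedged sketch of the first step, estimating a dominant surface normal by principal component analysis and building the rotation that aligns the two normals via the Rodrigues formula, is given below; the synthetic planar clouds, the scale factor and the assumption of a single dominant plane are illustrative simplifications of the published method.

    import numpy as np

    def plane_normal(points):
        # smallest principal axis of the cloud, used as a proxy for the dominant surface normal
        centred = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return vt[-1]

    def rotation_aligning(a, b):
        # Rodrigues rotation taking unit vector a onto unit vector b (undefined for a == -b)
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        v, c = np.cross(a, b), np.dot(a, b)
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        return np.eye(3) + K + K @ K / (1.0 + c)

    # synthetic example: a noisy ground plane seen in two frames with different scale and rotation
    rng = np.random.default_rng(3)
    laser = np.column_stack([rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000),
                             0.02 * rng.standard_normal(1000)])
    R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(R_true) < 0:
        R_true[:, 0] *= -1
    photo = 0.2 * laser @ R_true.T

    R_approx = rotation_aligning(plane_normal(photo), plane_normal(laser))
    print("photogrammetric normal after rotation:", np.round(R_approx @ plane_normal(photo), 3))
    print("laser normal:                         ", np.round(plane_normal(laser), 3))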
Orbiter entry aerothermodynamics
NASA Technical Reports Server (NTRS)
Ried, R. C.
1985-01-01
The challenge in the definition of the entry aerothermodynamic environment arising from the challenge of a reliable and reusable Orbiter is reviewed in light of the existing technology. Select problems pertinent to the orbiter development are discussed with reference to comprehensive treatments. These problems include boundary layer transition, leeward-side heating, shock/shock interaction scaling, tile gap heating, and nonequilibrium effects such as surface catalysis. Sample measurements obtained from test flights of the Orbiter are presented with comparison to preflight expectations. Numerical and wind tunnel simulations gave efficient information for defining the entry environment and an adequate level of preflight confidence. The high quality flight data provide an opportunity to refine the operational capability of the orbiter and serve as a benchmark both for the development of aerothermodynamic technology and for use in meeting future entry heating challenges.
Optimistic barrier synchronization
NASA Technical Reports Server (NTRS)
Nicol, David M.
1992-01-01
Barrier synchronization is a fundamental operation in parallel computation. In many contexts, at the point a processor enters a barrier it knows that it has already processed all the work required of it prior to synchronization. The alternative case, when a processor cannot enter a barrier with the assurance that it has already performed all the necessary pre-synchronization computation, is treated. The problem arises when the number of pre-synchronization messages to be received by a processor is unknown, for example, in a parallel discrete simulation or any other computation that is largely driven by an unpredictable exchange of messages. We describe an optimistic O(log² P) barrier algorithm for such problems, study its performance on a large-scale parallel system, and consider extensions to general associative reductions as well as associative parallel prefix computations.
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
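A small illustration of trust-region nonlinear least squares applied to a multiple-output network is sketched below using SciPy's least_squares with the trust-region-reflective method; the tiny MLP, data and sizes are illustrative and the sketch does not exploit the block-angular sparsity discussed in the paper.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, (200, 3))                         # inputs
    Y = np.column_stack([np.sin(X @ [1.0, -0.5, 0.3]),       # two outputs (multiple-output model)
                         np.cos(X @ [0.2, 0.8, -1.0])])

    n_in, n_hid, n_out = 3, 8, 2
    shapes = [(n_in, n_hid), (n_hid,), (n_hid, n_out), (n_out,)]
    sizes = [int(np.prod(s)) for s in shapes]

    def unpack(w):
        parts, i = [], 0
        for shape, size in zip(shapes, sizes):
            parts.append(w[i:i + size].reshape(shape))
            i += size
        return parts

    def residuals(w):                                        # stacked residuals over all outputs
        W1, b1, W2, b2 = unpack(w)
        pred = np.tanh(X @ W1 + b1) @ W2 + b2
        return (pred - Y).ravel()

    w0 = 0.1 * rng.standard_normal(sum(sizes))
    fit = least_squares(residuals, w0, method="trf")         # trust-region reflective solver
    print("final sum of squared residuals:", 2 * fit.cost)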
NASA Astrophysics Data System (ADS)
Li, Jie; Guo, LiXin; He, Qiong; Wei, Bing
2012-10-01
An iterative strategy combining the Kirchhoff approximation (KA) with the hybrid finite element-boundary integral (FE-BI) method is presented in this paper to study the interactions between an inhomogeneous object and the underlying rough surface. KA is applied to study scattering from the underlying rough surface, whereas FE-BI deals with scattering from the target above it. Both methods use updated excitation sources. The Huygens equivalence principle and an iterative strategy are employed to account for the multi-scattering effects. This hybrid FE-BI-KA scheme is an improved and generalized version of the previous hybrid Kirchhoff approximation-method of moments (KA-MoM) scheme. This newly presented hybrid method has the following advantages: (1) the feasibility of modeling multi-scale scattering problems (large-scale underlying surface and small-scale target); (2) low memory requirement, as in hybrid KA-MoM; (3) the ability to deal with scattering from inhomogeneous (including coated or layered) scatterers above rough surfaces. Numerical results are given to evaluate the accuracy of the multi-hybrid technique; the computing time and memory requirements consumed in specific numerical simulations of FE-BI-KA are compared with those of MoM. The convergence performance is analyzed by studying the variation of the iteration number caused by related parameters. Then bistatic scattering from inhomogeneous objects of different configurations above a dielectric Gaussian rough surface is calculated and the influences of dielectric composition and surface roughness on the scattering pattern are discussed.
NASA Astrophysics Data System (ADS)
Kamiran, N.; Sarker, M. L. R.
2014-02-01
The land use/land cover transformation in Malaysia is enormous due to oil palm plantations, which have provided huge economic benefits but also created serious concerns about carbon emissions and biodiversity. Accurate information about oil palm plantations and their age is important for sustainable production, estimation of carbon storage capacity, biodiversity assessment and climate modelling. However, this information cannot be extracted easily because the spectral signatures of forest and of different age groups of oil palm plantations are similar. Therefore, a novel "multi-scale and multi-texture" approach was used for mapping vegetation and different age groups of oil palm plantations from a high-resolution panchromatic image (WorldView-1), considering that pan imagery has the potential for more detailed and accurate mapping when combined with an effective image processing technique. Seven second-order Grey Level Co-occurrence Matrix (GLCM) texture algorithms with different scales (from 3×3 to 39×39) were used for texture generation. All texture parameters were classified step by step using a robust classifier, the Artificial Neural Network (ANN). Results indicate that a single spectral band was unable to provide a good result (overall accuracy = 34.92%), while higher overall classification accuracies (73.48%, 84.76% and 93.18%) were obtained when textural information from the multi-scale and multi-texture approach was used in the classification algorithm.
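A hedged sketch of GLCM texture feature extraction is given below using scikit-image (the functions are named greycomatrix/greycoprops in older releases); the random patch, grey-level count and offsets stand in for the multi-scale windows of the study, and the resulting feature vector would feed the ANN classifier.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # greycomatrix/greycoprops in older releases

    rng = np.random.default_rng(5)
    patch = (rng.random((39, 39)) * 63).astype(np.uint8)     # stand-in for one window of the pan image

    # GLCMs at several offsets and orientations, a simple stand-in for the multi-scale idea
    glcm = graycomatrix(patch, distances=[1, 2, 4, 8],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=64, symmetric=True, normed=True)

    props = ("contrast", "homogeneity", "energy", "correlation")
    feature_vector = np.concatenate([graycoprops(glcm, p).ravel() for p in props])
    print("texture feature vector length:", feature_vector.size)   # would feed the ANN classifier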
Statistical mechanics of competitive resource allocation using agent-based models
NASA Astrophysics Data System (ADS)
Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.
2015-01-01
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agent with non-linear interaction, they provide a prospective unifying paradigm for many scientific disciplines.
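As an example of the kind of model reviewed here, the following short simulation implements a basic Minority Game with random strategy tables and virtual scoring; the numbers of agents, strategies, memory bits and rounds are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    N, S, m, T = 301, 2, 3, 2000                 # agents (odd), strategies, memory bits, rounds
    strategies = rng.choice([-1, 1], size=(N, S, 2 ** m))   # lookup tables: history index -> action
    scores = np.zeros((N, S))
    history, attendance = 0, []

    for _ in range(T):
        best = scores.argmax(axis=1)                          # each agent plays its best strategy
        actions = strategies[np.arange(N), best, history]
        A = actions.sum()
        attendance.append(A)
        winning = -np.sign(A)                                 # minority side wins
        scores += (strategies[:, :, history] == winning)      # virtual payoff for every strategy
        bit = 1 if winning > 0 else 0
        history = ((history << 1) | bit) & (2 ** m - 1)

    print("mean attendance:", np.mean(attendance))
    print("volatility sigma^2/N:", np.var(attendance) / N)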
NASA Astrophysics Data System (ADS)
Habib, Ahmed S.; Bradley, D. A.; Regan, P. H.; Shutt, A. L.
2010-07-01
The accumulation of scales in production pipes is a common problem in the oil industry, reducing fluid flow and also leading to costly remedies and disposal issues. Typical materials found in such scale are sulphates and carbonates of calcium and barium, or iron sulphide. Radium arising from the uranium/thorium present in oil-bearing rock formations may replace the barium or calcium in these salts to form radium salts. This creates what is known as technologically enhanced naturally occurring radioactive material (TENORM or simply NORM). NORM is a serious environmental and health and safety issue arising from commercial oil and gas extraction operations. Whilst a good deal has been published on the characterisation and measurement of radioactive scales from offshore oil production, little information has been published regarding NORM associated with land-based facilities such as that of the Libyan oil industry. The ongoing investigation described in this paper concerns an assessment of NORM from a number of land based Libyan oil fields. A total of 27 pipe scale samples were collected from eight oil fields, from different locations in Libya. The dose rates, measured using a handheld survey meter positioned on sample surfaces, ranged from 0.1-27.3 μSv h⁻¹. In the initial evaluations of the sample activity, use is being made of a portable HPGe based spectrometry system. To comply with the prevailing safety regulations of the University of Surrey, the samples are being counted in their original form, creating a need for correction of non-homogeneous sample geometries. To derive a detection efficiency based on the actual sample geometries, a technique has been developed using a Monte Carlo particle transport code (MCNPX). A preliminary activity determination has been performed using an HPGe portable detector system.
Alternative experiments using the geophysical fluid flow cell
NASA Technical Reports Server (NTRS)
Hart, J. E.
1984-01-01
This study addresses the possibility of doing large scale dynamics experiments using the Geophysical Fluid Flow Cell. In particular, cases where the forcing generates a statically stable stratification almost everywhere in the spherical shell are evaluated. This situation is typical of the Earth's atmosphere and oceans. By calculating the strongest meridional circulation expected in the spacelab experiments, and testing its stability using quasi-geostrophic stability theory, it is shown that strongly nonlinear baroclinic waves on a zonally symmetric modified thermal wind will not occur. The Geophysical Fluid Flow Cell does not have a deep enough fluid layer to permit useful studies of large scale planetary wave processes arising from instability. It is argued, however, that by introducing suitable meridional barriers, a significant contribution to the understanding of the oceanic thermocline problem could be made.
Multi-dimensional optical and laser-based diagnostics of low-temperature ionized plasma discharges
Barnat, Edward V.
2011-09-15
In this paper, a review of work centered on the utilization of multi-dimensional optical diagnostics to study phenomena arising in radiofrequency plasma discharges is given. The diagnostics range from passive techniques such as optical emission to more active techniques utilizing nanosecond lasers capable of both high temporal and spatial resolution. In this review, emphasis is placed on observations that would have been more difficult, if not impossible, to make without the use of such diagnostic techniques. Examples include the sheath structure around an electrode consisting of two different metals, double layers that arise in magnetized hydrogen discharges, or a large region of depleted argon 1s4 levels around a biased probe in an rf discharge.
Postmus, Douwe; Tervonen, Tommi; van Valkenhoef, Gert; Hillege, Hans L; Buskens, Erik
2014-09-01
A standard practice in health economic evaluation is to monetize health effects by assuming a certain societal willingness-to-pay per unit of health gain. Although the resulting net monetary benefit (NMB) is easy to compute, the use of a single willingness-to-pay threshold assumes expressibility of the health effects on a single non-monetary scale. To relax this assumption, this article proves that the NMB framework is a special case of the more general stochastic multi-criteria acceptability analysis (SMAA) method. Specifically, as SMAA does not restrict the number of criteria to two and also does not require the marginal rates of substitution to be constant, there are problem instances for which the use of this more general method may result in a better understanding of the trade-offs underlying the reimbursement decision-making problem. This is illustrated by applying both methods in a case study related to infertility treatment.
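The NMB computation itself is a one-liner, as the hedged sketch below shows for illustrative incremental costs and effects across several willingness-to-pay thresholds; SMAA has no such closed form and is therefore not sketched.

    # Net monetary benefit for a hypothetical treatment comparison (illustrative numbers).
    delta_effect = 0.35        # incremental health effect, e.g. QALYs gained
    delta_cost = 12000.0       # incremental cost

    for wtp in (20000, 50000, 80000):             # societal willingness-to-pay per unit of effect
        nmb = wtp * delta_effect - delta_cost
        print(f"WTP {wtp:>6} -> NMB = {nmb:>8.0f}")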
Very light dilaton and naturally light Higgs boson
NASA Astrophysics Data System (ADS)
Hong, Deog Ki
2018-02-01
We study a very light dilaton, arising from a scale-invariant ultraviolet theory of the Higgs sector in the standard model of particle physics. Imposing the scale symmetry below the ultraviolet scale of the Higgs sector, we alleviate the fine-tuning problem associated with the Higgs mass. When the electroweak symmetry is spontaneously broken radiatively à la Coleman-Weinberg, the dilaton develops a vacuum expectation value away from the origin to give an extra contribution to the Higgs potential so that the Higgs mass becomes naturally around the electroweak scale. The ultraviolet scale of the Higgs sector can therefore be much higher than the electroweak scale, as the dilaton drives the Higgs mass to the electroweak scale. We also show that the light dilaton in this scenario is a good candidate for dark matter of mass m_D ~ 1 eV - 10 keV, if the ultraviolet scale is about 10-100 TeV. Finally we propose a dilaton-assisted composite Higgs model to realize our scenario. In addition to the light dilaton the model predicts a heavy U(1) axial vector boson and two massive, oppositely charged, pseudo Nambu-Goldstone bosons, which might be accessible at the LHC.
An Activity-Theoretic Approach to Multi-Touch Tools in Early Mathematics Learning
ERIC Educational Resources Information Center
Ladel, Silke; Kortenkamp, Ulrich
2013-01-01
In this article we present an activity-theory-based framework that can capture the complex situations that arise when modern technologies like multi-touch devices are introduced in classroom situations. As these devices are able to cover more activities than traditional, even computer-based, media, we have to accept that they now take a…
Three-Level Models for Indirect Effects in School- and Class-Randomized Experiments in Education
ERIC Educational Resources Information Center
Pituch, Keenan A.; Murphy, Daniel L.; Tate, Richard L.
2009-01-01
Due to the clustered nature of field data, multi-level modeling has become commonly used to analyze data arising from educational field experiments. While recent methodological literature has focused on multi-level mediation analysis, relatively little attention has been devoted to mediation analysis when three levels (e.g., student, class,…
Absence of Asymptotic Freedom in Doped Mott Insulators: Breakdown of Strong Coupling Expansions
NASA Astrophysics Data System (ADS)
Phillips, Philip; Galanakis, Dimitrios; Stanescu, Tudor D.
2004-12-01
We show that doped Mott insulators such as the copper-oxide superconductors are asymptotically slaved in that the quasiparticle weight Z near half-filling depends critically on the existence of the high-energy scale set by the upper Hubbard band. In particular, near half-filling, the following dichotomy arises: Z≠0 when the high-energy scale is integrated out but Z=0 in the thermodynamic limit when it is retained. Slavery to the high-energy scale arises from quantum interference between electronic excitations across the Mott gap. Broad spectral features seen in photoemission in the normal state of the cuprates are argued to arise from high-energy slavery.
Early-onset Conduct Problems: Predictions from daring temperament and risk taking behavior.
Bai, Sunhye; Lee, Steve S
2017-12-01
Given its considerable public health significance, identifying predictors of early expressions of conduct problems is a priority. We examined the predictive validity of daring, a key dimension of temperament, and the Balloon Analog Risk Task (BART), a laboratory-based measure of risk taking behavior, with respect to two-year change in parent-, teacher-, and youth self-reported oppositional defiant disorder (ODD), conduct disorder (CD), and antisocial behavior. At baseline, 150 ethnically diverse 6- to 10-year-old (M = 7.8, SD = 1.1; 69.3% male) youth with (n = 82) and without (n = 68) DSM-IV ADHD completed the BART, whereas parents rated youth temperament (i.e., daring); parents and teachers also independently rated youth ODD and CD symptoms. Approximately 2 years later, multi-informant ratings of youth ODD, CD, and antisocial behavior were gathered from rating scales and interviews. Whereas risk taking on the BART was unrelated to conduct problems, individual differences in daring prospectively predicted multi-informant rated conduct problems, independent of baseline risk taking, conduct problems, and ADHD diagnostic status. Early differences in the propensity to show positive socio-emotional responses to risky or novel experiences uniquely predicted escalating conduct problems in childhood, even with control of other potent clinical correlates. We consider the role of temperament in the origins and development of significant conduct problems from childhood to adolescence, including possible explanatory mechanisms underlying these predictions.
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
NASA Astrophysics Data System (ADS)
LIU, Yiping; XU, Qing; ZHANG, Heng; LV, Liang; LU, Wanjie; WANG, Dandi
2016-11-01
The purpose of this paper is to solve the problems of traditional single-purpose systems for interpretation and draughting, such as inconsistent standards, limited functionality, dependence on plug-ins, closed architecture and a low level of integration. On the basis of a comprehensive analysis of the composition of target elements, map representation and the features of similar systems, an integrated 3D interpretation and draughting service platform for multi-source, multi-scale and multi-resolution geospatial objects is established based on HTML5 and WebGL. The platform not only integrates object recognition, access, retrieval, three-dimensional display and test evaluation, but also supports the collection, transfer, storage, refreshing and maintenance of geospatial object data, and shows promising prospects and potential for growth.
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, it has become a trend to improve the performance of dim target detection by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from a digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from different spectral images. A top-hat filter is used to detect the dim target from the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
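A hedged sketch of this kind of pipeline, using OpenCV Gabor kernels, box-filter local contrast, maximum-salience fusion and a morphological top-hat (the kernel sizes, filter parameters and synthetic two-band test images are illustrative, not the paper's exact construction), is given below.

    import cv2
    import numpy as np

    rng = np.random.default_rng(7)
    bands = []
    for _ in range(2):                                     # two spectral images of the same scene
        img = rng.normal(100, 10, (128, 128)).astype(np.float32)
        img[60:63, 60:63] += 25.0                          # dim target on a cluttered background
        bands.append(img)

    def salient_graph(img):
        gabor = np.zeros_like(img)
        for theta in np.arange(0, np.pi, np.pi / 4):       # multi-direction Gabor responses
            k = cv2.getGaborKernel((9, 9), sigma=2.0, theta=theta, lambd=6.0, gamma=0.5)
            gabor = np.maximum(gabor, np.abs(cv2.filter2D(img, cv2.CV_32F, k)))
        contrast = np.zeros_like(img)
        for ksize in (5, 9, 15):                           # multi-scale local contrast
            contrast = np.maximum(contrast, np.abs(img - cv2.blur(img, (ksize, ksize))))
        return gabor * contrast

    fused = np.maximum.reduce([salient_graph(b) for b in bands])    # maximum-salience fusion
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    detection = cv2.morphologyEx(fused, cv2.MORPH_TOPHAT, kernel)   # isolate the small bright target
    print("peak response at", np.unravel_index(detection.argmax(), detection.shape))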
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economical incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
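The operational core of the scheduling problem can be illustrated by a small linear program: choose an hourly load profile that minimizes electricity cost subject to power limits and a daily production requirement. The sketch below uses SciPy's linprog with illustrative prices and plant limits; it does not reproduce the mode formulation, the robust model or the stochastic decomposition schemes of the thesis.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(8)
    price = 40 + 25 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 3, 24)  # hourly prices

    p_min, p_max, daily_energy = 20.0, 100.0, 1600.0      # MW limits and required MWh per day

    res = linprog(c=price,                                 # minimize sum_t price_t * p_t
                  A_eq=np.ones((1, 24)), b_eq=[daily_energy],
                  bounds=[(p_min, p_max)] * 24,
                  method="highs")
    print("optimal daily cost:", round(res.fun, 1))
    print("hourly load profile:", np.round(res.x, 1))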
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. This possibility must therefore be taken into account when computing the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual fully-connected hypothesis. By fully-connected, one means that the information in a fully-connected system is distributed everywhere at the same moment, which is unacceptable for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that it is still possible to manage with decentralized assignment control even though the system is not fully-connected.
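A minimal sketch of a greedy sensor-to-target assignment of the kind described above is given below; the gain matrix is random and stands in for a local estimate of information gain, and the rule that every target is covered before sensors double up is an illustrative simplification.

    import numpy as np

    rng = np.random.default_rng(9)
    n_sensors, n_targets = 6, 4
    gain = rng.random((n_sensors, n_targets))     # local utility of assigning sensor i to target j

    assignment, free_sensors = {}, set(range(n_sensors))
    while free_sensors:
        covered = set(assignment.values())
        # pick the best remaining pair; sensors double up on a target only once all targets are covered
        candidates = [(i, j) for i in free_sensors for j in range(n_targets)
                      if len(covered) == n_targets or j not in covered]
        i_best, j_best = max(candidates, key=lambda ij: gain[ij])
        assignment[i_best] = j_best
        free_sensors.discard(i_best)

    print("sensor -> target:", assignment)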
Discontinuities, cross-scale patterns, and the organization of ecosystems
Nash, Kirsty L.; Allen, Craig R.; Angeler, David G.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Knutson, Melinda; Nelson, R. John; Nystrom, Magnus; Stow, Craig A.; Sandstrom, Shana M.
2014-01-01
Ecological structures and processes occur at specific spatiotemporal scales, and interactions that occur across multiple scales mediate scale-specific (e.g., individual, community, local, or regional) responses to disturbance. Despite the importance of scale, explicitly incorporating a multi-scale perspective into research and management actions remains a challenge. The discontinuity hypothesis provides a fertile avenue for addressing this problem by linking measureable proxies to inherent scales of structure within ecosystems. Here we outline the conceptual framework underlying discontinuities and review the evidence supporting the discontinuity hypothesis in ecological systems. Next we explore the utility of this approach for understanding cross-scale patterns and the organization of ecosystems by describing recent advances for examining nonlinear responses to disturbance and phenomena such as extinctions, invasions, and resilience. To stimulate new research, we present methods for performing discontinuity analysis, detail outstanding knowledge gaps, and discuss potential approaches for addressing these gaps.
Transition between inverse and direct energy cascades in multiscale optical turbulence.
Malkin, V M; Fisch, N J
2018-03-01
Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions primarily occur between pulsations of comparable scales) and scale-invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrodinger equation exhibits breaking of both the Kolmogorov locality and scale-invariance. A weaker form of spectral locality that holds for multi-scale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to a single scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. Then, we find the analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.
Transition between inverse and direct energy cascades in multiscale optical turbulence
NASA Astrophysics Data System (ADS)
Malkin, V. M.; Fisch, N. J.
2018-03-01
Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions primarily occur between pulsations of comparable scales) and scale-invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrodinger equation exhibits breaking of both the Kolmogorov locality and scale-invariance. A weaker form of spectral locality that holds for multi-scale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to a single scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. Then, we find the analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.
Renewed Radio Activity of Age 370 years in the Extragalactic Source 0108+388
NASA Astrophysics Data System (ADS)
Owsianik, I.; Conway, J. E.; Polatidis, A. G.
1998-08-01
We present the results of multi-epoch global VLBI observations of the Compact Symmetric Object (CSO) 0108+388 at 5 GHz. Analysis of data spread over 12 years shows strong evidence for an increase in the separation of the outer components at a rate of 0.197 ± 0.026 h^-1 c. Given an overall size of 22.2 h^-1 pc, this implies a kinematic age of only 367 ± 48 yr. This result strongly supports the idea that radio emission in Compact Symmetric Objects arises from recently activated radio sources. The presence of weak radio emission on kpc scales in 0108+388 suggests recurrent activity in this source, and that we are observing it just as a new period of activity is beginning.
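The quoted kinematic age follows directly from the measured size and expansion speed (the Hubble factors h^-1 cancel):
\[
 t \approx \frac{D}{v} = \frac{22.2\,h^{-1}\,\mathrm{pc}}{0.197\,h^{-1}\,c}
 \approx \frac{22.2 \times 3.26\ \mathrm{yr}\cdot c}{0.197\,c} \approx 3.7 \times 10^{2}\ \mathrm{yr},
\]
consistent with the stated 367 ± 48 yr.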
A Control of a Mono and Multi Scale Measurement of a Grid
NASA Astrophysics Data System (ADS)
Elloumi, Imene; Ravelomanana, Sahobimaholy; Jelliti, Manel; Sibilla, Michelle; Desprats, Thierry
The capacity to ensure seamless mobility with end-to-end Quality of Service (QoS) is a vital criterion for successful use of the grid. In this paper we therefore propose a method for monitoring the grid's interconnection network (cluster, local grid, and aggregated grids) in order to control its QoS. Such monitoring can guarantee persistent control of the system's state of health, as well as diagnostics and optimization pertinent enough for better real-time exploitation. Better exploitation means identifying the networking problems that affect the application domain. This can be carried out through control measurements, both mono- and multi-scale, of metrics such as bandwidth, CPU speed, and load. The proposed solution, a generic management solution independent of the underlying technologies, aims to automate human expertise and thereby provide greater autonomy.
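As a loose illustration of what mono- versus multi-scale measurement of such metrics might look like, the sketch below rolls per-node samples up to cluster, local-grid, and whole-grid scales. All names, levels, and sample values are hypothetical; this is not the authors' management framework.

```python
# Hypothetical sketch of mono- vs multi-scale metric roll-up for a grid.
from statistics import mean

# Each measurement: (cluster, local_grid, metrics dict) for one node.
samples = [
    ("clusterA", "grid1", {"bandwidth_mbps": 940, "cpu_load": 0.35}),
    ("clusterA", "grid1", {"bandwidth_mbps": 880, "cpu_load": 0.62}),
    ("clusterB", "grid1", {"bandwidth_mbps": 450, "cpu_load": 0.90}),
]

def roll_up(samples, key):
    """Aggregate one metric at three scales: cluster, local grid, whole grid."""
    by_cluster, by_grid, overall = {}, {}, []
    for cluster, grid, metrics in samples:
        by_cluster.setdefault(cluster, []).append(metrics[key])
        by_grid.setdefault(grid, []).append(metrics[key])
        overall.append(metrics[key])
    return ({c: mean(v) for c, v in by_cluster.items()},
            {g: mean(v) for g, v in by_grid.items()},
            mean(overall))

print(roll_up(samples, "cpu_load"))
```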
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.
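A minimal sketch of one branch of such a pipeline (DWT decomposition, a Fourier-spectrum slope as the fractal descriptor, SVM classification) is given below. The wavelet, the descriptor definition, and the classifier settings are assumptions for illustration; the EMD and VMD branches described in the paper would replace the decomposition step.

```python
# Minimal sketch: DWT subbands -> Fourier-spectrum fractal proxy -> SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def fourier_fractal_descriptor(subband):
    """Slope of log radial power spectrum vs log frequency (a spectral fractal proxy)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(subband))) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    freqs = np.arange(1, len(radial))
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:] + 1e-12), 1)
    return slope

def image_features(image, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    subbands = [coeffs[0]] + [band for lvl in coeffs[1:] for band in lvl]
    return np.array([fourier_fractal_descriptor(b) for b in subbands])

# X_train: list of 2-D MRI slices, y_train: 0 = normal, 1 = AD (not provided here).
# clf = SVC(kernel="rbf").fit([image_features(im) for im in X_train], y_train)
```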
Moving Toward Space Internetworking via DTN: Its Operational Challenges, Benefits, and Management
NASA Technical Reports Server (NTRS)
Barkley, Erik; Burleigh, Scott; Gladden, Roy; Malhotra, Shan; Shames, Peter
2010-01-01
The international space community has begun to recognize that the established model for management of communications with spacecraft - commanded data transmission over individual pair-wise contacts - is operationally unwieldy and will not scale in support of increasingly complex and sophisticated missions such as NASA's Constellation project. Accordingly, the international Inter-Agency Operations Advisory Group (IOAG) chartered a Space Internetworking Strategy Group (SISG), which released its initial recommendations in a November 2008 report. The report includes a recommendation that the space flight community adopt Delay-Tolerant Networking (DTN) to address the problem of interoperability and communication scaling, especially in mission environments where multiple spacecraft operate in concert. This paper explores some of the issues that must be addressed in implementing, deploying, and operating DTN as part of a multi-mission, multi-agency space internetwork, as well as the benefits and future operational scenarios afforded by DTN-based space internetworking.
NASA Astrophysics Data System (ADS)
Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan
2017-11-01
Object skeletons are useful for object representation and object detection. They are complementary to the object contour and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization, classifying whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction, regressing the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) is verified on two object detection applications: foreground object segmentation and object proposal detection.
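A schematic sketch of one network stage with two scale-associated side outputs and a multi-task loss is given below. Channel counts, the stage body, and the loss form are illustrative assumptions, not the authors' exact architecture.

```python
# Schematic sketch of one stage with two scale-associated side outputs
# (localization classification + scale regression). Details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageWithSideOutputs(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.loc_head = nn.Conv2d(out_ch, 1, 1)    # skeleton / non-skeleton logit
        self.scale_head = nn.Conv2d(out_ch, 1, 1)  # per-pixel skeleton scale

    def forward(self, x):
        feat = self.body(x)
        return feat, self.loc_head(feat), self.scale_head(feat)

def stage_loss(loc_logit, scale_pred, loc_gt, scale_gt):
    """Multi-task loss: classification everywhere, regression on skeleton pixels only."""
    cls = F.binary_cross_entropy_with_logits(loc_logit, loc_gt)
    mask = loc_gt > 0.5
    reg = F.smooth_l1_loss(scale_pred[mask], scale_gt[mask]) if mask.any() else 0.0
    return cls + reg
```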
Dissipative closures for statistical moments, fluid moments, and subgrid scales in plasma turbulence
NASA Astrophysics Data System (ADS)
Smith, Stephen Andrew
1997-11-01
Closures are necessary in the study of physical systems with large numbers of degrees of freedom when it is only possible to compute a small number of modes. The modes that are to be computed, the resolved modes, are coupled to unresolved modes that must be estimated. This thesis focuses on dissipative closure models for two problems that arise in the study of plasma turbulence: the fluid moment closure problem and the subgrid scale closure problem. The fluid moment closures of Hammett and Perkins (1990) were originally applied to a one-dimensional kinetic equation, the Vlasov equation. These closures are generalized in this thesis and applied to the stochastic oscillator problem, a standard paradigm problem for statistical closures. The linear theory of the Hammett-Perkins closures is shown to converge with increasing numbers of moments. A novel parameterized hyperviscosity is proposed for two-dimensional drift-wave turbulence. The magnitude and exponent of the hyperviscosity are expressed as functions of the large-scale advection velocity. Traditionally, hyperviscosities are applied to simulations with a fixed exponent that must be chosen arbitrarily. Expressing the exponent as a function of the simulation parameters eliminates this ambiguity. These functions are parameterized by comparing the hyperviscous dissipation to the subgrid dissipation calculated from direct numerical simulations. Tests of the parameterization demonstrate that it performs better than using no additional damping term or a standard hyperviscosity. Heuristic arguments are presented to extend this hyperviscosity model to three-dimensional (3D) drift-wave turbulence, where eddies are highly elongated along the field line. Preliminary results indicate that this generalized 3D hyperviscosity is capable of reducing the resolution requirements for 3D gyrofluid turbulence simulations.
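The parameterized hyperviscosity described above can be pictured, schematically, as a damping term acting on each resolved Fourier mode whose magnitude and exponent both depend on the large-scale advection velocity V (the specific functional forms are fit to the measured subgrid dissipation and are not reproduced here):
\[
 \left.\partial_t \phi_{\mathbf k}\right|_{\mathrm{sgs}}
 = -\,\nu_h(V)\left(\frac{k}{k_{\max}}\right)^{2\,n(V)} \phi_{\mathbf k},
\]
where \(k_{\max}\) is the highest resolved wavenumber; a fixed, hand-chosen \(n\) corresponds to the traditional hyperviscosity that the thesis replaces.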