Generalized vector calculus on convex domain
NASA Astrophysics Data System (ADS)
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present generalizations of Green's theorem and the Gauss divergence theorem involving the new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two-dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
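For orientation, the classical integer-order identities being generalized are Green's theorem and the Gauss divergence theorem, and the "fractional power kernel" mentioned in the abstract corresponds, in common conventions, to the Riemann-Liouville integral; the paper's exact operator definitions may differ from this sketch.

```latex
% Classical identities being generalized:
\oint_{\partial\Omega} \left( P\,dx + Q\,dy \right)
  = \iint_{\Omega} \left( \frac{\partial Q}{\partial x}
    - \frac{\partial P}{\partial y} \right) dA,
\qquad
\int_{\Omega} \nabla\cdot\mathbf{F}\,dV
  = \oint_{\partial\Omega} \mathbf{F}\cdot\mathbf{n}\,dS .

% A typical kernel-based generalized integral operator; the fractional
% power kernel recovers the Riemann-Liouville integral of order \alpha:
(K f)(x) = \int_a^x k(x,t)\,f(t)\,dt,
\qquad
k_\alpha(x,t) = \frac{(x-t)^{\alpha-1}}{\Gamma(\alpha)} .
```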
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
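In its standard form (the paper's precise assumptions may differ), Tikhonov regularization of a generalized mixed variational inequality perturbs the multi-valued mapping F by a small multiple of the duality mapping J (in a Hilbert space, J is the identity):

```latex
% For \varepsilon > 0, find x_\varepsilon \in K and w \in F(x_\varepsilon)
% such that
\bigl\langle\, w + \varepsilon\, J(x_\varepsilon),\; y - x_\varepsilon \,\bigr\rangle
  + \varphi(y) - \varphi(x_\varepsilon) \;\ge\; 0
\qquad \forall\, y \in K .
```

As ε → 0, solutions of the regularized problems are expected to converge (in a suitable sense) to a least-norm solution of the original GMVIP.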
Abstract generalized vector quasi-equilibrium problems in noncompact Hadamard manifolds.
Lu, Haishu; Wang, Zhihua
2017-01-01
This paper deals with the abstract generalized vector quasi-equilibrium problem in noncompact Hadamard manifolds. We prove the existence of solutions to the abstract generalized vector quasi-equilibrium problem under suitable conditions and provide applications to an abstract vector quasi-equilibrium problem, a generalized scalar equilibrium problem, a scalar equilibrium problem, and a perturbed saddle point problem. Finally, as an application of the existence of solutions to the generalized scalar equilibrium problem, we obtain a weakly mixed variational inequality and two mixed variational inequalities. The results presented in this paper unify and generalize many known results in the literature.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
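The iteratively reweighted least-squares (IRLS) mechanism described above can be sketched in a few lines. The following is a hedged 1-D toy version, not the authors' 3-D randomized-GSVD code: the function name `irls_tv`, the difference operator `D`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def irls_tv(A, b, lam=0.2, eps=1e-6, iters=30):
    """IRLS for min ||Ax - b||^2 + lam * ||Dx||_1, with D the 1-D
    forward-difference operator (a stand-in for the 3-D gradient).
    The |Dx| term is smoothed as sqrt((Dx)^2 + eps) and each iteration
    solves a reweighted quadratic problem."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # (n-1) x n differences
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # unregularized start
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)      # IRLS weights ~ 1/|Dx|
        H = A.T @ A + lam * D.T @ (w[:, None] * D) # reweighted normal matrix
        x = np.linalg.solve(H, A.T @ b)
    return x
```

For denoising (A = I), this flattens noise while keeping sharp jumps, the behavior the abstract attributes to TV over minimum-structure inversion.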
NASA Astrophysics Data System (ADS)
Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo
2017-07-01
The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in the presence of both production and demand excesses and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.
On the characteristic exponents of the general three-body problem
NASA Technical Reports Server (NTRS)
Broucke, R.
1976-01-01
A description is given of some properties of the characteristic exponents of the general three-body problem. The variational equations on which the analysis is based are obtained by linearizing the Lagrangian equations of motion in the neighborhood of a given known solution. Attention is given to the fundamental matrix of solutions, the characteristic equation, the three trivial solutions of the variational equations of the three-body problem, symmetric periodic orbits, and the half-period properties of symmetric periodic orbits.
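The construction the abstract refers to is standard: linearize the equations of motion about a known solution and read the characteristic exponents off the monodromy matrix. A sketch in first-order form (the paper works with the Lagrangian equations, but the structure is the same):

```latex
% Linearization about a known solution \bar{x}(t) of \dot{x} = f(x):
\delta\dot{x}(t) = Df\bigl(\bar{x}(t)\bigr)\,\delta x(t),
\qquad
\dot{\Phi}(t) = Df\bigl(\bar{x}(t)\bigr)\,\Phi(t), \quad \Phi(0) = I .

% For a periodic orbit of period T, the characteristic exponents
% \alpha_i come from the eigenvalues \rho_i of the monodromy matrix
% M = \Phi(T):
\alpha_i = \frac{1}{T}\,\ln \rho_i .
```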
Singular optimal control and the identically non-regular problem in the calculus of variations
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1985-01-01
A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
NASA Technical Reports Server (NTRS)
Cheyney, H., III; Arking, A.
1976-01-01
The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
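The symmetrization-plus-variational idea has a simple finite-dimensional analogue: a nonsymmetric system Ax = b can be embedded in the symmetric normal equations, whose functional F(x) = ||Ax - b||^2 is then minimized over a Ritz trial subspace. The function name `ritz_solve` and the setup are ours, illustrating the mechanism only, not the radiative-transfer operators of the paper.

```python
import numpy as np

def ritz_solve(A, b, V):
    """Rayleigh-Ritz solution of the symmetric embedding A^T A x = A^T b,
    minimizing F(x) = ||Ax - b||^2 over the trial subspace x = V @ c.
    The reduced matrix G is symmetric even when A is not."""
    G = V.T @ (A.T @ A) @ V      # reduced symmetric operator
    rhs = V.T @ (A.T @ b)
    c = np.linalg.solve(G, rhs)  # small Ritz system
    return V @ c
```

With a full basis the Ritz answer reproduces the exact solution; with a few well-chosen trial functions it gives the best least-squares fit within their span.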
Variation in formulary adherence in general practice over time (2003-2007).
van Dijk, Liset; de Jong, Judith D; Westert, Gert P; de Bakker, Dinny H
2011-12-01
To study trends and variation in adherence to the main national formulary for the 20 most prevalent health problems in Dutch general practice over a 5-year period (2003-2007). Routine electronic medical records from a pool of 115 representative general practices were linked to the main national formulary. Analyses included over 2 million prescriptions for 246 391 patients. The outcome variable was whether or not the prescribed medication was congruent with recommendations in the national formulary. Trends and variation were analysed using three-level multilevel logistic regression analyses (general practice, patient, and prescription). The percentage of formulary adherent prescriptions for the 20 most prevalent health problems was 73-76% between 2003 and 2007. The percentage varied considerably between guidelines. Lowest adherence rates were found for acute bronchitis and acute upper respiratory infection. Interpractice variation was constant over time. General practice information networks are useful for monitoring general patterns of formulary adherence on a year-to-year basis. Formulary adherence is stable over time but varies across diagnoses, patients and general practices. In the past decade, efforts have been made to increase the level of formulary adherent prescribing. These general efforts managed to stabilize (variation in) adherence in a field where many other initiatives (e.g. by pharmaceutical companies) are undertaken to influence prescribing behaviour.
Rescuing the MaxEnt treatment for q-generalized entropies
NASA Astrophysics Data System (ADS)
Plastino, A.; Rocca, M. C.
2018-02-01
It has been recently argued that the MaxEnt variational problem would not adequately work for Renyi's and Tsallis' entropies. We constructively show here that this is not so if one formulates the associated variational problem in a more orthodox functional fashion.
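For context, the constrained variational problem at issue has, in common conventions, the following form for the Tsallis entropy; the precise constraint used (linear versus q-expectation) varies across the literature, and the paper's "orthodox functional fashion" may differ in detail:

```latex
% Tsallis entropy (k_B = 1) and its constrained extremization:
S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\delta\Bigl( S_q[p] - \alpha \sum_i p_i - \beta \sum_i p_i\, u_i \Bigr) = 0 ,

% whose stationary distribution is the q-exponential
p_i \;\propto\; \bigl[\, 1 - (1-q)\,\beta' u_i \,\bigr]^{\frac{1}{1-q}},
% with \beta' a rescaled Lagrange multiplier; q \to 1 recovers
% the Boltzmann-Gibbs exponential.
```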
A generalized rotationally symmetric case of the centroaffine Minkowski problem
NASA Astrophysics Data System (ADS)
Lu, Jian
2018-05-01
In this paper the centroaffine Minkowski problem, a critical case of the Lp-Minkowski problem in the (n + 1)-dimensional Euclidean space, is studied. By its variational structure and the method of blow-up analyses, we obtain two sufficient conditions for the existence of solutions for a generalized rotationally symmetric case of the problem.
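In common normalizations (the paper's conventions may differ slightly), the Lp-Minkowski problem is the Monge-Ampère equation below on the unit sphere, and the centroaffine case is the critical exponent p = -n-1:

```latex
% L_p-Minkowski problem for the support function h > 0 on S^n:
\det\bigl( \nabla^2 h + h\, I \bigr) = f\, h^{\,p-1} ,

% centroaffine (critical) case p = -n-1:
\det\bigl( \nabla^2 h + h\, I \bigr) = f\, h^{-n-2} .
```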
The inverse problem of the calculus of variations for discrete systems
NASA Astrophysics Data System (ADS)
Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David
2018-05-01
We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non-variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.
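To make "variational" concrete for discrete mechanics: the Störmer-Verlet scheme is exactly the discrete Euler-Lagrange equation of a trapezoidal discrete Lagrangian. The example below is our own minimal sketch, not one of the paper's nonholonomic integrators.

```python
import numpy as np

def verlet(q0, q1, h, Vp, steps):
    """Stormer-Verlet update q_{k+1} = 2 q_k - q_{k-1} - h^2 V'(q_k),
    which is the discrete Euler-Lagrange equation
    D2 L_d(q_{k-1}, q_k) + D1 L_d(q_k, q_{k+1}) = 0 of the trapezoidal
    discrete Lagrangian
    L_d(a, b) = h * (0.5*((b - a)/h)**2 - 0.5*(V(a) + V(b))).
    Vp is the derivative of the potential V."""
    traj = [q0, q1]
    for _ in range(steps):
        q_prev, q = traj[-2], traj[-1]
        traj.append(2.0*q - q_prev - h*h*Vp(q))
    return np.array(traj)
```

Being variational, the scheme is symplectic, which shows up as long-time bounded energy (and amplitude) for the harmonic oscillator.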
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. Simulated and real-data experiments are evaluated qualitatively and quantitatively to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
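The generalized p-shrinkage mapping mentioned above has a simple closed form in the style of Chartrand's work; this sketch assumes that convention, and exact formulas vary between papers.

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage: an approximate proximal mapping for the
    nonconvex |.|^p penalty,
        shrink_p(x) = sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
    For p = 1 this reduces to ordinary soft thresholding; for p < 1 large
    coefficients are shrunk less, which is the sparsity advantage."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0                      # avoid 0**(p-1) for p < 1
    mag = np.abs(x[nz])
    out[nz] = np.sign(x[nz]) * np.maximum(mag - lam**(2 - p) * mag**(p - 1), 0)
    return out
```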
A Method for the Microanalysis of Pre-Algebra Transfer
ERIC Educational Resources Information Center
Pavlik, Philip I., Jr.; Yudelson, Michael; Koedinger, Kenneth R.
2011-01-01
The objective of this research was to better understand the transfer of learning between different variations of pre-algebra problems. While the authors could have addressed a specific variation that might address transfer, they were interested in developing a general model of transfer, so they gathered data from multiple problem types and their…
Compressed modes for variational problems in mathematics and physics
Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-01-01
This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
Compressed modes for variational problems in mathematics and physics.
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-11-12
This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
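A crude illustration of a compressed mode: for the free-particle Hamiltonian in 1-D, alternating a gradient step on the energy, an L1 soft threshold, and renormalization produces an eigenfunction-like object with compact support. This projected proximal-gradient loop is our simplification, not the splitting algorithm the authors actually use, and every parameter value is an illustrative assumption.

```python
import numpy as np

def compressed_mode(n=200, L=20.0, mu=1.0, tau=0.005, iters=2000):
    """Sketch of minimizing <psi, H psi> + (1/mu)*||psi||_1 with ||psi|| = 1,
    H = -0.5 d^2/dx^2 (free particle, Dirichlet ends), via gradient step,
    soft threshold, and renormalization. tau is chosen below 2/lambda_max(H)
    so the gradient step is stable."""
    dx = L / n
    H = (np.diag(np.full(n, 1.0))
         - 0.5 * np.diag(np.ones(n - 1), 1)
         - 0.5 * np.diag(np.ones(n - 1), -1)) / dx**2
    x = np.linspace(0.0, L, n)
    psi = np.exp(-((x - L / 2) ** 2))          # centred initial bump
    psi /= np.linalg.norm(psi)
    for _ in range(iters):
        g = psi - tau * (H @ psi)              # descend the energy
        g = np.sign(g) * np.maximum(np.abs(g) - tau / mu, 0.0)  # L1 prox
        psi = g / np.linalg.norm(g)            # back to the unit sphere
    return x, psi
```

The soft threshold sets small tails exactly to zero, which is the "compact support" behavior the abstract describes.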
A note on convergence of solutions of total variation regularized linear inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar
2018-05-01
In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence-rate results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.
Variational principle for the Navier-Stokes equations.
Kerswell, R R
1999-05-01
A variational principle is presented for the Navier-Stokes equations in the case of a contained boundary-driven, homogeneous, incompressible, viscous fluid. Based upon making the fluid's total viscous dissipation over a given time interval stationary subject to the constraint of the Navier-Stokes equations, the variational problem looks overconstrained and intractable. However, introducing a nonunique velocity decomposition, u(x,t)=phi(x,t) + nu(x,t), "opens up" the variational problem so that what is presumed a single allowable point over the velocity domain u corresponding to the unique solution of the Navier-Stokes equations becomes a surface with a saddle point over the extended domain (phi,nu). Complementary or dual variational problems can then be constructed to estimate this saddle point value strictly from above as part of a minimization process or below via a maximization procedure. One of these reduced variational principles is the natural and ultimate generalization of the upper bounding problem developed by Doering and Constantin. The other corresponds to the ultimate Busse problem which now acts to lower bound the true dissipation. Crucially, these reduced variational problems require only the solution of a series of linear problems to produce bounds even though their unique intersection is conjectured to correspond to a solution of the nonlinear Navier-Stokes equations.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
Quasilinear parabolic variational inequalities with multi-valued lower-order terms
NASA Astrophysics Data System (ADS)
Carl, Siegfried; Le, Vy K.
2014-10-01
In this paper, we provide an analytical framework for a class of multi-valued parabolic variational inequalities in a cylindrical domain: find a function u in a closed and convex subset K of the underlying function space, together with a selection of the multi-valued lower-order term, such that the associated parabolic variational inequality is satisfied, where A is a time-dependent quasilinear elliptic operator and the multi-valued function is assumed to be upper semicontinuous only, so that Clarke's generalized gradient is included as a special case. Thus, parabolic variational-hemivariational inequalities are special cases of the problem considered here. The extension of parabolic variational-hemivariational inequalities to the general class of multi-valued problems considered in this paper is not only of disciplinary interest, but is motivated by the need in applications. The main goals are as follows. First, we provide an existence theory for the above-stated problem under coercivity assumptions. Second, in the noncoercive case, we establish an appropriate sub-supersolution method that allows us to get existence, comparison, and enclosure results. Third, the order structure of the solution set enclosed by sub-supersolutions is revealed. In particular, it is shown that the solution set within the sector of sub-supersolutions is a directed set. As an application, a multi-valued parabolic obstacle problem is treated.
NASA Astrophysics Data System (ADS)
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2014-03-01
We will describe a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
Application of variational and Galerkin equations to linear and nonlinear finite element analysis
NASA Technical Reports Server (NTRS)
Yu, Y.-Y.
1974-01-01
The paper discusses the application of the variational equation to nonlinear finite element analysis. The problem of beam vibration with large deflection is considered. The variational equation is shown to be flexible in both the solution of a general problem and in the finite element formulation. Difficulties are shown to arise when Galerkin's equations are used in the finite element formulation of two-dimensional linear elasticity and of the classical linear beam.
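For contrast with the discussion above, Galerkin's method in its simplest setting makes the residual of -u'' = f orthogonal to each basis function; with a sine basis on (0, π) the resulting system is diagonal. This toy example is ours, not the paper's beam formulation.

```python
import numpy as np

def galerkin_sine(f, n_modes=10, n_grid=400):
    """Galerkin solution of -u'' = f on (0, pi), u(0) = u(pi) = 0, using
    sine modes phi_k = sin(k x). Orthogonality of the residual to phi_k
    gives the coefficient <f, phi_k> / <phi_k', phi_k'>, and the
    stiffness integral <phi_k', phi_k'> equals k^2 * pi / 2."""
    x = np.linspace(0.0, np.pi, n_grid)
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for k in range(1, n_modes + 1):
        phi = np.sin(k * x)
        fk = np.sum(f(x) * phi) * dx             # load integral <f, phi_k>
        u += (fk / (k**2 * np.pi / 2.0)) * phi   # modal coefficient
    return x, u
```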
NASA Astrophysics Data System (ADS)
Tian, X.; Zhang, Y.
2018-03-01
Herglotz variational principle, in which the functional is defined by a differential equation, generalizes the classical ones defining the functional by an integral. The principle gives a variational principle description of nonconservative systems even when the Lagrangian is independent of time. This paper focuses on studying Noether's theorem and its inverse for a Birkhoffian system in event space based on the Herglotz variational problem. Firstly, according to the Herglotz variational principle of a Birkhoffian system, the principle of a Birkhoffian system in event space is established. Secondly, its parametric equations and two basic formulae for the variation of Pfaff-Herglotz action of a Birkhoffian system in event space are obtained. Furthermore, the definition and criteria of Noether symmetry of the Birkhoffian system in event space based on the Herglotz variational problem are given. Then, according to the relationship between the Noether symmetry and conserved quantity, Noether's theorem is derived. Under classical conditions, Noether's theorem of a Birkhoffian system in event space based on the Herglotz variational problem reduces to the classical one. In addition, Noether's inverse theorem of the Birkhoffian system in event space based on the Herglotz variational problem is also obtained. At the end of the paper, an example is given to illustrate the application of the results.
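In its classical one-degree-of-freedom form (the Birkhoffian event-space version in the paper is more general), the Herglotz problem defines the functional through an ordinary differential equation rather than an integral:

```latex
% Herglotz variational problem: extremize z(t_1), where
\dot{z}(t) = L\bigl( t,\, q(t),\, \dot{q}(t),\, z(t) \bigr),
\qquad z(t_0) = z_0 ;

% the generalized Euler-Lagrange equation acquires an extra term in
% \partial L / \partial z, which encodes nonconservative effects:
\frac{\partial L}{\partial q}
- \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
+ \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot{q}} = 0 .
```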
Finite element analysis of time-independent superconductivity. Ph.D. Thesis Final Report
NASA Technical Reports Server (NTRS)
Schuler, James J.
1993-01-01
The development of electromagnetic (EM) finite elements based upon a generalized four-potential variational principle is presented. The use of the four-potential variational principle allows for downstream coupling of EM fields with the thermal, mechanical, and quantum effects exhibited by superconducting materials. The use of variational methods to model an EM system allows for a greater range of applications than just the superconducting problem. The four-potential variational principle can be used to solve a broader range of EM problems than any of the currently available formulations. It also reduces the number of independent variables from six to four while easily dealing with conductor/insulator interfaces. This methodology was applied to a range of EM field problems. Results from all these problems predict EM quantities exceptionally well and are consistent with the expected physical behavior.
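For reference, the textbook four-potential variational principle that such formulations build on is the Maxwell action (units absorbed into the source term J; the thesis's generalized principle adds structure for material coupling beyond this sketch):

```latex
% Four-potential A^\mu = (\phi, \mathbf{A}) and field tensor:
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu ,
\qquad
I[A] = \int \Bigl( -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
  - J^\mu A_\mu \Bigr)\, d^4x ,

% stationarity \delta I = 0 yields the inhomogeneous Maxwell equations:
\partial_\mu F^{\mu\nu} = J^\nu .
```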
Presymplectic current and the inverse problem of the calculus of variations
NASA Astrophysics Data System (ADS)
Khavkine, Igor
2013-11-01
The inverse problem of the calculus of variations asks whether a given system of partial differential equations (PDEs) admits a variational formulation. We show that the existence of a presymplectic form in the variational bicomplex, when horizontally closed on solutions, allows us to construct a variational formulation for a subsystem of the given PDE. No constraints on the differential order or number of dependent or independent variables are assumed. The proof follows a recent observation of Bridges, Hydon, and Lawson [Math. Proc. Cambridge Philos. Soc. 148(01), 159-178 (2010)] and generalizes an older result of Henneaux [Ann. Phys. 140(1), 45-64 (1982)] from ordinary differential equations (ODEs) to PDEs. Uniqueness of the variational formulation is also discussed.
Application of a Near-Field Water Quality Model.
1979-07-01
[Recovered table-of-contents fragments: verification; centerline temperature decrease; lateral variation of constituents; variation of plume width; general remarks on verification; results of varying the entrainment coefficient and other coefficients; general plume characteristics.] Assumed profile forms along the plume axis are integrated within the basic conservation equations; this integration reduces the problem to a one-dimensional formulation.
One-dimensional Gromov minimal filling problem
NASA Astrophysics Data System (ADS)
Ivanov, Alexandr O.; Tuzhilin, Alexey A.
2012-05-01
The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.
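For three terminals, the Steiner problem mentioned above reduces to finding the single Fermat point, which Weiszfeld's iteration for the geometric median computes (assuming no terminal angle reaches 120 degrees, in which case that vertex itself is optimal). A self-contained sketch:

```python
import numpy as np

def fermat_point(terminals, iters=200):
    """Weiszfeld iteration for the geometric median of the terminals.
    For three terminals (no angle >= 120 degrees) this is the single
    Steiner point of the shortest connecting network: each step replaces
    the estimate by a distance-weighted average of the terminals."""
    pts = np.asarray(terminals, dtype=float)
    x = pts.mean(axis=0)                       # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)         # guard against d = 0
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x
```

For an equilateral triangle the Fermat point is the center, and the star network through it is strictly shorter than connecting the terminals through any single vertex.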
Presymplectic current and the inverse problem of the calculus of variations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khavkine, Igor, E-mail: i.khavkine@uu.nl
2013-11-15
The inverse problem of the calculus of variations asks whether a given system of partial differential equations (PDEs) admits a variational formulation. We show that the existence of a presymplectic form in the variational bicomplex, when horizontally closed on solutions, allows us to construct a variational formulation for a subsystem of the given PDE. No constraints on the differential order or number of dependent or independent variables are assumed. The proof follows a recent observation of Bridges, Hydon, and Lawson [Math. Proc. Cambridge Philos. Soc. 148(01), 159–178 (2010)] and generalizes an older result of Henneaux [Ann. Phys. 140(1), 45–64 (1982)] from ordinary differential equations (ODEs) to PDEs. Uniqueness of the variational formulation is also discussed.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer are often combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from the staircase-like artifacts leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
Population Problems: A Constituent of General Culture in the 21st Century.
ERIC Educational Resources Information Center
Rath, Ferdinand J. C. M.
1993-01-01
Compares modern population problems with those of previous generations. Examines variations in population problems in different countries and world regions and the ways in which demographic events (e.g., rapid population growth or urbanization) in one region affect other regions. Advocates preparing for demographic changes through education. (DMM)
General Intelligence Predicts Reasoning Ability Even for Evolutionarily Familiar Content
ERIC Educational Resources Information Center
Kaufman, Scott Barry; DeYoung, Colin G.; Reis, Deidre L.; Gray, Jeremy R.
2011-01-01
The existence of general-purpose cognitive mechanisms related to intelligence, which appear to facilitate all forms of problem solving, conflicts with the strong modularity view of the mind espoused by some evolutionary psychologists. The current study assessed the contribution of general intelligence ("g") to explaining variation in…
Variational formulation of hybrid problems for fully 3-D transonic flow with shocks in rotor
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Based on previous research, the unified variable domain variational theory of hybrid problems for rotor flow is extended to fully 3-D transonic rotor flow with shocks, unifying and generalizing the direct and inverse problems. Three families of variational principles (VPs) were established. All unknown boundaries and flow discontinuities (such as shocks, free trailing vortex sheets) are successfully handled via functional variations with variable domain, converting almost all boundary and interface conditions, including the Rankine-Hugoniot shock relations, into natural ones. This theory provides a series of novel ways for blade design or modification and a rigorous theoretical basis for finite element applications and also constitutes an important part of the optimal design theory of rotor bladings. Numerical solutions to subsonic flow by finite elements with self-adapting nodes, given in the references, show good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Uzan, Jean-Philippe
2013-02-01
Fundamental constants play a central role in many modern developments in gravitation and cosmology. Most extensions of general relativity lead to the conclusion that dimensionless constants are actually dynamical fields. Any detection of their variation on sub-Hubble scales would signal a violation of the Einstein equivalence principle and hence point to gravity beyond general relativity. On super-Hubble scales, or maybe should we say on super-universe scales, such variations are invoked as a solution to the fine-tuning problem, in connection with an anthropic approach.
Multi-level adaptive finite element methods. 1: Variational problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. It is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given, and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
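The coarse/fine cycling described above can be illustrated by a minimal two-grid correction cycle for the 1D Poisson problem -u'' = f with zero boundary conditions. This is a hedged sketch, not the report's method: the function names, the weighted-Jacobi smoother, full-weighting restriction, and linear-interpolation prolongation are all assumed, conventional choices.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2/3):
    # Weighted Jacobi relaxation for -u'' = f with u(0) = u(1) = 0.
    for _ in range(sweeps):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def two_grid(u, f, h):
    # One coarse-grid correction cycle: relax, restrict the residual,
    # solve the coarse problem exactly, prolong the correction, relax again.
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    nc = (u.size + 1) // 2                       # coarse size (n = 2^k + 1)
    i = 2 * np.arange(1, nc - 1)                 # fine indices of coarse interior
    rc = 0.25 * (r[i - 1] + 2 * r[i] + r[i + 1]) # full-weighting restriction
    H = 2 * h
    A = (np.diag(2 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / H**2  # coarse 1D Laplacian
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc)
    # Prolong by linear interpolation and correct the fine-grid iterate.
    e = np.interp(np.linspace(0, 1, u.size), np.linspace(0, 1, nc), ec)
    return jacobi(u + e, f, h)
```

Applied recursively (solving the coarse problem by another two-grid cycle), this becomes the multilevel cycle the abstract describes; a few cycles drive the algebraic error below the discretization error.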
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV-type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is in general not available, we show that, for a rather general linear inverse problem setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR-guided PET image reconstruction.
Evolutionary variational-hemivariational inequalities
NASA Astrophysics Data System (ADS)
Carl, Siegfried; Le, Vy K.; Motreanu, Dumitru
2008-09-01
We consider an evolutionary quasilinear hemivariational inequality under constraints represented by some closed and convex subset. Our main goal is to systematically develop the method of sub-supersolution, on the basis of which we then prove existence, comparison, compactness and extremality results. The obtained results are applied to a general obstacle problem. We improve the corresponding results in the recent monograph [S. Carl, V.K. Le, D. Motreanu, Nonsmooth Variational Problems and Their Inequalities. Comparison Principles and Applications, Springer Monogr. Math., Springer, New York, 2007].
An l1-TV algorithm for deconvolution with salt and pepper noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Rodriguez, Paul
2008-01-01
There has recently been considerable interest in applying Total Variation with an ℓ¹ data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because the most efficient algorithms for ℓ¹-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem and compare performance with an alternative algorithm based on the Mumford-Shah functional.
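The core idea behind iteratively reweighted norm methods is that an ℓ¹ objective can be minimized by solving a sequence of weighted least-squares problems. The following is a minimal sketch of that reweighting idea for a plain ℓ¹ fitting problem, not the authors' IRN formulation for ℓ¹-TV deconvolution; the function name, iteration count, and smoothing parameter `eps` are illustrative assumptions.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Minimize ||A x - b||_1 by iteratively reweighted least squares.

    Each pass solves a weighted l2 problem whose weights
    w_i = 1 / max(|r_i|, eps) reproduce the l1 objective at the
    current residual r = A x - b."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # plain l2 starting point
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)
        W = A * w[:, None]                      # rows of A scaled by w
        # Weighted normal equations: (A^T diag(w) A) x = A^T diag(w) b
        x = np.linalg.solve(A.T @ W, W.T @ b)
    return x
```

Unlike an ℓ² fit, the ℓ¹ fit is robust to a gross outlier, which is exactly why the ℓ¹ fidelity term suits salt and pepper noise.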
Zwaanswijk, Wendy; Veen, Violaine C; van Geel, Mitch; Andershed, Henrik; Vedder, Paul
2017-08-01
The current study examines how the bifactor model of the Youth Psychopathic Traits Inventory (YPI) is related to conduct problems in a sample of Dutch adolescents (N = 2,874; 43% female). It addresses to what extent the YPI dimensions explain variance over and above a General Psychopathy factor (i.e., one factor related to all items) and how the general factor and dimensional factors are related to conduct problems. Group differences in these relations for gender, ethnic background, and age were examined. Results showed that the general factor is most important, but dimensions explain variance over and above the general factor. The general factor, and Affective and Lifestyle dimensions, of the YPI were positively related to conduct problems, whereas the Interpersonal dimension was not, after taking the general factor into account. However, across gender, ethnic background, and age, different dimensions were related to conduct problems over and above the general factor. This suggests that all 3 dimensions should be assessed when examining the psychopathy construct. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Modelling vortex-induced fluid-structure interaction.
Benaroya, Haym; Gabbai, Rene D
2008-04-13
The principal goal of this research is developing physics-based, reduced-order, analytical models of nonlinear fluid-structure interactions associated with offshore structures. Our primary focus is to generalize Hamilton's variational framework so that systems of flow-oscillator equations can be derived from first principles. This is an extension of earlier work that led to a single energy equation describing the fluid-structure interaction. It is demonstrated here that flow-oscillator models are a subclass of the general, physics-based framework. A flow-oscillator model is a reduced-order mechanical model, generally comprising two mechanical oscillators, one modelling the structural oscillation and the other a nonlinear oscillator representing the fluid behaviour coupled to the structural motion. Reduced-order analytical model development continues to be carried out using a Hamilton's-principle-based variational approach. This provides flexibility in the long run for generalizing the modelling paradigm to complex, three-dimensional problems with multiple degrees of freedom, although such extension is very difficult. As both experimental and analytical capabilities advance, the critical research path to developing and implementing fluid-structure interaction models entails formulating generalized equations of motion, as a superset of the flow-oscillator models, and developing experimentally derived, semi-analytical functions to describe key terms in the governing equations of motion. The developed variational approach yields a system of governing equations, which will allow modelling of multiple degree-of-freedom systems. The extensions derived generalize Hamilton's variational formulation for such problems. The Navier-Stokes equations are derived and coupled to the structural oscillator. This general model has been shown to be a superset of the flow-oscillator model. Based on different assumptions, one can derive a variety of flow-oscillator models.
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
Quasi-static responses and variational principles in gradient plasticity
NASA Astrophysics Data System (ADS)
Nguyen, Quoc-Son
2016-12-01
Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and satisfies also a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.
A Transformation Approach to Optimal Control Problems with Bounded State Variables
NASA Technical Reports Server (NTRS)
Hanafy, Lawrence Hanafy
1971-01-01
A technique is described and utilized in the study of solutions to various general problems in optimal control theory, which are converted into Lagrange problems in the calculus of variations. This is accomplished by mapping certain properties in Euclidean space onto closed control and state regions. Nonlinear control problems with a unit m-cube as control region and a unit n-cube as state region are considered.
A brief survey of constrained mechanics and variational problems in terms of differential forms
NASA Technical Reports Server (NTRS)
Hermann, Robert
1994-01-01
There has been considerable interest recently in constrained mechanics and variational problems. This is in part due to applied interests (such as 'non-holonomic mechanics in robotics') and in part due to the fact that several schools of 'pure' mathematics have found that this classical subject is of importance for what they are trying to do. I have made various attempts at developing these subjects since my Lincoln Lab days of the late 1950s. In this Chapter, I will sketch a unified point of view, using Cartan's approach with differential forms. This has the advantage, from the C-O-R viewpoint being developed in this Volume, that the extension from 'smooth' to 'generalized' data is very systematic and algebraic. (I will only deal with the 'smooth' point of view in this Chapter; I will develop the 'generalized function' material at a later point.) The material presented briefly here about Variational Calculus and Constrained Mechanics can be found in more detail in my books, 'Differential Geometry and the Calculus of Variations', 'Lie Algebras and Quantum Mechanics', and 'Geometry, Physics and Systems'.
A Maximal Element Theorem in FWC-Spaces and Its Applications
Hu, Qingwen; Miao, Yulin
2014-01-01
A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short), which have no linear, convex, or topological structure. Using the maximal element theorem, we develop new existence theorems for solutions to the variational relation problem, the generalized equilibrium problem, the equilibrium problem with lower and upper bounds, and the minimax problem in FWC-spaces. The results presented in this paper unify and extend some known results in the literature. PMID:24782672
Gradient descent learning algorithm overview: a general dynamical systems perspective.
Baldi, P
1995-01-01
Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.
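The unifying view of the survey, gradient descent as a discrete-time dynamical system descending an error functional, can be made concrete with a minimal sketch; the quadratic error surface and step size below are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=500):
    # Discrete dynamical system x_{k+1} = x_k - lr * grad(x_k),
    # the Euler discretization of the gradient flow dx/dt = -grad E(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Quadratic error surface E(x) = 0.5 x^T Q x - b^T x, minimized at Q^{-1} b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = gradient_descent(lambda x: Q @ x - b, x0=[0.0, 0.0])
```

For this fixed-point problem the iteration converges whenever `lr` is below 2 divided by the largest eigenvalue of Q, which is the dynamical-systems stability condition the general framework makes explicit.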
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. In the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather conditions. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that accurate estimation of depth information can assist in improving dehazing performance. In this paper, a detail-preserving variational model is proposed to simultaneously estimate the haze-free image and the depth map. In particular, total variation (TV) and total generalized variation (TGV) regularizers are introduced to constrain the haze-free image and the depth map, respectively. The resulting nonsmooth optimization problem is efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare the proposed method with several state-of-the-art dehazing methods. Results illustrate the superior performance of the proposed method in terms of visual quality.
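ADMM handles nonsmooth objectives like the one above by splitting them into subproblems that each have a simple solution. As a hedged illustration of the splitting pattern only (a toy ℓ¹-regularized denoising problem, not the paper's TV-TGV dehazing model; all names and parameters are assumed):

```python
import numpy as np

def soft(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(y, lam, rho=1.0, iters=200):
    """Minimize 0.5 ||x - y||^2 + lam ||z||_1  subject to  x = z, via ADMM."""
    x = np.zeros_like(y); z = np.zeros_like(y); u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)   # smooth (quadratic) subproblem
        z = soft(x + u, lam / rho)              # nonsmooth subproblem, closed form
        u = u + x - z                           # scaled dual (multiplier) update
    return z
```

For this separable toy problem the ADMM iterates converge to the closed-form solution `soft(y, lam)`, which makes the scheme easy to verify before moving to coupled TV-type regularizers.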
Explaining brain size variation: from social to cultural brain.
van Schaik, Carel P; Isler, Karin; Burkart, Judith M
2012-05-01
Although the social brain hypothesis has found near-universal acceptance as the best explanation for the evolution of extensive variation in brain size among mammals, it faces two problems. First, it cannot account for grade shifts, where species or complete lineages have a very different brain size than expected based on their social organization. Second, it cannot account for the observation that species with high socio-cognitive abilities also excel in general cognition. These problems may be related. For birds and mammals, we propose to integrate the social brain hypothesis into a broader framework we call cultural intelligence, which stresses the importance of the high costs of brain tissue, general behavioral flexibility and the role of social learning in acquiring cognitive skills. Copyright © 2012 Elsevier Ltd. All rights reserved.
Automatic devices to take water samples and to raise trash screens at weirs
K. G. Reinhart; R. E. Leonard; G. E. Hart
1960-01-01
Experimentation on small watersheds is assuming increasing importance in watershed-management research. Much has been accomplished in developing adequate instrumentation for use in these experiments. Yet many problems still await solution. One difficulty encountered is that small streams are subject to wide variations in flow and that these variations are generally...
Some problems in applications of the linear variational method
NASA Astrophysics Data System (ADS)
Pupyshev, Vladimir I.; Montgomery, H. E.
2015-09-01
The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations result when the energy of the lowest eigenvalue is strongly dependent on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
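A minimal numerical version of the linear variational method reduces to the generalized eigenvalue problem H c = E S c in a non-orthogonal basis. The sketch below treats a particle in an infinite box on [0, 1] (with hbar = m = 1, exact ground-state energy pi^2/2); the polynomial basis and grid resolution are assumed, illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

x = np.linspace(0.0, 1.0, 4001)
h = x[1] - x[0]

def integral(f):
    # Trapezoidal rule on the uniform grid.
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

# Polynomial trial functions that satisfy the boundary conditions.
basis = [x**k * (1.0 - x) for k in range(1, 5)]
dbasis = [np.gradient(phi, x) for phi in basis]

n = len(basis)
H = np.zeros((n, n))   # kinetic energy matrix <phi_i| -1/2 d^2/dx^2 |phi_j>
S = np.zeros((n, n))   # overlap matrix <phi_i|phi_j> (basis is not orthogonal)
for i in range(n):
    for j in range(n):
        H[i, j] = 0.5 * integral(dbasis[i] * dbasis[j])
        S[i, j] = integral(basis[i] * basis[j])

E = eigh(H, S, eigvals_only=True)   # generalized eigenproblem H c = E S c
```

The lowest eigenvalue is a variational upper bound on the true ground-state energy; the abstract's caution is that when a potential parameter is varied, the quality of this bound can change sharply if the fixed basis stops resembling the true eigenfunction.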
Robin problems with a general potential and a superlinear reaction
NASA Astrophysics Data System (ADS)
Papageorgiou, Nikolaos S.; Rădulescu, Vicenţiu D.; Repovš, Dušan D.
2017-09-01
We consider semilinear Robin problems driven by the negative Laplacian plus an indefinite potential and with a superlinear reaction term which need not satisfy the Ambrosetti-Rabinowitz condition. We prove existence and multiplicity theorems (producing also an infinity of smooth solutions) using variational tools, truncation and perturbation techniques and Morse theory (critical groups).
Operant Variability: Some Random Thoughts
ERIC Educational Resources Information Center
Marr, M. Jackson
2012-01-01
Barba's (2012) paper is a serious and thoughtful analysis of a vexing problem in behavior analysis: Just what should count as an operant class and how do people know? The slippery issue of a "generalized operant" or functional response class illustrates one aspect of this problem, and "variation" or "novelty" as an operant appears to fall into…
Weak convergence of a projection algorithm for variational inequalities in a Banach space
NASA Astrophysics Data System (ADS)
Iiduka, Hideaki; Takahashi, Wataru
2008-03-01
Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: x_1 = x ∈ C and x_{n+1} = Π_C J^{-1}(J x_n - λ_n A x_n) for every n, where Π_C is the generalized projection from E onto C, J is the duality mapping from E into E*, and {λ_n} is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point u ∈ E satisfying 0 = Au.
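In a Hilbert space the duality mapping J is the identity and the generalized projection Π_C reduces to the metric projection, so the scheme above becomes the classical projected iteration x_{n+1} = P_C(x_n - λ A x_n). A minimal sketch of that special case, with an assumed box constraint and an assumed affine inverse-strongly-monotone operator:

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection onto C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def vi_projection(A, x0, lam=0.5, lo=0.0, hi=1.0, iters=500):
    """Iterate x_{n+1} = P_C(x_n - lam * A(x_n)): the Hilbert-space
    special case of the scheme, for an inverse-strongly-monotone A."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - lam * A(x), lo, hi)
    return x
```

For A(x) = x - b (which is 1-inverse-strongly-monotone), the solution of the variational inequality over the box is simply the projection of b onto the box, which the iteration recovers.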
Kolata, Stefan; Light, Kenneth; Townsend, David A; Hale, Gregory; Grossman, Henya C; Matzel, Louis D
2005-11-01
Up to 50% of an individual's performance across a wide variety of distinct cognitive tests can be accounted for by a single factor (i.e., "general intelligence"). Despite its ubiquity, the processes or mechanisms regulating this factor are a matter of considerable debate. Although it has been hypothesized that working memory may impact cognitive performance across various domains, tests have been inconclusive due to the difficulty in isolating working memory from its overlapping operations, such as verbal ability. We address this problem using genetically diverse mice, which exhibit a trait analogous to general intelligence. The general cognitive abilities of CD-1 mice were found to covary with individuals' working memory capacity, but not with variations in long-term retention. These results provide evidence that, independent of verbal abilities, variations in working memory are associated with general cognitive abilities, and further suggest a conservation across species of the mechanisms and/or processes that regulate cognitive abilities.
Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory
NASA Technical Reports Server (NTRS)
Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.
1990-01-01
New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.
“SLIMPLECTIC” INTEGRATORS: VARIATIONAL INTEGRATORS FOR GENERAL NONCONSERVATIVE SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsang, David; Turner, Alec; Galley, Chad R.
2015-08-10
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
Variational submanifolds of Euclidean spaces
NASA Astrophysics Data System (ADS)
Krupka, D.; Urban, Z.; Volná, J.
2018-03-01
Systems of ordinary differential equations (or dynamical forms in Lagrangian mechanics), induced by embeddings of smooth fibered manifolds over one-dimensional basis, are considered in the class of variational equations. For a given non-variational system, conditions assuring variationality (the Helmholtz conditions) of the induced system with respect to a submanifold of a Euclidean space are studied, and the problem of existence of these "variational submanifolds" is formulated in general and solved for second-order systems. The variational sequence theory on sheaves of differential forms is employed as a main tool for the analysis of local and global aspects (variationality and variational triviality). The theory is illustrated by examples of holonomic constraints (submanifolds of a configuration Euclidean space) which are variational submanifolds in geometry and mechanics.
Existence of solution for a general fractional advection-dispersion equation
NASA Astrophysics Data System (ADS)
Torres Ledesma, César E.
2018-05-01
In this work, we consider the existence of solutions to the following fractional advection-dispersion equation: -(d/dt)( p _{-∞}I_t^{β}(u'(t)) + q _tI_{∞}^{β}(u'(t)) ) + b(t)u = f(t, u(t)), t ∈ R, where β ∈ (0,1), _{-∞}I_t^{β} and _tI_{∞}^{β} denote the left and right Liouville-Weyl fractional integrals of order β, respectively, 0
Fast magnetic resonance imaging based on high degree total variation
NASA Astrophysics Data System (ADS)
Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng
2018-04-01
In order to eliminate the artifacts and "staircase effect" of total variation in compressive sensing MRI, a high degree total variation model is proposed for dynamic MRI reconstruction. The high degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iteratively weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm. The results show that the high degree total variation method has a better reconstruction effect than total variation and total generalized variation, obtaining higher reconstruction SNR and better structural similarity.
Some dynamic resource allocation problems in wireless networks
NASA Astrophysics Data System (ADS)
Berry, Randall
2001-07-01
We consider dynamic resource allocation problems that arise in wireless networking. Specifically, transmission scheduling problems are studied in cases where a user can dynamically allocate communication resources such as transmission rate and power based on current channel knowledge as well as traffic variations. We assume that arriving data is stored in a transmission buffer, and investigate the trade-off between average transmission power and average buffer delay. A general characterization of this trade-off is given, and the behavior of this trade-off in the regime of asymptotically large buffer delays is explored. An extension to a more general utility-based quality of service definition is also discussed.
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way of solving the denoising problem is to utilize total variation (TV) regularization, and many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
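Condat's algorithm is an exact, non-iterative solver, so the following is only a hedged illustration of the dual problem it exploits: projected gradient on the dual of 1D TV denoising, where the dual variable p lives on the edges and is constrained to the box |p_i| ≤ λ (the constraint behind the taut-string picture). The function name, iteration count, and step size are assumptions.

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=3000, tau=0.25):
    """Solve min_x 0.5 ||x - y||^2 + lam * sum_i |x_{i+1} - x_i|
    via projected gradient on the dual:
    min_{|p| <= lam} 0.5 ||y - D^T p||^2, with x = y - D^T p."""
    p = np.zeros(len(y) - 1)
    for _ in range(iters):
        # Primal estimate x = y - D^T p (D = forward-difference operator).
        x = y.copy()
        x[:-1] += p
        x[1:] -= p
        # Gradient step on the dual, then project onto the box |p| <= lam.
        # tau <= 1/4 guarantees convergence since ||D D^T|| <= 4.
        p = np.clip(p + tau * np.diff(x), -lam, lam)
    return x
```

For a step signal, the solution shrinks each plateau toward the jump by λ divided by the plateau length, which is the geometric "tube" behavior the taut-string description makes explicit; Condat's direct algorithm reaches the same minimizer in a single pass.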
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
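The classical Hilbert-space baseline that the Banach-space theory generalizes is Tikhonov regularization of an ill-posed linear problem. A minimal sketch of that baseline (the matrix, noise level, and regularization parameter below are illustrative assumptions, not from any paper in this section):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution of A x = y:
    minimize ||A x - y||^2 + alpha ||x||^2,
    i.e. x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# A severely ill-conditioned forward operator (a Hilbert-type matrix).
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
y = A @ x_true + 1e-6 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, y)        # unregularized: noise is amplified
x_reg = tikhonov(A, y, alpha=1e-10)    # regularized: stays near x_true
```

The same variational structure, a data-fidelity term plus a penalty, carries over to the Banach-space setting discussed below, with general Lp/TV penalties and non-quadratic fidelity terms replacing the squared norms.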
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. 
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. 
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterated Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is an issue of high practical relevance. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results.
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
On the Support of Minimizers of Causal Variational Principles
NASA Astrophysics Data System (ADS)
Finster, Felix; Schiefeneder, Daniela
2013-11-01
A class of causal variational principles on a compact manifold is introduced and analyzed both numerically and analytically. It is proved under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed and explicit analysis of the minimizers. On the sphere, we get a connection to packing problems and the Tammes distribution. Moreover, the minimal action is estimated from above and below.
NASA Technical Reports Server (NTRS)
Low, B. C.; Tsinganos, K.
1986-01-01
In establishing theoretical models of the hydromagnetic solar wind, the inclusion of the effects of the magnetic field makes the mathematical problem extremely difficult to solve. The objective of this paper is to present a set of particular analytic solutions. The general formulation of Tsinganos (1982) is used to identify a class of analytic solutions to the equations of steady hydromagnetic flow in spherical coordinates. Flows in an open magnetic field are studied, covering the problem in dimensionless form, the special case of radial flows with alpha = 0, general radial flows, illustrative examples of flows with alpha not equal to 0, a parametric study of nonradial flows with alpha not equal to zero, variations in the parameter nu, and variations in the initial speed eta.
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires prior knowledge only of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
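The two-point estimate method described above (Rosenblueth's method) replaces sampling with 2^n function evaluations at mean ± one standard deviation. A minimal sketch, compared against Monte Carlo; the head function g and its parameter values are hypothetical, purely for illustration:

```python
import numpy as np

def two_point_estimate(g, means, stds):
    """Rosenblueth's two-point estimate of E[g(X)] for independent inputs.

    Evaluates g at all 2^n combinations of (mean +/- std) and averages,
    needing only each variable's mean and standard deviation.
    """
    n = len(means)
    total = 0.0
    for mask in range(2 ** n):
        x = [means[i] + (stds[i] if mask >> i & 1 else -stds[i])
             for i in range(n)]
        total += g(*x)
    return total / 2 ** n

def monte_carlo_estimate(g, means, stds, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[g(X)] with independent normal inputs."""
    rng = np.random.default_rng(seed)
    xs = [rng.normal(m, s, n_samples) for m, s in zip(means, stds)]
    return float(np.mean(g(*xs)))

# Hypothetical head function, linear in storage coefficient S and
# hydraulic conductivity K (illustrative only, not the paper's model).
g = lambda S, K: 3.0 * S + 2.0 * K
tp = two_point_estimate(g, [0.1, 5.0], [0.01, 0.5])    # 4 evaluations
mc = monte_carlo_estimate(g, [0.1, 5.0], [0.01, 0.5])  # 200 000 samples
```

For a linear g the two-point estimate of the mean is exact, which mirrors the paper's finding that the method is valid for linear problems; the efficiency advantage (4 evaluations versus hundreds of thousands) disappears as the number of uncertain variables grows, since the cost scales as 2^n.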
Semismooth Newton method for gradient constrained minimization problem
NASA Astrophysics Data System (ADS)
Anyyeva, Serbiniyaz; Kunisch, Karl
2012-08-01
In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. To compute a numerical approximation to the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was used both to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
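The key mechanism (reformulate the constraint with a pointwise max function, which is Newton differentiable, then apply a semismooth Newton step) can be sketched on a simpler pointwise-constrained analogue. The following primal-dual active set iteration is a discrete sketch for an obstacle constraint u ≤ ψ rather than a gradient constraint; all problem data are made up:

```python
import numpy as np

def obstacle_pdas(A, f, psi, c=1.0, max_iter=50):
    """Primal-dual active set method (a semismooth Newton method) for
    min 1/2 u^T A u - f^T u  subject to  u <= psi  (pointwise).

    The complementarity conditions are rewritten as
    lam = max(0, lam + c*(u - psi)), which is Newton differentiable;
    each Newton step then reduces to a linear solve on the inactive set.
    """
    u = np.linalg.solve(A, f)          # unconstrained initial guess
    lam = np.zeros_like(f)
    prev_active = None
    for _ in range(max_iter):
        active = lam + c * (u - psi) > 0
        if prev_active is not None and np.array_equal(active, prev_active):
            break                      # active set settled -> converged
        prev_active = active.copy()
        inact = ~active
        u = u.copy()
        u[active] = psi[active]        # constraint holds on the active set
        u[inact] = np.linalg.solve(    # reduced system on the inactive set
            A[np.ix_(inact, inact)],
            f[inact] - A[np.ix_(inact, active)] @ psi[active])
        lam = f - A @ u                # multiplier from the residual
        lam[inact] = 0.0
    return u, lam

# 1D Laplacian test problem with a constant obstacle (illustrative setup).
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)
psi = np.full(n, 0.08)                 # unconstrained max is 0.125
u, lam = obstacle_pdas(A, f, psi)
```

At convergence the iterate satisfies the discrete KKT system exactly, which is why the method typically terminates in a handful of steps once the active set stops changing.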
General dental practitioner's views on dental general anaesthesia services.
Threlfall, A G; King, D; Milsom, K M; Blinkhorn, A S; Tickle, M
2007-06-01
Policy has recently changed on provision of dental general anaesthetic services in England. The aim of this study was to investigate general dental practitioners' views about dental general anaesthetics, the reduction in its availability and the impact on care of children with toothache. Qualitative study using semi-structured interviews and clinical case scenarios. General dental practitioners providing NHS services in the North West of England. 93 general dental practitioners were interviewed and 91 answered a clinical case scenario about the care they would provide for a 7-year-old child with multiple decayed teeth presenting with toothache. Scenario responses showed variation; 8% would immediately refer for general anaesthesia, 25% would initially prescribe antibiotics, but the majority would attempt to either restore or extract the tooth causing pain. Interview responses also demonstrated variation in care; however, most dentists agreed that general anaesthesia has a role for nervous children, but they refer only as a last resort. The responses indicated an increase in inequalities, and that access to services did not match population needs, leaving some children waiting in pain. Most general dental practitioners support moving dental general anaesthesia into hospitals but some believe that it has widened health inequalities, and there is also a problem associated with variation in treatment provision. Additional general anaesthetic services in some areas with high levels of tooth decay are needed, and evidence based guidelines about caring for children with toothache are required.
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration and linear and nonlinear transient dynamic problems involving two and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, their complexity has so far prevented implementation in exact form. In the present work, linear and nonlinear time domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing BEM formulations in dynamics assume constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only layered-media and soil-structure interaction problems can be analyzed, but large problems can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program.
Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
Pseudospectral collocation methods for fourth order differential equations
NASA Technical Reports Server (NTRS)
Malek, Alaeddin; Phillips, Timothy N.
1994-01-01
Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.
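The low-order finite difference operator that serves as preconditioner in the paper can be sketched directly. The following solves the one-dimensional model problem u'''' = f with clamped boundary conditions; this is an illustrative companion using the standard five-point stencil, not the paper's spectral collocation scheme:

```python
import numpy as np

def solve_clamped_beam(f, n):
    """Solve u'''' = f on (0,1) with clamped BCs u(0)=u(1)=0, u'(0)=u'(1)=0,
    using the stencil (u_{i-2} - 4u_{i-1} + 6u_i - 4u_{i+1} + u_{i+2})/h^4.

    The derivative BCs are imposed with reflected ghost points
    u_{-1} = u_1 and u_{n+2} = u_n (centered, second-order accurate).
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)          # interior grid points
    A = np.zeros((n, n))
    for i in range(n):
        for j, c in zip(range(i - 2, i + 3), [1, -4, 6, -4, 1]):
            if 0 <= j < n:
                A[i, j] += c
    # Ghost-point reflection folds the exterior value back onto the
    # first/last interior unknown.
    A[0, 0] += 1
    A[n - 1, n - 1] += 1
    u = np.linalg.solve(A / h**4, f(x))
    return x, u

# u(x) = x^2 (1-x)^2 satisfies the clamped BCs and u'''' = 24.
x, u = solve_clamped_beam(lambda x: 24.0 * np.ones_like(x), 80)
```

A spectral collocation scheme would replace the banded matrix with dense differentiation matrices; the point of the paper's preconditioner is that the cheap banded operator above captures enough of the spectrum of the fourth-order operator to accelerate iterative solution of the spectral system.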
Variational approach to direct and inverse problems of atmospheric pollution studies
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2016-04-01
We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling // Pure and Applied Geophysics (2012), v. 169: 447-465. 2. V. V. Penenko, E. A. Tsvetova, and A. V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, pp. 311-319, DOI: 10.1134/S0001433815030093. 3. V. V. Penenko, E. A. Tsvetova, A. V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality // Russian Meteorology and Hydrology, V. 40, Issue 6, pp. 365-373, DOI: 10.3103/S1068373915060023. 4. A. V. Penenko and V. V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 19(4): 69-83, 2014. 5. V. V. Penenko, E. A. Tsvetova, A. V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, V. 67, Issue 12, pp. 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V. V. Penenko, E. A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.
An iterative algorithm for L1-TV constrained regularization in image restoration
NASA Astrophysics Data System (ADS)
Chen, K.; Loli Piccolomini, E.; Zama, F.
2015-11-01
We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Analysis of quantum information processors using quantum metrology
NASA Astrophysics Data System (ADS)
Kandula, Mark J.; Kok, Pieter
2018-06-01
Physical implementations of quantum information processing devices are generally not unique, and we are faced with the problem of choosing the best implementation. Here, we consider the sensitivity of quantum devices to variations in their different components. To measure this, we adopt a quantum metrological approach and find that the sensitivity of a device to variations in a component has a particularly simple general form. We use the concept of cost functions to establish a general practical criterion to decide between two different physical implementations of the same quantum device consisting of a variety of components. We give two practical examples of sensitivities of quantum devices to variations in beam splitter transmittivities: the Knill-Laflamme-Milburn (KLM) and reverse nonlinear sign gates for linear optical quantum computing with photonic qubits, and the enhanced optical Bell detectors by Grice and Ewert and van Loock. We briefly compare the sensitivity to the diamond distance and find that the latter is less suited for studying the behavior of components embedded within the larger quantum device.
ten Have, Margreet; Oldehinkel, Albertine; Vollebergh, Wilma; Ormel, Johan
2005-06-01
Little is known about the role of personality characteristics in service utilisation for mental health problems. We investigate whether neuroticism: 1) predicts the use of primary and specialised care services for mental health problems, independently of whether a person has an emotional disorder; and 2) modifies any association between emotional disorder and service use. Data were derived from the Netherlands Mental Health Survey and Incidence Study (NEMESIS), a prospective cohort study in the general population aged 18-64. Neuroticism was recorded at baseline, and emotional disorder and service use at 12-month follow-up, in a representative sample (N=7076), using the Composite International Diagnostic Interview. People with high neuroticism were more likely to receive care in the specialised mental health sector, and after entry to care they made more visits to the services, whether or not they had an emotional disorder. If they had an emotional disorder, their likelihood of receiving specialised mental health care showed an additional increase. Neuroticism also predicted the use of primary care for mental health problems, but greater numbers of visits were made only by clients with both high neuroticism and an emotional disorder. It would be useful to incorporate personality characteristics into models to understand variations in service utilisation for mental health problems. The findings suggest that professionals would be wise to focus not just on their clients' emotional problems and disorders, but also on strengthening their problem-solving abilities through approaches like cognitive behavioural therapy.
A Generalization of Snell’s Law
1990-06-01
Weinstock [Ref. 2: pp. 20-22]. We know that f(x+h) - f(x) = f'(x)h + higher-order terms in h. We apply this to our simplest variational problem by considering... higher-order terms. We also define δJ[h] to be the first variation: δJ[h] = ∫_a^b (F_y h + F_{y'} h') dx. Setting the first variation equal to zero, and integrating...
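The excerpt sketches the standard first-variation argument from the calculus of variations; reconstructed in clean notation (following Weinstock, under the usual assumption h(a) = h(b) = 0), for J[y] = ∫_a^b F(x, y, y') dx:

```latex
\delta J[h] = \int_a^b \left( F_y\, h + F_{y'}\, h' \right) dx
            = \int_a^b \left( F_y - \frac{d}{dx} F_{y'} \right) h \, dx ,
```

where the second form follows by integrating the F_{y'} h' term by parts. Requiring δJ[h] = 0 for all admissible h yields the Euler-Lagrange equation F_y − (d/dx) F_{y'} = 0, which, applied to the optical path-time functional of Fermat's principle, is the route to Snell's law and its generalizations.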
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
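The GCV technique the paper borrows from Tikhonov regularization can be sketched concretely for the quadratic case, where the SVD makes evaluating GCV over a grid of parameters cheap. All data below are synthetic, chosen only to illustrate the selection rule:

```python
import numpy as np

def gcv_tikhonov(A, y, lambdas):
    """Pick the Tikhonov parameter for min ||A x - y||^2 + lam ||x||^2
    by minimizing the GCV function

        GCV(lam) = ||A x_lam - y||^2 / (m - trace(A A_lam^+))^2,

    evaluated via the SVD filter factors f_i = s_i^2 / (s_i^2 + lam).
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    m = A.shape[0]
    beta = U.T @ y
    # part of y outside the range of A (zero for full-rank square A)
    resid_perp = max(np.linalg.norm(y) ** 2 - np.linalg.norm(beta) ** 2, 0.0)
    scores = []
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam)
        resid = np.sum(((1 - f) * beta) ** 2) + resid_perp
        scores.append(resid / (m - np.sum(f)) ** 2)
    scores = np.array(scores)
    return lambdas[int(np.argmin(scores))], scores

# Synthetic deblurring-style demo (all data made up for illustration).
rng = np.random.default_rng(1)
n = 40
A = np.exp(-0.1 * (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2)
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(n)
lams = np.logspace(-8, 2, 60)
lam_gcv, scores = gcv_tikhonov(A, y, lams)
```

For TV regularization no such closed form exists, which is why the paper evaluates GCV on a linearized (Tikhonov-like) subproblem at each restoration step.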
Application of SEAWAT to select variable-density and viscosity problems
Dausman, Alyssa M.; Langevin, Christian D.; Thorne, Danny T.; Sukop, Michael C.
2010-01-01
SEAWAT is a combined version of MODFLOW and MT3DMS, designed to simulate three-dimensional, variable-density, saturated groundwater flow. The most recent version of the SEAWAT program, SEAWAT Version 4 (or SEAWAT_V4), supports equations of state for fluid density and viscosity. In SEAWAT_V4, fluid density can be calculated as a function of one or more MT3DMS species, and optionally, fluid pressure. Fluid viscosity is calculated as a function of one or more MT3DMS species, and the program also includes additional functions for representing the dependence of fluid viscosity on temperature. This report documents testing of and experimentation with SEAWAT_V4 with six previously published problems that include various combinations of density-dependent flow due to temperature variations and/or concentration variations of one or more species. Some of the problems also include variations in viscosity that result from temperature differences in water and oil. Comparisons between the results of SEAWAT_V4 and other published results are generally consistent with one another, with minor differences considered acceptable.
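The equations of state referred to above are linear in the species concentrations. As a hedged sketch of that general form (the slope value below is illustrative, not a calibrated SEAWAT input):

```python
def fluid_density(concs, slopes, rho0=1000.0, refs=None):
    """Linear equation of state of the kind used by SEAWAT_V4-style codes:

        rho = rho0 + sum_k (d rho / d C_k) * (C_k - C_k_ref)

    concs, slopes, refs are per-species sequences; the slope values used
    here are illustrative only.
    """
    refs = refs or [0.0] * len(concs)
    return rho0 + sum(s * (c - r) for c, s, r in zip(concs, slopes, refs))

# A seawater-like salinity of 35 kg/m^3 with slope 0.7143 gives ~1025 kg/m^3.
rho = fluid_density([35.0], [0.7143])
```

Temperature enters the same sum as just another "species" with its own (negative, and in reality nonlinear) slope, which is how a single transport code can drive both thermohaline and solute-driven density variations.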
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.
1987-01-01
A concept for optimally designing output feedback controllers for plants whose dynamics exhibit gross changes over their operating regimes was developed. The approach was to formulate the design problem in such a way that the implemented feedback gains vary as the output of a dynamical system whose independent variable is a scalar parameterization of the plant operating point. The results of this effort include derivation of necessary conditions for optimality for the general problem formulation, and for several simplified cases. The question of existence of a solution to the design problem was also examined, and it was shown that the class of gain variation schemes developed is capable of achieving gain variation histories which are arbitrarily close to the unconstrained gain solution for each point in the plant operating range. The theory was implemented in a feedback design algorithm, which was exercised in a numerical example. The results are applicable to the design of practical high-performance feedback controllers for plants whose dynamics vary significantly during operation. Many aerospace systems fall into this category.
Sg, Prem Kumar; G, Anil Kumar; Sp, Ramgopal; V, Venkata Srinivas; Dandona, Rakhi
2016-09-21
Data on mental health among orphaned children in India are scanty. We compared the generalized anxiety, conduct and peer relationship problems and their associated risk factors among children orphaned by HIV/AIDS and those orphaned due to other reasons in the Indian city of Hyderabad. Four hundred orphaned children aged 12 to 16 years residing in orphanages in Hyderabad were sampled, half being AIDS orphans (COA) and the rest orphaned due to other reasons (COO). Interviews were done using standardized scales to assess generalized anxiety, conduct and peer relationship problems. Scores of >8, >4, and >5 were taken as indicators of generalized anxiety, conduct problems and peer relationship problems, respectively. Variations in the intensity of these three conditions due to possible factors including co-existing depression were assessed using multiple classification analysis (MCA). A total of 396 (99.3 %) orphans participated, of whom 199 (50.3 %) were COA. The mean generalized anxiety, conduct and peer relationship problem scores were 11.1 (SD 5.2), 3.8 (SD 2.5) and 3.8 (SD 2.5) for COA; and 7.6 (SD 4), 2.6 (SD 2) and 2.3 (SD 1.8) for COO, respectively. Among COA, the prevalence of a generalized anxiety score of >8 was 74.4 % (95 % CI 67.8-80.0 %), of a conduct problem score of >4 was 33.2 % (95 % CI 26.9-40.1 %), and of a peer relationship problem score of >5 was 27.6 % (95 % CI 21.8-34.3 %), with these being significantly lower in COO. In MCA, a higher mean depression score had the highest effect on the intensity of generalized anxiety, conduct and peer relationship problems (Beta 0.477, 0.379 and 0.453, respectively); being COA and a girl had the most impact on generalized anxiety (0.100 and 0.115, respectively). A significantly high proportion of AIDS orphans deal with generalized anxiety, conduct and peer relationship problems as compared with other orphans, highlighting the need to address the poor mental health of orphans in India.
Full three-body problem in effective-field-theory models of gravity
NASA Astrophysics Data System (ADS)
Battista, Emmanuele; Esposito, Giampiero
2014-10-01
Recent work in the literature has studied the restricted three-body problem within the framework of effective-field-theory models of gravity. This paper extends such a program by considering the full three-body problem, when the Newtonian potential is replaced by a more general central potential which depends on the mutual separations of the three bodies. The general form of the equations of motion is written down, and they are studied when the interaction potential reduces to the quantum-corrected central potential considered recently in the literature. A recursive algorithm is found for solving the associated variational equations, which describe small departures from given periodic solutions of the equations of motion. Our scheme involves repeated application of a 2×2 matrix of first-order linear differential operators.
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
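For a linear problem, the adjoint-weighted residual at the heart of such a posteriori estimates is an exact identity, which makes for a compact sketch (the system and the stand-in "coarse" first-stage solution below are synthetic):

```python
import numpy as np

def adjoint_error_estimate(A, b, u_h, psi):
    """A posteriori estimate of the error in q(u) = psi^T u for A u = b.

    Solving the adjoint problem A^T phi = psi gives the dual-weighted
    residual identity  q(u) - q(u_h) = phi^T (b - A u_h),
    which is exact for linear problems with an exact adjoint solve.
    """
    phi = np.linalg.solve(A.T, psi)      # adjoint (dual) solve
    return phi @ (b - A @ u_h)           # residual weighted by the dual

# Demo: a crude first-stage solution and the adjoint-based error estimate.
rng = np.random.default_rng(2)
A = np.eye(30) + 0.1 * rng.standard_normal((30, 30))
b = rng.standard_normal(30)
u = np.linalg.solve(A, b)                # "truth" for checking
u_h = u + 0.05 * rng.standard_normal(30) # stand-in for a coarse solution
psi = rng.standard_normal(30)
est = adjoint_error_estimate(A, b, u_h, psi)
```

In the two-stage setting of the paper the adjoint itself is only solved approximately on the fine discretization, so the identity becomes an estimate with its own quantifiable contribution to the error.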
Variation in clinical coding lists in UK general practice: a barrier to consistent data entry?
Tai, Tracy Waize; Anandarajah, Sobanna; Dhoul, Neil; de Lusignan, Simon
2007-01-01
Routinely collected general practice computer data are used for quality improvement; poor data quality including inconsistent coding can reduce their usefulness. To document the diversity of data entry systems currently in use in UK general practice and highlight possible implications for data quality. General practice volunteers provided screen shots of the clinical coding screen they would use to code a diagnosis or problem title in the clinical consultation. The six clinical conditions examined were: depression, cystitis, type 2 diabetes mellitus, sore throat, tired all the time, and myocardial infarction. We looked at the picking lists generated for these problem titles in EMIS, IPS, GPASS and iSOFT general practice clinical computer systems, using the Triset browser as a gold standard for comparison. A mean of 19.3 codes is offered in the picking list after entering a diagnosis or problem title. EMIS produced the longest picking lists and GPASS the shortest, with a mean number of choices of 35.2 and 12.7, respectively. Approximately three-quarters (73.5%) of codes are diagnoses, one-eighth (12.5%) symptom codes, and the remainder come from a range of Read chapters. There was no readily detectable consistent order in which codes were displayed. Velocity coding, whereby commonly-used codes are placed higher in the picking list, results in variation between practices even where they have the same brand of computer system. Current systems for clinical coding promote diversity rather than consistency of clinical coding. As the UK moves towards an integrated health IT system consistency of coding will become more important. A standardised, limited list of codes for primary care might help address this need.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational method uses the generalized calculus of variations, in which the variable boundary is treated as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range.
The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a range reasonable for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, compared with finite difference sensitivity analysis.
Woods, H Arthur; Dillon, Michael E; Pincebourde, Sylvain
2015-12-01
We analyze the effects of changing patterns of thermal availability, in space and time, on the performance of small ectotherms. We approach this problem by breaking it into a series of smaller steps, focusing on: (1) how macroclimates interact with living and nonliving objects in the environment to produce a mosaic of thermal microclimates and (2) how mobile ectotherms filter those microclimates into realized body temperatures by moving around in them. Although the first step (generation of mosaics) is conceptually straightforward, there still exists no general framework for predicting spatial and temporal patterns of microclimatic variation. We organize potential variation along three axes: the nature of the objects producing the microclimates (abiotic versus biotic), how microclimates translate macroclimatic variation (amplify versus buffer), and the temporal and spatial scales over which microclimatic conditions vary (long versus short). From this organization, we propose several general rules about patterns of microclimatic diversity. To examine the second step (behavioral sampling of locally available microclimates), we construct a set of models that simulate ectotherms moving on a thermal landscape according to simple sets of diffusion-based rules. The models explore the effects of both changes in body size (which affect the time scale over which organisms integrate operative body temperatures) and increases in the mean and variance of temperature on the thermal landscape. Collectively, the models indicate that both simple behavioral rules and interactions between body size and spatial patterns of thermal variation can profoundly affect the distribution of realized body temperatures experienced by ectotherms. These analyses emphasize the rich set of problems still to solve before arriving at a general, predictive theory of the biological consequences of climate change. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Totani, Tomonori
2017-10-01
In standard general relativity the universe cannot be started with arbitrary initial conditions, because four of the ten components of Einstein's field equations (EFE) are constraints on initial conditions. In previous work it was proposed to extend the gravity theory to allow free initial conditions, with the motivation of solving the cosmological constant problem. This was done by setting four constraints on metric variations in the action principle, which is reasonable because gravity's physical degrees of freedom are at most six. However, this theory has two problems: the three constraints in addition to the unimodular condition were introduced without clear physical meaning, and flat Minkowski spacetime is unstable against perturbations. Here a new set of gravitational field equations is derived by replacing the three constraints with new ones requiring that geodesic paths remain geodesic under metric variations. The instability problem is then naturally solved. Implications for the cosmological constant Λ are unchanged; the theory converges to the EFE with nonzero Λ through inflation, but Λ varies on scales much larger than the present Hubble horizon. Galaxies then form only in small-Λ regions, and the cosmological constant problem is solved by the anthropic argument. Because of the increased degrees of freedom in metric dynamics, the theory predicts new non-oscillatory modes of metric anisotropy generated by quantum fluctuations during inflation, and CMB B-mode polarization would be observed to differ from the standard predictions of general relativity.
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Ohayon, Roger
1991-01-01
A general three-field variational principle is obtained for the motion of an acoustic fluid enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. This principle contains a free parameter alpha. Semidiscrete finite-element equations of motion based on this principle are displayed and applied to the transient response and free vibrations of the coupled fluid-structure problem. It is shown that a particular setting of alpha yields a rich set of formulations that can be customized to fit physical and computational requirements. The variational principle is then extended to handle slosh motions in a uniform gravity field, and used to derive semidiscrete equations of motion that account for such effects.
A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem
2013-02-01
obtained by using the block Gauss-Seidel iterative method. To show the convergence of the iterative method, we define the error between two...models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss-Seidel iterative method depends on the...variational approach. A block Gauss-Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where
NASA Astrophysics Data System (ADS)
Zeng, Shengda; Migórski, Stanisław
2018-03-01
In this paper a class of elliptic hemivariational inequalities involving the time-fractional order integral operator is investigated. Exploiting the Rothe method and using the surjectivity of multivalued pseudomonotone operators, a result on the existence of a solution to the problem is established. This abstract result is then applied to provide a theorem on the weak solvability of a fractional viscoelastic contact problem. The process is quasistatic and the constitutive relation is modeled with the fractional Kelvin-Voigt law. The friction and contact conditions are described by the Clarke generalized gradient of nonconvex and nonsmooth functionals. The variational formulation of this problem leads to a fractional hemivariational inequality.
Variational asymptotic modeling of composite dimensionally reducible structures
NASA Astrophysics Data System (ADS)
Yu, Wenbin
A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model that is as asymptotically correct as possible was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis, as well as recovery relations to approximately express the three-dimensional results. The known fact that there exists more than one theory that is asymptotically correct to a given order is exploited to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed.
The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and shells. Numerical results are compared with the exact solutions, and the excellent agreement proves that one can use VAPAS to analyze composite plates and shells efficiently and accurately. In conclusion, rigorous modeling approaches were developed for composite beams, plates and shells within a general framework. No such consistent and general treatment is found in the literature. The associated computer programs VABS and VAPAS are envisioned to have many applications in industry.
A Study of Intonation in the Soccer Results.
ERIC Educational Resources Information Center
Bonnet, G.
1980-01-01
Reports a study which illustrates that a listener can anticipate the score of the opposing team in sports match results from the variation in the announcer's intonation. Investigates how reliable this prediction is and what linguistic features it involves. Relates these findings to general problems in intonation contour interpretation. (PMJ)
A survey of the theory of the Earth's rotation
NASA Technical Reports Server (NTRS)
Cannon, W. H.
1981-01-01
The theory of the Earth's rotation and the geophysical phenomena affecting it is examined. First principles are reviewed and the problem of polar motion and UT1 variations is formulated in considerable generality and detail. The effects of Earth deformations and the solid Earth tides are analyzed.
Fiestas, Fabian; Radovanovic, Mirjana; Martins, Silvia S; Medina-Mora, Maria E; Posada-Villa, Jose; Anthony, James C
2010-03-23
Epidemiological studies show wide variability in the occurrence of cannabis smoking and related disorders across countries. This study aims to estimate cross-national variation in cannabis users' experience of clinically significant cannabis-related problems in three countries of the Americas, with a focus on cannabis users who may have tried alcohol or tobacco, but who have not used cocaine, heroin, LSD, or other internationally regulated drugs. Data are from the World Mental Health Surveys Initiative and the National Latino and Asian American Study, with probability samples in Mexico (n = 4426), Colombia (n = 5,782) and the United States (USA; n = 8,228). The samples included 212 'cannabis only' users in Mexico, 260 in Colombia and 1,724 in the USA. Conditional GLM with GEE and 'exact' methods were used to estimate variation in the occurrence of clinically significant problems in cannabis only (CO) users across these surveyed populations. The experience of cannabis-related problems was quite infrequent among CO users in these countries, with weighted frequencies ranging from 1% to 5% across survey populations, and with no appreciable cross-national variation in general. CO users in Colombia proved to be an exception. As compared to CO users in the USA, the Colombia smokers were more likely to have experienced cannabis-associated 'social problems' (odds ratio, OR = 3.0; 95% CI = 1.4, 6.3; p = 0.004) and 'legal problems' (OR = 9.7; 95% CI = 2.7, 35.2; p = 0.001). This study's most remarkable finding may be the similarity in occurrence of cannabis-related problems in this cross-national comparison within the Americas. Wide cross-national variations in estimated population-level cumulative incidence of cannabis use disorders may be traced to large differences in cannabis smoking prevalence, rather than qualitative differences in cannabis experiences. 
More research is needed to identify conditions that might make cannabis-related social and legal problems more frequent in Colombia than in the USA.
Towards a richer evolutionary game theory
McNamara, John M.
2013-01-01
Most examples of the application of evolutionary game theory to problems in biology involve highly simplified models. I contend that it is time to move on and include much more richness in models. In particular, more thought needs to be given to the importance of (i) between-individual variation; (ii) the interaction between individuals, and hence the process by which decisions are reached; (iii) the ecological and life-history context of the situation; (iv) the traits that are under selection, and (v) the underlying psychological mechanisms that lead to behaviour. I give examples where including variation between individuals fundamentally changes predicted outcomes of a game. Variation also selects for real-time responses, again resulting in changed outcomes. Variation can select for other traits, such as choosiness and social sensitivity. More generally, many problems involve coevolution of more than one trait. I identify situations where a reductionist approach, in which a game is isolated from its ecological setting, can be misleading. I also highlight the need to consider flexibility of behaviour, mental states and other issues concerned with the evolution of mechanism. PMID:23966616
Hawks, Zoë W; Marrus, Natasha; Glowinski, Anne L; Constantino, John N
2018-03-16
Previous research has suggested that behavioral comorbidity is the rule rather than the exception in autism. The present study aimed to trace the respective origins of autistic and general psychopathologic traits, and their association, to infancy. Measurements of autistic traits and early liability for general psychopathology were assessed in 314 twins at 18 months, ascertained from the general population using birth records. 222 twins were re-evaluated at 36 months. Standardized ratings of variation in social communication at 18 months were highly heritable and strongly predicted autistic trait scores at 36 months. These early indices of autistic liability were independent from contemporaneous ratings of behavior problems on the Brief Infant-Toddler Social and Emotional Assessment (which were substantially environmentally influenced), and did not meaningfully predict internalizing or externalizing scores on the Achenbach Scales of Empirically Based Assessment at 36 months. In this general population infant twin study, variation in social communication was independent from variation in other domains of general psychopathology, and exhibited a distinct genetic structure. The commonly observed comorbidity of specific psychiatric syndromes with autism may arise from subsequent interactions between autistic liability and independent susceptibilities to other psychopathologic traits, suggesting opportunities for preventive amelioration of outcomes of these interactions over the course of development.
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved.
For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the properties of energy and mass balance postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and by Integrating projects of SD RAS No. 8 and 35. Our studies are in line with the goals of COST Action ES1004. References: 1. Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, pp. 319-330. 2. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. 3. Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
Wang, Zhen; Li, Ru; Yu, Guolin
2017-01-01
In this work, several extended approximately invex vector-valued functions of higher order involving a generalized Jacobian are introduced, and some examples are presented to illustrate their existence. The notions of higher-order (weak) quasi-efficiency with respect to a function are proposed for a multi-objective programming problem. Under the introduced generalized higher-order approximate invexity assumptions, we prove that the solutions of generalized vector variational-like inequalities in terms of the generalized Jacobian are generalized quasi-efficient solutions of nonsmooth multi-objective programming problems. Moreover, equivalent conditions are presented; namely, a vector critical point is a weakly quasi-efficient solution of higher order with respect to a function.
NASA Astrophysics Data System (ADS)
Ladiges, Daniel R.; Sader, John E.
2018-05-01
Nanomechanical resonators and sensors, operated in ambient conditions, often generate low-Mach-number oscillating rarefied gas flows. Cercignani [C. Cercignani, J. Stat. Phys. 1, 297 (1969), 10.1007/BF01007482] proposed a variational principle for the linearized Boltzmann equation, which can be used to derive approximate analytical solutions of steady (time-independent) flows. Here we extend and generalize this principle to unsteady oscillatory rarefied flows and thus accommodate resonating nanomechanical devices. This includes a mathematical approach that facilitates its general use and allows for systematic improvements in accuracy. This formulation is demonstrated for two canonical flow problems: oscillatory Couette flow and Stokes' second problem. Approximate analytical formulas giving the bulk velocity and shear stress, valid for arbitrary oscillation frequency, are obtained for Couette flow. For Stokes' second problem, a simple system of ordinary differential equations is derived which may be solved to obtain the desired flow fields. Using this framework, a simple and accurate formula is provided for the shear stress at the oscillating boundary, again for arbitrary frequency, which may prove useful in application. These solutions are easily implemented on any symbolic or numerical package, such as Mathematica or MATLAB, facilitating the characterization of flows produced by nanomechanical devices and providing insight into the underlying flow physics.
Wetting of flat gradient surfaces.
Bormashenko, Edward
2018-04-01
Gradient, chemically modified, flat surfaces enable directed transport of droplets. Calculation of apparent contact angles inherent to gradient surfaces is challenging even for atomically flat ones. Wetting of gradient, flat solid surfaces is treated within the variational approach, under which the contact line is free to move along the substrate. Transversality conditions of the variational problem give rise to the generalized Young equation valid for gradient solid surfaces. The apparent (equilibrium) contact angle of a droplet placed on a gradient surface depends on the radius of the contact line and the values of the derivatives of the interfacial tensions. The linear approximation of the problem is considered. It is demonstrated that contact angle hysteresis is inevitable on gradient surfaces. Electrowetting of gradient surfaces is discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Cyclical Dynamics in Idiosyncratic Labor Market Risk.
ERIC Educational Resources Information Center
Storesletten, Kjetil; Telmer, Chris I.; Yaron, Amir
2004-01-01
Is individual labor income more risky in recessions? This is a difficult question to answer because existing panel data sets are so short. To address this problem, we develop a generalized method of moments estimator that conditions on the macroeconomic history that each member of the panel has experienced. Variation in the cross-sectional…
Highly accurate symplectic element based on two variational principles
NASA Astrophysics Data System (ADS)
Qing, Guanghui; Tian, Jia
2018-02-01
To meet the stability requirements of numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experiments show that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its more accurate stresses.
Eisenberg, Daniel; Hunt, Justin; Speer, Nicole
2013-01-01
We estimated the prevalence and correlates of mental health problems among college students in the United States. In 2007 and 2009, we administered online surveys with brief mental health screens to random samples of students at 26 campuses nationwide. We used sample probability weights to adjust for survey nonresponse. A total of 14,175 students completed the survey, corresponding to a 44% participation rate. The prevalence of positive screens was 17.3% for depression, 4.1% for panic disorder, 7.0% for generalized anxiety, 6.3% for suicidal ideation, and 15.3% for nonsuicidal self-injury. Mental health problems were significantly associated with sex, race/ethnicity, religiosity, relationship status, living on campus, and financial situation. The prevalence of conditions varied substantially across the campuses, although campus-level variation was still a small proportion of overall variation in student mental health. The findings offer a starting point for identifying individual and contextual factors that may be useful to target in intervention strategies.
Mathematical theory of a relaxed design problem in structural optimization
NASA Technical Reports Server (NTRS)
Kikuchi, Noboru; Suzuki, Katsuyuki
1990-01-01
Various attempts have been made to construct a rigorous mathematical theory of optimization for the size, shape, and topology (i.e., layout) of an elastic structure. If these are represented by a finite number of parametric functions, as Armand described, it is possible to construct an existence theory of the optimum design using a compactness argument in a finite dimensional design space or a closed admissible set of a finite dimensional design space. However, if the admissible design set is a subset of a non-reflexive Banach space such as L(sup infinity)(Omega), constructing the existence theory of the optimum design suddenly becomes difficult and requires extending (i.e., generalizing) the design problem to a much wider class of designs that is compatible with the mechanics of structures in the sense of the variational principle. Starting from the study by Cheng and Olhoff, Lurie, Cherkaev, and Fedorov introduced a new concept of convergence of design variables in a generalized sense and constructed the 'G-Closure' theory of an extended (relaxed) optimum design problem. A similar, though largely independent, attempt can also be found in Kohn and Strang, in which the shape and topology optimization problem is relaxed to allow the use of perforated composites rather than restricting it to the usual solid structures. An identical idea is also stated in Murat and Tartar using the notion of homogenization theory. That is, by introducing the possibility of micro-scale perforation together with the theory of homogenization, the optimum design problem is relaxed so that its mathematical theory can be constructed. It is also noted that this type of relaxed design problem is perfectly matched to the variational principle in structural mechanics.
Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation-specific problems. Volume 2 describes generalizations of those formulations.
Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction
NASA Astrophysics Data System (ADS)
Mons, Vincent; Wang, Qi; Zaki, Tamer
2017-11-01
Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging for several reasons. First, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Second, in practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting robustness from the former and ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
Incremental analysis of large elastic deformation of a rotating cylinder
NASA Technical Reports Server (NTRS)
Buchanan, G. R.
1976-01-01
The effect of finite deformation upon a rotating, orthotropic cylinder was investigated using a general incremental theory. The incremental equations of motion are developed using the variational principle. The governing equations are derived using the principle of virtual work for a body with initial stress. The governing equations are reduced to those for the title problem and a numerical solution is obtained using finite difference approximations. Since the problem is defined in terms of one independent space coordinate, the finite difference grid can be modified as the incremental deformation occurs without serious numerical difficulties. The nonlinear problem is solved incrementally by totaling a series of linear solutions.
ERIC Educational Resources Information Center
Earl, Boyd L.
2008-01-01
A general result for the integrals of the Gaussian function over the harmonic oscillator wavefunctions is derived using generating functions. Using this result, an example problem of a harmonic oscillator with various Gaussian perturbations is explored in order to compare the results of precise numerical solution, the variational method, and…
Psychological Balance in High Level Athletes: Gender-Based Differences and Sport-Specific Patterns
Schaal, Karine; Tafflet, Muriel; Nassif, Hala; Thibault, Valérie; Pichard, Capucine; Alcotte, Mathieu; Guillet, Thibaut; El Helou, Nour; Berthelot, Geoffroy; Simon, Serge; Toussaint, Jean-François
2011-01-01
Objectives: Few epidemiological studies have focused on the psychological health of high level athletes. This study aimed to identify the principal psychological problems encountered within French high level athletes, and the variations in their prevalence based on sex and the sport practiced. Methods: Multivariate analyses were conducted on nationwide data obtained from the athletes' yearly psychological evaluations. Results: A representative sample of 13% of the French athlete population was obtained. 17% of athletes have at least one ongoing or recent disorder, generalized anxiety disorder (GAD) being the most prevalent (6%), followed by non-specific eating disorders (4.2%). Overall, 20.2% of women had at least one psychopathology, against 15.1% in men. This female predominance applied to anxiety and eating disorders, depression, sleep problems and self-harming behaviors. The highest rates of GAD appeared in aesthetic sports (16.7% vs. 6.8% in other sports for men and 38.9% vs. 10.3% for women); the lowest prevalence was found in high risk sports athletes (3.0% vs. 3.5%). Eating disorders are most common among women in racing sports (14% vs. 9%), but for men were found mostly in combat sports (7% vs. 4.8%). Discussion: This study highlights important differences in psychopathology between male and female athletes, demonstrating that the many sex-based differences reported in the general population apply to elite athletes. While the prevalence of psychological problems is no higher than in the general population, the variations in psychopathology in different sports suggest that specific constraints could influence the development of some disorders. PMID:21573222
Development of a Test Facility for Air Revitalization Technology Evaluation
NASA Technical Reports Server (NTRS)
Lu, Sao-Dung; Lin, Amy; Campbell, Melissa; Smith, Frederick
2006-01-01
Gain-Scheduled Fault Tolerance Control Under False Identification
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine (Technical Monitor)
2006-01-01
An active fault tolerant control (FTC) law is generally sensitive to false identification since the control gain is reconfigured for fault occurrence. In the conventional FTC law design procedure, dynamic variations due to false identification are not considered. In this paper, an FTC synthesis method is developed in order to consider possible variations of closed-loop dynamics under false identification into the control design procedure. An active FTC synthesis problem is formulated into an LMI optimization problem to minimize the upper bound of the induced-L2 norm which can represent the worst-case performance degradation due to false identification. The developed synthesis method is applied for control of the longitudinal motions of FASER (Free-flying Airplane for Subscale Experimental Research). The designed FTC law of the airplane is simulated for pitch angle command tracking under a false identification case.
Generalized self-adjustment method for statistical mechanics of composite materials
NASA Astrophysics Data System (ADS)
Pan'kov, A. A.
1997-03-01
A new method is developed for the statistical mechanics of composite materials — the generalized self-adjustment method — which makes it possible to reduce the problem of predicting the effective elastic properties of composites with random structures to the solution of two simpler "averaged" problems of an inclusion with transitional layers in a medium with the desired effective elastic properties. The inhomogeneous elastic properties and dimensions of the transitional layers take into account both the approximate order of mutual positioning and the variation in the dimensions and elastic properties of inclusions, through appropriate special averaged indicator functions of the random structure of the composite. A numerical calculation of averaged indicator functions and effective elastic characteristics is performed by the generalized self-adjustment method for a unidirectional fiberglass on the basis of various models of actual random structures in the plane of isotropy.
NASA Astrophysics Data System (ADS)
Pan'kov, A. A.
1997-05-01
The feasibility of using a generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures has been examined. Using this method, the problem is reduced to the solution of simpler special averaged problems for composites with single inclusions and corresponding transition layers in the medium examined. The dimensions of the transition layers are defined by the correlation radii of the random structure of the composite, while the heterogeneous elastic properties of the transition layers account for the probabilities of variation in the size and configuration of the inclusions using averaged special indicator functions. Results are given for a numerical calculation of the averaged indicator functions and an analysis of the effect of micropores in the matrix-fiber interface region on the effective elastic properties of unidirectional fiberglass-epoxy using the generalized self-consistent method, and are compared with experimental data and reported solutions.
Partitioning technique for open systems
NASA Astrophysics Data System (ADS)
Brändas, Erkki J.
2010-11-01
The focus of the present contribution is essentially confined to three research areas pursued during the author's terms as visiting (assistant, associate and full) professor at the University of Florida's Quantum Theory Project, QTP. The first two topics relate to perturbation theory and spectral theory for self-adjoint operators in Hilbert space. The third subject concerns analytic extensions to non-self-adjoint problems, where particular consequences of the occurrence of continuous energy spectra are examined. In these studies, general partitioning methods serve as a unifying framework for perturbation, variational and general matrix theory. In addition, we follow up associated inferences for the time-dependent problem, as well as recent results and conclusions of a rather general yet surprising character. Although the author spent most of his time at QTP during visits in the 1970s and 1980s, collaborations with department members and shorter stays continued through later decades. The account must therefore be somewhat fragmentary, yet it is hoped that it is sufficiently self-contained to be realistic and constructive.
Improving the quality of mass produced maps
Simley, J.
2001-01-01
Quality is critical in cartography because key decisions are often made based on the information the map communicates. The mass production of digital cartographic information to support geographic information science has now added a new dimension to the problem of cartographic quality, as problems once limited to small volumes can now proliferate in mass production programs. These problems can also affect the economics of map production by diverting a sizeable portion of production cost to pay for rework on maps with poor quality. Such problems are common to general industry; in response, the quality engineering profession has developed a number of successful methods to overcome them. Two important methods are the reduction of error through statistical analysis and attention to the quality environment in which people work. Once initial and obvious quality problems have been solved, outside influences periodically appear that cause adverse variations in quality and consequently increase production costs. Such errors can be difficult to detect before the customer is affected. However, a number of statistical techniques can be employed to detect variation so that the problem is eliminated before significant damage is caused. Additionally, the environment in which the workforce operates must be conducive to quality. Managers have a powerful responsibility to create this environment. Two sets of guidelines, known as Deming's Fourteen Points and ISO-9000, provide models for this environment.
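The statistical techniques alluded to above, detecting adverse variation before it reaches the customer, are classically realized with control charts. A minimal X-bar chart sketch; the k = 3 limits and the within-subgroup spread estimate are conventional textbook choices, not anything prescribed by this article.

```python
import numpy as np

def xbar_chart(samples, k=3.0):
    """Minimal Shewhart-style X-bar chart: estimate the process spread from
    within-subgroup standard deviations, then flag subgroups whose mean falls
    outside grand_mean +/- k * sigma / sqrt(n).

    samples: array of shape (n_subgroups, n) -- one row per production subgroup.
    Returns (lower_limit, upper_limit, indices of out-of-control subgroups)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.shape[1]
    center = samples.mean()
    sigma = samples.std(axis=1, ddof=1).mean()   # pooled within-subgroup spread
    half_width = k * sigma / np.sqrt(n)
    means = samples.mean(axis=1)
    flagged = np.flatnonzero(np.abs(means - center) > half_width)
    return center - half_width, center + half_width, flagged
```

Estimating the spread within subgroups (rather than across the whole data set) is the standard design choice: it keeps a shifted subgroup from inflating the limits that are supposed to detect it.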
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, GT in the legend stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
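To make the regularization discussion concrete, here is a minimal 1D total-variation denoising sketch using gradient descent on a smoothed TV functional. This illustrates plain TV only, not the article's TGV term or its FEM setting; the smoothing parameter `eps`, the weight `lam` and the step size are illustrative assumptions chosen for stability.

```python
import numpy as np

def tv_denoise_1d(f, lam=1.0, eps=0.1, steps=3000, lr=0.04):
    """Denoise a 1D signal by gradient descent on the smoothed TV functional
        0.5 * ||u - f||^2  +  lam * sum_i sqrt((u[i+1] - u[i])^2 + eps^2).
    eps smooths the non-differentiable TV term; lr must satisfy roughly
    lr < 2 / (1 + 4 * lam / eps) for the iteration to be stable."""
    u = np.asarray(f, dtype=float).copy()
    for _ in range(steps):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps**2)   # derivative of the smoothed |du|
        grad = u - f
        grad[:-1] -= lam * w               # contribution w.r.t. the left node
        grad[1:] += lam * w                # contribution w.r.t. the right node
        u -= lr * grad
    return u
```

On a noisy piecewise-constant signal, the result suppresses noise in the flat regions while keeping the jump, which is exactly the behavior (and, in 2D, the staircase tendency) the TGV modification is designed to refine.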
Existence of weak solutions to degenerate p-Laplacian equations and integral formulas
NASA Astrophysics Data System (ADS)
Chua, Seng-Kee; Wheeden, Richard L.
2017-12-01
We study the problem of solving some general integral formulas and then apply the conclusions to obtain results about the existence of weak solutions of various degenerate p-Laplacian equations. We adapt Variational Calculus methods and the Mountain Pass Lemma without the Palais-Smale condition, and we use an abstract version of Lions' Concentration Compactness Principle II.
On the origin of fusiform rust resistance in loblolly pine
R.C. Schmidtling; C.D. Nelson; T.L. Kubisiak
2005-01-01
Studies of geographic variation in loblolly pine have shown that seed sources from the western (generally west of the Mississippi River) and the northeastern part of the natural distribution are relatively resistant to fusiform rust disease, while those from elsewhere are more susceptible. The greatest problem with rust infection, on the other hand, is in the center of...
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
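The local limit mentioned above, the non-local operator converging to the classical one as the horizon shrinks, can be checked with a toy 1D sketch. The uniform influence function and the second-moment normalization below are illustrative choices for a periodic grid, not the paper's response function or Green's-function machinery.

```python
import numpy as np

def nonlocal_laplacian(u, dx, m):
    """Non-local (peridynamic-style) diffusion operator on a periodic 1D grid:
    a sum of differences u(y) - u(x) over a horizon of m grid points with a
    uniform influence function, normalized by the discrete second moment so
    that it reproduces the classical u'' as the horizon shrinks."""
    out = np.zeros_like(u, dtype=float)
    moment = 0.0
    for k in range(1, m + 1):
        # each +/-k pair is approximately (k*dx)^2 * u''(x) for smooth u
        out += np.roll(u, -k) + np.roll(u, k) - 2.0 * u
        moment += (k * dx) ** 2
    return out / moment
```

Applied to sin(x), whose classical Laplacian is -sin(x), the non-local operator agrees to within a horizon-dependent error that vanishes as m*dx goes to zero, mirroring the convergence result stated in the abstract.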
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
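A Wiener-process degradation path with a time-scale transformation Λ(t), the standard device that covers both the linear and transformed-time cases discussed above, is easy to simulate and to fit for the drift. A sketch under the common form X(t) = μΛ(t) + σB(Λ(t)); the power-law Λ and the parameter values in the usage below are illustrative assumptions, not the paper's model or data.

```python
import numpy as np

def simulate_path(mu, sigma, times, Lam=lambda t: t, rng=None):
    """One degradation path X(t) = mu*Lam(t) + sigma*B(Lam(t)), where Lam is a
    monotone time-scale transformation (Lam(t) = t recovers the linear model).
    Increments over transformed time are independent Gaussians."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.asarray([Lam(t) for t in times], dtype=float)
    ds = np.diff(np.concatenate(([0.0], s)))
    inc = mu * ds + sigma * np.sqrt(ds) * rng.standard_normal(len(ds))
    return np.cumsum(inc)

def mle_drift(end_values, s_end):
    """MLE of the drift from terminal values of independent paths:
    X_i(T) ~ Normal(mu * Lam(T), sigma^2 * Lam(T)),
    hence mu_hat = mean(X_i(T)) / Lam(T)."""
    return np.mean(end_values) / s_end
```

With a nonlinear Λ the path is curved in real time but linear in transformed time, which is why the same Gaussian-increment machinery covers both the linear and nonlinear ADT cases.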
Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan
Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms have been placed as leaders among the candidates for the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states as well as reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
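The hybrid loop at the heart of the variational quantum eigensolver (a parameterized state prepared on the quantum device, its energy estimated by measurement, the parameters updated classically) can be caricatured entirely classically for a real 2x2 Hamiltonian. The one-parameter ansatz and the brute-force outer loop below are illustrative stand-ins for the quantum circuit and the classical optimizer, not the algorithm of the abstract.

```python
import numpy as np

def vqe_classical_sketch(H, n_grid=100001):
    """Classical caricature of VQE for a real symmetric 2x2 Hamiltonian:
    the ansatz |psi(t)> = [cos t, sin t] spans every real unit vector (up to
    sign), and a brute-force scan over t minimizes the energy <psi|H|psi>,
    so the minimum coincides with the true ground-state energy."""
    thetas = np.linspace(0.0, np.pi, n_grid)
    psis = np.stack([np.cos(thetas), np.sin(thetas)])      # shape (2, n_grid)
    energies = np.einsum('in,ij,jn->n', psis, H, psis)     # <psi|H|psi> per t
    k = np.argmin(energies)
    return energies[k], psis[:, k]
```

In a real VQE the energy evaluation is the quantum step (repeated measurement of the prepared state) and the scan is replaced by a gradient-free or gradient-based classical optimizer; the variational inequality E(θ) ≥ E_ground is what both versions exploit.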
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Coupled Structural, Thermal, Phase-Change and Electromagnetic Analysis for Superconductors. Volume 1
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase-change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. This volume, Volume 1, describes mostly formulations for specific problems. Volume 2 describes generalizations of those formulations.
Symmetric Trajectories for the 2N-Body Problem with Equal Masses
NASA Astrophysics Data System (ADS)
Terracini, Susanna; Venturelli, Andrea
2007-06-01
We consider the problem of 2N bodies of equal masses in R^3 for the Newtonian-like weak-force potential r^{-σ}, and we prove the existence of a family of collision-free nonplanar and nonhomographic symmetric solutions that are periodic modulo rotations. In addition, the rotation number with respect to the vertical axis ranges in a suitable interval. These solutions have the hip-hop symmetry, a generalization of that introduced in [19], for the case of many bodies and taking account of a topological constraint. The argument exploits the variational structure of the problem, and is based on the minimization of the Lagrangian action on a given class of paths.
Cope, Anwen L; Wood, Fiona; Francis, Nick A; Chestnutt, Ivor G
2015-01-01
Objectives: This study aimed to produce an account of the attitudes of general practitioners (GPs) towards the management of dental conditions in general practice, and sought to explore how GPs use antibiotics in the treatment of dental problems. Design: Qualitative study employing semistructured telephone interviews and thematic analysis. Participants: 17 purposively sampled GPs working in Wales, of whom 9 were male. The median number of years since graduation was 21. Maximum variation sampling techniques were used to ensure participants represented different rural-urban localities, worked in communities with varying levels of deprivation, and had differing lengths of practising career. Results: Most GPs reported regularly managing dental problems, with more socioeconomically deprived patients being particularly prone to consult. Participants recognised that dental problems are not optimally managed in general practice, but had sympathy with patients experiencing dental pain who reported difficulty obtaining an emergency dental consultation. Many GPs considered antibiotics an acceptable first-line treatment for acute dental problems and reported that patients often attended expecting to receive antibiotics. GPs who reported that their usual practice was to prescribe antibiotics were more likely to prioritise patients' immediate needs, whereas clinicians who reported rarely prescribing often did so to encourage patients to consult a dental professional. Conclusions: The presentation of patients with dental problems presents challenges to GPs, who report concerns about their ability to manage such conditions. Despite this, many reported frequently prescribing antibiotics for patients with dental conditions. This may contribute to both patient morbidity and the emergence of antimicrobial resistance.
This research has identified the need for quantitative data on general practice consultations for dental problems and qualitative research exploring patient perspectives on reasons for consulting. The findings of these studies will inform the design of an intervention to support patients in accessing appropriate care when experiencing dental problems. PMID:26428331
Thermodynamic limit of random partitions and dispersionless Toda hierarchy
NASA Astrophysics Data System (ADS)
Takasaki, Kanehisa; Nakatsu, Toshio
2012-01-01
We study the thermodynamic limit of random partition models for the instanton sum of 4D and 5D supersymmetric U(1) gauge theories deformed by some physical observables. The physical observables correspond to external potentials in the statistical model. The partition function is reformulated in terms of the density function of Maya diagrams. The thermodynamic limit is governed by a limit shape of Young diagrams associated with dominant terms in the partition function. The limit shape is characterized by a variational problem, which is further converted to a scalar-valued Riemann-Hilbert problem. This Riemann-Hilbert problem is solved with the aid of a complex curve, which may be thought of as the Seiberg-Witten curve of the deformed U(1) gauge theory. This solution of the Riemann-Hilbert problem is identified with a special solution of the dispersionless Toda hierarchy that satisfies a pair of generalized string equations. The generalized string equations for the 5D gauge theory are shown to be related to hidden symmetries of the statistical model. The prepotential and the Seiberg-Witten differential are also considered.
Cointegration and why it works for SHM
NASA Astrophysics Data System (ADS)
Cross, Elizabeth J.; Worden, Keith
2012-08-01
One of the most fundamental problems in Structural Health Monitoring (SHM) is that of projecting out operational and environmental variations from measured feature data. The reason for this is that algorithms used for SHM to detect changes in structural condition should not raise alarms if the structure of interest changes because of benign operational or environmental variations. This is sometimes called the data normalisation problem. Many solutions to this problem have been proposed over the years, but a new approach that uses cointegration, a concept from the field of econometrics, appears to provide a very promising solution. The theory of cointegration is mathematically complex, and its use rests on a number of assumptions about the time series to which it is applied. An interesting observation that has emerged from its applications to SHM data is that the approach works very well even though the aforementioned assumptions do not hold in general. The objective of the current paper is to discuss how the cointegration assumptions break down individually in the context of SHM and to explain why this does not invalidate the application of the algorithm.
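The data-normalisation idea can be sketched with the first (regression) step of the Engle-Granger procedure: regress one monitored feature on another and track the residual, which stays stationary while both features wander with the shared environmental trend. A minimal least-squares sketch; the synthetic common trend in the usage below is an illustrative assumption, not SHM data from the paper.

```python
import numpy as np

def cointegration_residual(y, x):
    """Step 1 of an Engle-Granger-style cointegration analysis: fit
    y ~ a + b*x by least squares and return the residual. If y and x share a
    common (e.g. environmental) trend, the fitted combination removes it, so
    the residual should be stationary; a shift in the residual then flags
    structural change rather than benign environmental variation."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef
```

On two features driven by the same slow trend, the residual has a far smaller spread than either raw feature, which is exactly the property SHM exploits for alarm thresholds.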
ERIC Educational Resources Information Center
Sun, Xuhua
2011-01-01
This article deals with the roles of variation problems ("one problem multiple solution" and "one problem multiple changes") as used in Chinese textbooks. It is argued that variation problems as an "indigenous" Chinese practice aim to discern and to compare the invariant feature of the relationship among concepts and…
Markham, Francis; Young, Martin; Doran, Bruce; Sugden, Mark
2017-05-23
Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association, this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by the models (I² ≥ 0.97; R² ≤ 0.01).
The present study adds to the weight of evidence that EGM losses are associated with the prevalence of problem gambling. No patterns were evident among moderate-risk problem gambling prevalence estimates, suggesting that this measure is either subject to pronounced measurement error or lacks construct validity. The high degree of residual heterogeneity raises questions about the validity of comparing problem gambling prevalence estimates, even after adjusting for methodological variations between studies.
Problem-Based Learning: Instructor Characteristics, Competencies, and Professional Development
2011-01-01
cognitive learning objectives addressed by student-centered instruction. For instance, experiential learning, a variation of which is used at the… based learning in grade school science or mathematics. However, the measures could be modified to focus on adult PBL (or student-centered learning… student-centered learning methods, the findings should generalize across instructional methods of interest to the Army. Further research is required
Ground-water quality in selected areas of Wisconsin
Hindall, S.M.
1979-01-01
Analysis of 2,071 ground-water samples from 970 wells throughout Wisconsin indicates large variations in ground-water quality. Ground water in Wisconsin is generally suitable for most uses, but in some areas concentrations of chemical constituents exceed recommended drinking-water standards. Iron, manganese, and nitrate commonly exceed recommended drinking-water standards, and dissolved solids, sulfate, heavy metals, and phenolic materials may present local problems. (USGS)
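Screening such analyses against recommended limits is a one-liner in code. A sketch only; the constituent names and example limits in the usage below (iron 0.3 mg/L, manganese 0.05 mg/L, nitrate 10 mg/L, roughly the familiar U.S. drinking-water guidance values) are illustrative assumptions, not values taken from this report.

```python
def exceedances(sample_mg_per_L, limits_mg_per_L):
    """Return the constituents in a water-sample analysis whose concentration
    exceeds its recommended drinking-water limit (all values in mg/L).
    Constituents without a listed limit are ignored."""
    return {name: conc
            for name, conc in sample_mg_per_L.items()
            if name in limits_mg_per_L and conc > limits_mg_per_L[name]}

# Illustrative usage with hypothetical sample values:
limits = {"iron": 0.3, "manganese": 0.05, "nitrate": 10.0}
sample = {"iron": 0.8, "manganese": 0.02, "sulfate": 40.0}
flagged = exceedances(sample, limits)
```

Applied per well, this yields exactly the kind of local-problem map the survey describes: which constituents exceed standards, and where.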
NASA Technical Reports Server (NTRS)
Achtemeier, G. L.
1986-01-01
Since late 1982 NASA has supported research to develop a numerical variational model for the diagnostic assimilation of conventional and space-based meteorological data. In order to analyze the model components, four variational models are defined, dividing the problem naturally according to increasing complexity. The first of these variational models (MODEL I), the subject of this report, contains the two nonlinear horizontal momentum equations, the integrated continuity equation, and the hydrostatic equation. This report summarizes the results of research (1) to improve the way the large nonmeteorological parts of the pressure gradient force are partitioned between the two pressure gradient force terms of the horizontal momentum equations, (2) to generalize the integrated continuity equation to account for variable pressure thickness over elevated terrain, and (3) to introduce horizontal variation in the precision modulus weights for the observations.
Completed Beltrami-Michell formulation for analyzing mixed boundary value problems in elasticity
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Kaljevic, Igor; Hopkins, Dale A.; Saigal, Sunil
1995-01-01
In elasticity, the method of forces, wherein stress parameters are considered as the primary unknowns, is known as the Beltrami-Michell formulation (BMF). The existing BMF can only solve stress boundary value problems; it cannot handle the more prevalent displacement and mixed boundary value problems of elasticity. Therefore, this formulation, which has restricted application, could not become a true alternative to Navier's displacement method, which can solve all three types of boundary value problems. The restrictions in the BMF have been alleviated by augmenting the classical formulation with a novel set of conditions identified as the boundary compatibility conditions. This new method, which completes the classical force formulation, has been termed the completed Beltrami-Michell formulation (CBMF). The CBMF can solve general elasticity problems with stress, displacement, and mixed boundary conditions in terms of stresses as the primary unknowns. The CBMF is derived from the stationary condition of the variational functional of the integrated force method. In the CBMF, stresses for kinematically stable structures can be obtained without any reference to the displacements either in the field or on the boundary. This paper presents the CBMF and its derivation from the variational functional of the integrated force method. Several examples are presented to demonstrate the applicability of the completed formulation for analyzing mixed boundary value problems under thermomechanical loads. Selected example problems include a cylindrical shell wherein membrane and bending responses are coupled, and a composite circular plate.
Total Mean Curvature, Scalar Curvature, and a Variational Analog of Brown-York Mass
NASA Astrophysics Data System (ADS)
Mantoulidis, Christos; Miao, Pengzi
2017-06-01
We study the supremum of the total mean curvature on the boundary of compact, mean-convex 3-manifolds with nonnegative scalar curvature, and a prescribed boundary metric. We establish an additivity property for this supremum and exhibit rigidity for maximizers assuming the supremum is attained. When the boundary consists of 2-spheres, we demonstrate that the finiteness of the supremum follows from the previous work of Shi-Tam and Wang-Yau on the quasi-local mass problem in general relativity. In turn, we define a variational analog of Brown-York quasi-local mass without assuming that the boundary 2-sphere has positive Gauss curvature.
Computational Relativistic Astrophysics Using the Flow Field-Dependent Variation Theory
NASA Technical Reports Server (NTRS)
Richardson, G. A.; Chung, T. J.
2002-01-01
We present our method for solving general relativistic nonideal hydrodynamics. Relativistic effects become pronounced in such cases as jet formation from black hole magnetized accretion disks which may lead to the study of gamma-ray bursts. Nonideal flows are present where radiation, magnetic forces, viscosities, and turbulence play an important role. Our concern in this paper is to reexamine existing numerical simulation tools as to the accuracy and efficiency of computations and introduce a new approach known as the flow field-dependent variation (FDV) method. The main feature of the FDV method consists of accommodating discontinuities of shock waves and high gradients of flow variables such as occur in turbulence and unstable motions. In this paper, the physics involved in the solution of relativistic hydrodynamics and solution strategies of the FDV theory are elaborated. The general relativistic astrophysical flow and shock solver (GRAFSS) is introduced, and some simple example problems for computational relativistic astrophysics (CRA) are demonstrated.
Pain in general practice. Pain as a cause of patient-doctor contact.
Frølund, F; Frølund, C
1986-05-01
In 1983, 26 general practitioners in a Danish provincial town made a week's survey of pain as the main cause of patient-doctor contact during the daytime. The population served was 45,000-50,000 persons of all ages. Coexistent pain which was not the cause of the actual contact was not recorded. Out of 2,886 contacts from all causes, 641 were due to pain (22%, or 222 per 1,000 contacts). The percentages for acute and chronic pain were 61 and 39, respectively. The commonest causes of pain were musculo-skeletal (50%), visceral including cardio-vascular (20%), infectious (15%), and headaches (8%). The overall female:male ratio was 1.5:1, but with considerable variation among the different pain categories. The ratios for acute and chronic pain were 1.4:1 and 1.8:1, respectively. About one hundred contacts were recorded as "problem cases", whose predominant complaints were low back pain, headaches, and visceral pain. Pain, especially chronic pain with a non-malignant cause, is a major problem in general practice. Essentially, pain is a primary health care problem, and research in this field should be encouraged.
How to Say How Much: Amounts and Stoichiometry
NASA Astrophysics Data System (ADS)
Ault, Addison
2001-10-01
This paper presents a concise and consistent pictorial representation of the ways by which chemists describe an amount of material and of the conversion factors by which these statements of amount can be translated into one another. The expressions of amount are moles, grams, milliliters of a pure liquid, liters of solution, liters of a gas at standard and nonstandard conditions, and number of particles. The paper then presents a visual representation or "map" for the solution of the typical stoichiometry problems discussed in general chemistry. You use the map for mole-to-mole and gram-to-gram calculations (or any combination of these), and for limiting reagent and percent yield problems. You can extend the method to reactions that involve solutions or gases and to titration problems. All stoichiometry problems are presented as variations on a central theme, and all problems are reduced to the same types of elementary steps.
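As a hedged illustration of the map's chained conversion factors, the sketch below walks one gram-to-gram path; the reaction and molar masses are our own example, not taken from the paper:

```python
# A minimal sketch of the "map" idea: stoichiometry as chained conversion
# factors. Example reaction (an assumption for illustration): 2 H2 + O2 -> 2 H2O

MOLAR_MASS = {"H2": 2.016, "O2": 31.998, "H2O": 18.015}  # g/mol

def grams_to_moles(grams, species):
    return grams / MOLAR_MASS[species]

def moles_to_moles(moles, coeff_from, coeff_to):
    # mole ratio taken from the balanced-equation coefficients
    return moles * coeff_to / coeff_from

def moles_to_grams(moles, species):
    return moles * MOLAR_MASS[species]

# Gram-to-gram: 10.0 g of H2 -> grams of H2O produced
n_h2 = grams_to_moles(10.0, "H2")
n_h2o = moles_to_moles(n_h2, coeff_from=2, coeff_to=2)
g_h2o = moles_to_grams(n_h2o, "H2O")
print(round(g_h2o, 1))
```

Every problem type the abstract lists (mole-to-mole, gram-to-gram, limiting reagent, percent yield) reduces to compositions of these three elementary steps.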
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior-image-constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances the data fidelity and the prior-image-constrained total generalized variation of the reconstructed images in one framework. The method exploits structural correlations among images in the energy domain, using a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Unlike existing reconstruction methods, which operate on images with first-order derivatives, the PICTGV method incorporates higher-order derivatives of the images. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV in suppressing noise and artifacts using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior-image-constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
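The paper's PICTGV combines a second-order TGV regularizer with a prior-image term inside an alternating minimization; as a much-simplified sketch of the underlying variational structure, the code below does smoothed first-order total-variation denoising by gradient descent. All parameter values are illustrative assumptions, and none of this is the paper's algorithm:

```python
import numpy as np

# Smoothed total-variation (TV) denoising: minimize
#   0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)
# by gradient descent. A first-order cousin of the TGV idea, for intuition only.

def tv_denoise(noisy, lam=0.15, step=0.05, iters=200, eps=1e-2):
    u = noisy.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # divergence of the normalized gradient field (negative TV gradient)
        div = (np.diff(gx / mag, axis=0, prepend=0)
               + np.diff(gy / mag, axis=1, prepend=0))
        u -= step * ((u - noisy) - lam * div)
    return u

def tv(img, eps=1e-2):
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return np.sqrt(gx**2 + gy**2 + eps).sum()

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0   # piecewise-constant phantom
noisy = clean + 0.2 * rng.normal(size=clean.shape)
den = tv_denoise(noisy)
print(tv(noisy), tv(den))
```

The descent provably reduces the smoothed TV energy for this step size, so the output is smoother than the input while the data term keeps it close to the measurements; TGV replaces the first-order penalty with a second-order one to avoid staircasing.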
Conic section function neural network circuitry for offline signature recognition.
Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay
2010-04-01
In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed-mode circuit implementation. The designed circuit system is problem independent; hence, the general-purpose neural network circuit system can be applied to various pattern recognition problems with different network sizes, up to a maximum network size of 16-16-8. The CSFNN circuitry system has been applied to two different signature recognition problems. CSFNN circuitry was trained with the chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable to that of CSFNN software for nonlinear signature recognition problems.
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
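The classical local limit that the peridynamic solutions are compared against has the familiar Gaussian Green's function. As a sketch (standard textbook material, not the paper's non-local kernel), the code below checks the 1-D heat kernel numerically:

```python
import numpy as np

# Classical (local) free-space Green's function for 1-D diffusion u_t = k u_xx:
#   G(x, t) = exp(-x^2 / (4 k t)) / sqrt(4 pi k t).
# The peridynamic Green's functions reduce to this as the non-local horizon
# shrinks to zero; here we only sanity-check the local kernel itself.

def heat_kernel(x, t, k=1.0):
    return np.exp(-x**2 / (4 * k * t)) / np.sqrt(4 * np.pi * k * t)

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
t = 0.5

# Conservation: a unit point source keeps unit total "mass" for all t > 0.
mass = heat_kernel(x, t).sum() * dx

# PDE check: compare dG/dt with d2G/dx2 (k = 1) by finite differences.
dt = 1e-4
dGdt = (heat_kernel(x, t + dt) - heat_kernel(x, t - dt)) / (2 * dt)
d2Gdx2 = np.gradient(np.gradient(heat_kernel(x, t), dx), dx)
residual = np.max(np.abs(dGdt - d2Gdx2))
print(mass, residual)
```

With a point source replaced by the paper's Gauss source, the solution is just this kernel convolved with the source, which is what the temperature comparison in the abstract rests on.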
Nonlinear stability of Gardner breathers
NASA Astrophysics Data System (ADS)
Alejo, Miguel A.
2018-01-01
We show that breather solutions of the Gardner equation, a natural generalization of the KdV and mKdV equations, are H^2(R) stable. Through a variational approach, we characterize Gardner breathers as minimizers of a new Lyapunov functional and we study the associated spectral problem, through (i) the analysis of the spectrum of explicit linear systems (spectral stability), and (ii) controlling degenerated directions by using low regularity conservation laws.
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for running in the GPU using Cuda. For the most memory consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
General method of solving the Schroedinger equation of atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakatsuji, Hiroshi
2005-12-15
We propose a general method of solving the Schroedinger equation of atoms and molecules. We first construct the wave function having the exact structure, using the ICI (iterative configuration or complement interaction) method, and then optimize the variables involved by the variational principle. Based on the scaled Schroedinger equation and related principles, we can avoid the singularity problem of atoms and molecules and formulate a general method of calculating the exact wave functions in an analytical expansion form. We choose an initial function psi_0 and a scaling function g, and the ICI method then automatically generates the wave function that has the exact structure by using the Hamiltonian of the system. The Hamiltonian contains all the information of the system. The free ICI method provides a flexible and variationally favorable procedure for constructing the exact wave function. We explain the computational procedure of the analytical ICI method routinely performed in our laboratory. Simple examples are given using the hydrogen atom for the nuclear singularity case, Hooke's atom for the electron singularity case, and the helium atom for both cases.
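Schematically, the simplest ICI iteration can be sketched as follows (our paraphrase, not the paper's exact expansion; the free ICI generalizes this by giving each generated complement function its own independent coefficient):

```latex
% Simplest ICI recursion (a sketch): starting from \psi_0 and a scaling
% function g, each step applies the scaled Hamiltonian; the coefficient
% C_n and the energy E_n are fixed variationally at each iteration.
\psi_{n+1} = \bigl[\, 1 + C_n\, g\,(H - E_n) \,\bigr]\, \psi_n ,
\qquad
E_n = \frac{\langle \psi_n \mid H \mid \psi_n \rangle}
           {\langle \psi_n \mid \psi_n \rangle}.
```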
Towards Structural Analysis of Audio Recordings in the Presence of Musical Variations
NASA Astrophysics Data System (ADS)
Müller, Meinard; Kurth, Frank
2006-12-01
One major goal of structural analysis of an audio recording is to automatically extract the repetitive structure or, more generally, the musical form of the underlying piece of music. Recent approaches to this problem work well for music where the repetitions largely agree with respect to instrumentation and tempo, as is typically the case for popular music. For other classes of music such as Western classical music, however, musically similar audio segments may exhibit significant variations in parameters such as dynamics, timbre, execution of note groups, modulation, articulation, and tempo progression. In this paper, we propose a robust and efficient algorithm for audio structure analysis, which allows us to identify musically similar segments even in the presence of large variations in these parameters. To account for such variations, our main idea is to incorporate invariance at various levels simultaneously: we design a new type of statistical features to absorb microvariations, introduce an enhanced local distance measure to account for local variations, and describe a new strategy for structure extraction that can cope with the global variations. Our experimental results with classical and popular music show that our algorithm performs successfully even in the presence of significant musical variations.
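A common backbone for this kind of structure analysis is the self-similarity matrix. The sketch below uses synthetic feature vectors (stand-ins for the paper's statistical audio features) to show how a repeated segment appears as a high-similarity diagonal stripe:

```python
import numpy as np

# Self-similarity matrix (SSM) sketch: a piece with musical form A-B-A,
# represented by per-frame feature vectors. The features here are random
# placeholders, not the paper's microvariation-absorbing features.

rng = np.random.default_rng(0)
a = rng.normal(size=(10, 12))   # segment A: 10 frames, 12-dim features
b = rng.normal(size=(10, 12))   # segment B
feats = np.vstack([a, b, a])    # whole piece: A B A (30 frames)

# Cosine similarity between every pair of frames
norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
ssm = norm @ norm.T

# The repeat of A shows up as a high-similarity diagonal relating
# frames 0-9 to frames 20-29; unrelated segments stay near zero.
repeat_diag = np.mean([ssm[i, i + 20] for i in range(10)])
unrelated = np.mean([ssm[i, i + 10] for i in range(10)])   # A vs B
print(repeat_diag, unrelated)
```

The paper's contribution sits on top of this picture: its features and enhanced distance measure keep the repeat-diagonal visible even when the repetition differs in tempo, timbre, or articulation.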
Density of convex intersections and applications
Rautenberg, C. N.; Rösel, S.
2017-01-01
In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301
Hybrid state vector methods for structural dynamic and aeroelastic boundary value problems
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1982-01-01
A computational technique is developed that is suitable for performing preliminary design aeroelastic and structural dynamic analyses of large aspect ratio lifting surfaces. The method proves to be quite general and can be adapted to solving various two point boundary value problems. The solution method, which is applicable to both fixed and rotating wing configurations, is based upon a formulation of the structural equilibrium equations in terms of a hybrid state vector containing generalized force and displacement variables. A mixed variational formulation is presented that conveniently yields a useful form for these state vector differential equations. Solutions to these equations are obtained by employing an integrating matrix method. The application of an integrating matrix provides a discretization of the differential equations that only requires solutions of standard linear matrix systems. It is demonstrated that matrix partitioning can be used to reduce the order of the required solutions. Results are presented for several example problems in structural dynamics and aeroelasticity to verify the technique and to demonstrate its use. These problems examine various types of loading and boundary conditions and include aeroelastic analyses of lifting surfaces constructed from anisotropic composite materials.
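The integrating-matrix idea can be illustrated with the simplest (trapezoidal) choice of weights; this is a sketch of the general concept, not the paper's specific higher-order discretization:

```python
import numpy as np

# An integrating matrix maps nodal values of a function to nodal values of
# its running integral, turning differential equations into standard linear
# matrix systems. Trapezoidal weights are used here for simplicity; the
# paper's method uses polynomial-based (higher-order) weights.

def trapezoid_integrating_matrix(x):
    n = len(x)
    M = np.zeros((n, n))
    for i in range(1, n):
        h = x[i] - x[i - 1]
        M[i:, i - 1] += h / 2   # segment i contributes to all later nodes
        M[i:, i] += h / 2
    return M

x = np.linspace(0, np.pi / 2, 201)
M = trapezoid_integrating_matrix(x)

# Solve u' = cos(x), u(0) = 0, as u = u(0) + M @ cos(x); exact u = sin(x)
u = M @ np.cos(x)
err = np.max(np.abs(u - np.sin(x)))
print(err)
```

In the paper's setting the same discretization is applied to the hybrid state vector equations, so each solve is a standard linear system, and partitioning the matrix reduces the order of those solves.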
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.
State-constrained booster trajectory solutions via finite elements and shooting
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Seywald, Hans
1993-01-01
This paper presents an extension of a FEM formulation based on variational principles. A general formulation for handling internal boundary conditions and discontinuities in the state equations is presented, and the general formulation is modified for optimal control problems subject to state-variable inequality constraints. Solutions which only touch the state constraint and solutions which have a boundary arc of finite length are considered. Suitable shape and test functions are chosen for a FEM discretization. All element quadrature (equivalent to one-point Gaussian quadrature over each element) may be done in closed form. The final form of the algebraic equations is then derived. A simple state-constrained problem is solved. Then, for a practical application of the use of the FEM formulation, a launch vehicle subject to a dynamic pressure constraint (a first-order state inequality constraint) is solved. The results presented for the launch-vehicle trajectory have some interesting features, including a touch-point solution.
Convergence of the Graph Allen-Cahn Scheme
NASA Astrophysics Data System (ADS)
Luo, Xiyang; Bertozzi, Andrea L.
2017-05-01
The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
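A minimal sketch of the scheme being analyzed, using the semi-implicit convex-splitting update of Bertozzi-Flenner on a toy two-cluster graph; all parameter values here are illustrative assumptions:

```python
import numpy as np

# Graph Allen-Cahn sketch: gradient flow of the graph Ginzburg-Landau energy
#   eps * <u, L u> / 2 + (1/eps) * sum W(u_i),  W(u) = (u^2 - 1)^2 / 4,
# with the semi-implicit convex-splitting step
#   (I + dt*(eps*L + c*I)) u_{n+1} = u_n + dt*(c*u_n - W'(u_n)/eps).

def graph_laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# Toy graph: two 5-node cliques joined by one weak edge
W = np.zeros((10, 10))
W[:5, :5] = 1.0
W[5:, 5:] = 1.0
np.fill_diagonal(W, 0.0)
W[4, 5] = W[5, 4] = 0.1
L = graph_laplacian(W)

eps, dt, c = 1.0, 0.1, 3.0       # illustrative parameters
u = np.zeros(10)
u[0], u[9] = 1.0, -1.0           # one seed per cluster
A = np.eye(10) + dt * (eps * L + c * np.eye(10))
for _ in range(200):
    u = np.linalg.solve(A, u + dt * (c * u - (u**3 - u) / eps))

print(np.sign(u))
```

The iteration settles on a near-binary labeling of +1/-1 separated across the weak edge, i.e. the graph cut; the cited convergence analysis gives conditions (a graph-independent step-size bound) under which such schemes decrease the energy monotonically.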
A general system for automatic biomedical image segmentation using intensity neighborhoods.
Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K
2011-01-01
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibration before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale, as well as subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications.
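The intensity-neighborhood idea can be sketched as follows, with a 1-nearest-neighbor rule standing in for the paper's trained classifiers and synthetic two-class images standing in for real data:

```python
import numpy as np

# Sketch: classify each test pixel from the intensity patch around it,
# using labeled training patches. A 1-NN rule replaces the paper's
# classifiers; the images are synthetic two-"tissue" toys.

def patches(img, k=3):
    p = k // 2
    pad = np.pad(img, p, mode="edge")
    out = np.empty((img.size, k * k))
    idx = 0
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[idx] = pad[i:i + k, j:j + k].ravel()
            idx += 1
    return out

rng = np.random.default_rng(1)
train = np.zeros((16, 16)); train[:, 8:] = 1.0        # vertical tissue boundary
train_noisy = train + 0.1 * rng.normal(size=train.shape)
labels = train.ravel().astype(int)

test = np.zeros((16, 16)); test[8:, :] = 1.0          # same tissues, new layout
test_noisy = test + 0.1 * rng.normal(size=test.shape)

Xtr, Xte = patches(train_noisy), patches(test_noisy)
# 1-nearest-neighbor in patch (intensity-neighborhood) space
d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
pred = labels[d.argmin(axis=1)]
acc = (pred == test.ravel().astype(int)).mean()
print(acc)
```

Because the classifier sees only local intensity neighborhoods, it transfers to a test image with a completely different spatial layout of the same classes, which is the portability the abstract emphasizes.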
On electromagnetic forming processes in finitely strained solids: Theory and examples
NASA Astrophysics Data System (ADS)
Thomas, J. D.; Triantafyllidis, N.
2009-08-01
The process of electromagnetic forming (EMF) is a high velocity manufacturing technique that uses electromagnetic (Lorentz) body forces to shape sheet metal parts. EMF holds several advantages over conventional forming techniques: speed, repeatability, one-sided tooling, and, most importantly, a considerable increase in ductility for several metals. Current modeling techniques for EMF processes are not based on coupled variational principles that simultaneously account for electromagnetic and mechanical effects. Typically, separate solutions to the electromagnetic (Maxwell) and motion (Newton) equations are combined in staggered or lock-step methods, sequentially solving the mechanical and electromagnetic problems. The present work addresses these issues by introducing a fully coupled Lagrangian (reference configuration) least-action variational principle, involving magnetic flux and electric potentials and the displacement field as independent variables. The corresponding Euler-Lagrange equations are Maxwell's and Newton's equations in the reference configuration, which are shown to coincide with their current configuration counterparts obtained independently by a direct approach. The general theory is subsequently simplified for EMF processes by considering the eddy current approximation. Next, an application is presented for axisymmetric EMF problems. It is shown that the proposed variational principle forms the basis of a variational integration numerical scheme that provides an efficient staggered solution algorithm. As an illustration, a number of such processes are simulated, inspired by recent experiments on freely expanding uncoated and polyurea-coated aluminum tubes.
NASA Astrophysics Data System (ADS)
Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.
2014-01-01
In stochastic optimal control (SOC) one minimizes the average cost-to-go, that consists of the cost-of-control (amount of efforts), cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.
B2 and G2 Toda systems on compact surfaces: A variational approach
NASA Astrophysics Data System (ADS)
Battaglia, Luca
2017-01-01
We consider the B2 and G2 Toda systems on a compact surface (Σ, g), namely, systems of two Liouville-type PDEs coupled with a matrix of coefficients A = (a_ij) given by ((2, -1), (-2, 2)) or ((2, -1), (-3, 2)). We attack the problem using variational techniques, following the previous work [Battaglia, L. et al., Adv. Math. 285, 937-979 (2015)] concerning the A2 Toda system, namely, the case A = ((2, -1), (-1, 2)). We get the existence and multiplicity of solutions as long as χ(Σ) ≤ 0 and the parameters are chosen generically. We also extend some of the results to the case of general systems.
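In the normalization common to this literature (a sketch; the paper's exact form and normalization may differ), such a Toda system reads:

```latex
% B_2 / G_2 Toda system on a compact surface (\Sigma, g); the \rho_i > 0
% are parameters and the h_i > 0 are smooth weight functions.
-\Delta_g u_i \;=\; \sum_{j=1}^{2} a_{ij}\,\rho_j
\left( \frac{h_j e^{u_j}}{\int_\Sigma h_j e^{u_j}\, dV_g} - 1 \right),
\qquad i = 1,2,
\qquad
A_{B_2} = \begin{pmatrix} 2 & -1 \\ -2 & 2 \end{pmatrix},
\quad
A_{G_2} = \begin{pmatrix} 2 & -1 \\ -3 & 2 \end{pmatrix}.
```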
Comparison between IRI-2012 and GPS-TEC observations over the western Black Sea
NASA Astrophysics Data System (ADS)
Inyurt, Samed; Yildirim, Omer; Mekik, Cetin
2017-07-01
The ionosphere is a dynamic layer which changes mainly according to radiation emitted by the Sun, the movement of the Earth around the Sun, and sunspot activity. Variations can generally be categorized as regular or irregular, and both types have a large effect on radio wave propagation. In this study, we have focused on seasonal variation, one of the regular forms of ionospheric variation. We examined the seasonal variation over the ZONG station in Turkey for the year 2014. Our analysis and IRI-2012 give different pictures of ionospheric activity. According to our results, the standard deviation reached a maximum value in April 2014, whereas the maximum standard deviation obtained from IRI-2012 was seen in February 2014. Furthermore, IRI-2012 underestimated the VTEC values compared to our results for all the months analysed. The main source of the difference between the two models is the IRI-2012 topside ionospheric representation: IRI-2012 VTEC is produced by integrating an electron density profile between altitudes of 60 and 2000 km. In other words, the main shortcoming of the IRI-2012 VTEC representation is that it omits the plasmaspheric part of the ionosphere above this range. We therefore propose that the plasmaspheric contribution be taken into account to calculate correct TEC values in mid-latitude regions, and we note that IRI-2012 does not supply sufficiently precise TEC values for ionospheric studies.
NASA Technical Reports Server (NTRS)
Bertsimas, Dimitris; Odoni, Amedeo
1997-01-01
This document presents a critical review of the principal existing optimization models that have been applied to Air Traffic Flow Management (TFM). Emphasis is placed on two problems, the Generalized Tactical Flow Management Problem (GTFMP) and the Ground Holding Problem (GHP), as well as on some of their variations. To perform this task, we have carried out an extensive literature review that has covered more than 40 references, most of them very recent. Based on the review of this emerging field, our objectives were to: (i) identify the best available models; (ii) describe typical contexts for applications of the models; (iii) provide illustrative model formulations; and (iv) identify the methodologies that can be used to solve the models. We begin our presentation below by providing a brief context for the models that we are reviewing. In Section 3 we offer a taxonomy and identify four classes of models for review. In Sections 4, 5, and 6 we then review, respectively, models for the Single-Airport Ground Holding Problem, the GTFMP, and the Multi-Airport Ground Holding Problem (for the definitions of these problems see Section 3 below). In each section, we identify the best available models and briefly discuss their computational performance and applications, if any, to date. Section 7 summarizes our conclusions about the state of the art.
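For concreteness, a toy deterministic single-airport GHP instance can be solved by first-scheduled, first-served slot assignment. The models surveyed in the review are far richer (stochastic, integer-programming based), so this sketch only illustrates the basic bookkeeping of trading airborne delay for ground delay:

```python
# Toy single-airport Ground Holding Problem: scheduled arrivals exceed
# weather-reduced capacity, so excess flights are held on the ground.
# Greedy first-scheduled, first-served heuristic; data are invented.

def assign_ground_delays(scheduled, capacity_per_period):
    """scheduled: flight -> requested arrival period; returns flight -> delay."""
    used = {}
    delays = {}
    for flight, t in sorted(scheduled.items(), key=lambda kv: kv[1]):
        slot = t
        while used.get(slot, 0) >= capacity_per_period[slot]:
            slot += 1                  # hold on the ground one more period
        used[slot] = used.get(slot, 0) + 1
        delays[flight] = slot - t
    return delays

# Five flights want periods 0-2; weather cuts capacity to 1 arrival per period
sched = {"F1": 0, "F2": 0, "F3": 1, "F4": 1, "F5": 2}
cap = {t: 1 for t in range(10)}
print(assign_ground_delays(sched, cap))
```

The optimization models reviewed in the document replace this greedy rule with cost-minimizing assignments (ground delay is cheaper and safer than airborne holding) and, in the stochastic versions, with capacity scenarios and recourse.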
Geophysical Detection of Groundwater.
1982-04-01
of a coal mine fire which was at a depth of 30 ft (10 m) (Corwin and Hoover, 1979). 2. Telluric currents. Temporal variations in the Earth's magnetic field generating long-period telluric currents may reach several hundred mV/mi over resistive terrain (Keller and Frischknecht, 1966). 3. Monitoring tellurics. Generally, tellurics are only a problem if the readings of SP begin to fluctuate. If high-amplitude long-period tellurics exist for
Technical description of space ultra reliable modular computer (SUMC), model 2 B
NASA Technical Reports Server (NTRS)
1975-01-01
The design features of the SUMC-2B computer, also called the IBM-HTC, are described. It is a general-purpose digital computer implemented with flexible hardware elements and microprogramming to enable low-cost customizing for a wide range of applications. It executes the S/360 standard instruction set to maintain problem-state compatibility. Memory technology, extended instruction sets, and I/O channel variations are among the available options.
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
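The clustering idea underlying this approach can be illustrated, in a much-simplified form, by fitting a mixture of Gaussians with plain EM on scalar shape features; the actual method is a full Variational Bayesian treatment of point sets without correspondences, which this sketch does not attempt. All data and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "shape classes" summarized as 1-D feature values.
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 0.8, 200)])

# EM for a 2-component Gaussian mixture (means, variances, weights).
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibilities r[n, k] = p(class k | x_n).
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    w = nk / len(x)

print(sorted(mu))  # recovered class means, near -2 and 3
```

Unlike this EM sketch, the VB formulation also places priors over the mixture parameters, which is what allows the marginalized likelihood to select the number of clusters.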
What is integrability of discrete variational systems?
Boll, Raphael; Petrera, Matteo; Suris, Yuri B
2014-02-08
We propose a notion of a pluri-Lagrangian problem, which should be understood as an analogue of multi-dimensional consistency for variational systems. This is a development along the line of research of discrete integrable Lagrangian systems initiated in 2009 by Lobb and Nijhoff, however, having its more remote roots in the theory of pluriharmonic functions, in the Z-invariant models of statistical mechanics and their quasiclassical limit, as well as in the theory of variational symmetries going back to Noether. A d-dimensional pluri-Lagrangian problem can be described as follows: given a d-form on an m-dimensional space (called multi-time, m > d), whose coefficients depend on a sought-after function x of m independent variables (called field), find those fields x which deliver critical points to the action functionals for any d-dimensional manifold Σ in the multi-time. We derive the main building blocks of the multi-time Euler-Lagrange equations for a discrete pluri-Lagrangian problem with d = 2, the so-called corner equations, and discuss the notion of consistency of the system of corner equations. We analyse the system of corner equations for a special class of three-point two-forms, corresponding to integrable quad-equations of the ABS list. This allows us to close a conceptual gap of the work by Lobb and Nijhoff by showing that the corresponding two-forms are closed not only on solutions of (non-variational) quad-equations, but also on general solutions of the corresponding corner equations. We also find an example of a pluri-Lagrangian system not coming from a multi-dimensionally consistent system of quad-equations.
NASA Astrophysics Data System (ADS)
Wang, Hanxiong; Liu, Liping; Liu, Dong
2017-03-01
The equilibrium shape of a bubble/droplet in an electric field is important for electrowetting over dielectrics (EWOD), electrohydrodynamic (EHD) enhancement of heat transfer, and electro-deformation of a single biological cell, among others. In this work, we develop a general variational formulation accounting for electro-mechanical couplings. In the context of EHD, we identify the free energy functional and the associated energy minimization problem that determines the equilibrium shape of a bubble in an electric field. Based on this variational formulation, we implement a fixed-mesh level-set gradient method for computing the equilibrium shapes. This numerical scheme is efficient and validated by comparison with analytical solutions in the absence of an electric field and experimental results in the presence of an electric field. We also present simulation results for zero gravity, which will be useful for space applications. The variational formulation and numerical scheme are anticipated to have broad applications in areas of EWOD, EHD, and electro-deformation in biomechanics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr; Li, Juan, E-mail: juanli@sdu.edu.cn; Ma, Jin, E-mail: jinma@usc.edu
In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable, without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966-979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197-216, 2011) to this general case.
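For orientation, the controlled mean-field dynamics studied in such problems can be written schematically as follows; the notation here is generic, not necessarily the paper's exact one:

```latex
dX_t = b\left(t, X_t, P_{X_t}, u_t\right)\,dt
     + \sigma\left(t, X_t, P_{X_t}, u_t\right)\,dB_t,
\qquad X_0 = x_0,
```

where $P_{X_t}$ denotes the law of $X_t$, $u_t$ the control, and $B_t$ a Brownian motion. Because the control set need not be convex, optimality is probed with spike variations, which is what makes the second-order variational equation and the second-order adjoint process necessary.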
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. The direct use of RGB-CNN for the IR ATR problem fails to work because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted ramp function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
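The shifted-ramp intensity transformation can be sketched as a simple pixel-wise mapping; the threshold values below are illustrative, not the parameters used in the paper.

```python
import numpy as np

def shifted_ramp(img, low, high):
    """Map intensities below `low` to 0, above `high` to 1, linear in between.

    This suppresses cool background clutter while stretching the contrast
    of warmer target pixels (illustrative parameters, not the paper's).
    """
    return np.clip((img.astype(float) - low) / (high - low), 0.0, 1.0)

# Tiny synthetic IR patch: cool background values and warm target values.
ir = np.array([[10, 20, 200], [15, 180, 250]], dtype=float)
out = shifted_ramp(ir, low=50, high=220)
print(out)  # background pixels go to 0, the hottest pixel saturates at 1
```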
Coupled variational formulations of linear elasticity and the DPG methodology
NASA Astrophysics Data System (ADS)
Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick
2017-11-01
This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
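The simultaneous-update idea can be shown on a toy problem: rather than fully converging an inner "flow" iteration before each design update, the state and the design parameter are advanced together. The model equation, objective, and step sizes below are all invented for illustration; they are not the aerodynamic schemes of the paper.

```python
import math

# Toy "flow" equation: x = tanh(p); design objective: J(x) = (x - 0.5)^2.
x, p = 0.0, 0.0
alpha, beta = 0.5, 0.5  # relaxation / design step sizes (illustrative)

for _ in range(200):
    x += alpha * (math.tanh(p) - x)               # one inner flow-solver sweep
    dJdp = 2.0 * (x - 0.5) * (1.0 - math.tanh(p) ** 2)
    p -= beta * dJdp                              # design update using current x

print(round(x, 3), round(p, 3))  # x -> 0.5, p -> atanh(0.5) ~ 0.549
```

The point of the interleaving is that the design parameter converges in roughly the same number of sweeps a single flow solution would take, instead of requiring a converged flow solution per design iteration.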
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis on the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor affecting system efficiency, compared with the credibility level. These results can support reasonable irrigation water resources management and agricultural production.
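The ratio-objective (linear fractional) core of such a model can be illustrated on a tiny deterministic example; the fuzzy credibility constraints are omitted here, and all coefficients are made up rather than taken from the case study.

```python
import numpy as np

# Toy linear fractional program: maximize (2x + 3y) / (x + y + 1)
# subject to 0 <= x, y <= 1 (schematically, benefit per unit of water used).
xs = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
ratio = (2 * X + 3 * Y) / (X + Y + 1)

# Brute-force grid search over the feasible box, as a stand-in for a solver.
i, j = np.unravel_index(np.argmax(ratio), ratio.shape)
print(X[i, j], Y[i, j], round(ratio[i, j], 3))  # optimum at (1, 1), ratio 5/3
```

In practice an LFP of this kind would be converted to an ordinary linear program by the Charnes-Cooper transformation rather than solved by grid search; the grid is used here only to make the ratio objective concrete.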
General methodology for simultaneous representation and discrimination of multiple object classes
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
We address a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and for classification and pose estimation of two similar objects under 3D aspect angle variations.
NASA Astrophysics Data System (ADS)
Serpieri, Roberto; Travascio, Francesco
2016-03-01
In poroelasticity, the effective stress law relates the external stress applied to the medium to the macroscopic strain of the solid phase and the interstitial pressure of the fluid saturating the mixture. This relationship was first introduced by Terzaghi in the form of a principle. To date, no poroelastic theory is capable of recovering a stress partitioning law in agreement with Terzaghi's postulated one in the absence of ad hoc constitutive assumptions on the medium. We recently proposed a variational macroscopic continuum description of two-phase poroelasticity to derive a general biphasic formulation at finite deformations, termed variational macroscopic theory of porous media (VMTPM). This approach proceeds from the inclusion of the intrinsic volumetric strain among the kinematic descriptors alongside macroscopic displacements, and as a variational theory, uses the Hamilton least-action principle as the unique primitive concept of mechanics invoked to derive momentum balance equations. In a previous related work it was shown that, for the subclass of undrained problems, VMTPM predicts that stress is partitioned in the two phases in strict compliance with Terzaghi's law, irrespective of the microstructural and constitutive features of a given medium. In the present contribution, we further develop the linearized framework of VMTPM to arrive at a general operative formula that allows the quantitative determination of stress partitioning in a jacketed test over a generic isotropic biphasic specimen. This formula is quantitative and general, in that it relates the partial phase stresses to the externally applied stress as a function of partitioning coefficients that are all derived by strictly following a purely variational and purely macroscopic approach, and in the absence of any specific hypothesis on the microstructural or constitutive features of a given medium.
To achieve this result, the stiffness coefficients of the theory are derived by using exclusively variational arguments. We derive the boundary conditions attained across the boundary of a poroelastic saturated medium in contact with an impermeable surface also based on purely variational arguments. A technique to retrieve bounds for the resulting elastic moduli, based on Hashin's composite spheres assemblage method, is also reported. Notably, in spite of the minimal mechanical hypotheses introduced, a rich mechanical behavior is observed.
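For reference, Terzaghi's effective stress law, which the formulation recovers for undrained problems, can be written as follows (sign conventions vary across the literature):

```latex
\sigma_{ij} = \sigma'_{ij} - p\,\delta_{ij},
```

where $\sigma$ is the total stress, $\sigma'$ the effective stress carried by the solid skeleton, and $p$ the pore fluid pressure. The Biot generalization replaces $p$ by $\alpha p$ with a coefficient $0 < \alpha \le 1$; Terzaghi's postulated law corresponds to $\alpha = 1$.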
Locatelli, Fernando F; Fernandez, Patricia C; Smith, Brian H
2016-09-01
Natural odors are typically mixtures of several chemical components. Mixtures vary in composition among odor objects that have the same meaning. Therefore a central 'categorization' problem for an animal as it makes decisions about odors in natural contexts is to correctly identify odor variants that have the same meaning and avoid variants that have a different meaning. We propose that identified mechanisms of associative and non-associative plasticity in early sensory processing in the insect antennal lobe and mammalian olfactory bulb are central to solving this problem. Accordingly, this plasticity should work to improve categorization of odors that have opposite meanings in relation to important events. Using synthetic mixtures designed to mimic natural odor variation among flowers, we studied how honey bees learn about and generalize among floral odors associated with food. We behaviorally conditioned honey bees on a difficult odor discrimination problem using synthetic mixtures that mimic natural variation among snapdragon flowers. We then used calcium imaging to measure responses of projection neurons of the antennal lobe, which is the first synaptic relay of olfactory sensory information in the brain, to study how ensembles of projection neurons change as a result of behavioral conditioning. We show how these ensembles become 'tuned' through plasticity to improve categorization of odors that have different meanings. We argue that this tuning allows more efficient use of the immense coding space of the antennal lobe and olfactory bulb to solve the categorization problem. Our data point to the need for a better understanding of the 'statistics' of the odor space. © 2016. Published by The Company of Biologists Ltd.
The Myth of Community Differences as the Cause of Variations Among IRBs
Klitzman, Robert
2013-01-01
Background Although variations among institutional review boards (IRBs) have been documented for 30 years, they continue, raising crucial questions as to why they persist as well as how IRBs view and respond to these variations. Methods In-depth, 2-hour interviews were conducted with 46 IRB chairs, administrators, and members. The leadership of 60 U.S. IRBs were contacted (every fourth one in the list of the top 240 institutions by NIH funding). IRB leaders from 34 of these institutions were interviewed (response rate = 55%). Results The interviewees suggest that differences often persist because IRBs think these are legitimate, and regulations permit variations due to differing “community values.” Yet, these variations frequently appear to stem more from differences in institutional and subjective personality factors, and from “more eyes” examining protocols, trying to foresee all potential future logistical problems, than from the values of the communities from which research participants are drawn. However, IRBs generally appear to defend these variations as reflecting underlying differences in community norms. Conclusions These data pose critical questions for policy and practice. Attitudinal changes and education among IRBs, principal investigators (PIs), policymakers, and others and research concerning these issues are needed. PMID:25285236
Decision tree rating scales for workload estimation: Theme and variations
NASA Technical Reports Server (NTRS)
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The Modified Cooper-Harper (MCH) scale, which is a sensitive indicator of workload in several different types of aircrew tasks, was examined. The study determined whether variations of the scale might provide greater sensitivity and investigated the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were examined in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. It is indicated that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The MCH scale results are consistent with earlier experiments. The rating scale experiments are reported and the questionnaire results which were directed to obtain a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.
Decision Tree Rating Scales for Workload Estimation: Theme and Variations
NASA Technical Reports Server (NTRS)
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The modified Cooper-Harper (MCH) scale has been shown to be a sensitive indicator of workload in several different types of aircrew tasks. The MCH scale was examined to determine if certain variations of the scale might provide even greater sensitivity and to determine the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were studied in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. Results indicate that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The results of the rating scale experiments are presented and the questionnaire results which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.
Water as an urban resource and nuisance
Thomas, H.E.; Schneider, William Joseph
1970-01-01
The water resource, which is widely and irregularly distributed on earth, is available to man for such enjoyment and development and use as he sees fit, some use being essential to his existence. Natural variations in the quantity and quality of water are inevitable and, if they cause annoyance or injury to someone, are accepted as one of the hardships that this planet imposes upon its inhabitants; such variations are recognized as "acts of God." However, if any man or society is partly responsible for these variations, which may cause such annoyance or injury, and may become a nuisance (an invasion or disturbance of the rights of others) such a man or society may perhaps be subject to injunctions and damage suits. Legal disputes over water as a nuisance are generally deeply involved with problems of the respective rights of plaintiff and defendant. These respective rights vary among the States.
A substructure coupling procedure applicable to general linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Howsman, T. G.; Craig, R. R., Jr.
1984-01-01
A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the nonself-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee; McCaughan, Frances E.
1998-01-01
Stationary onset of convection due to surface tension variation in an unbounded multicomponent fluid layer is considered. Surface deformation is included and general flux boundary conditions are imposed on the stratifying agencies' (temperature/composition) disturbance equations. Exact solutions are obtained to the general N-component problem for both finite and infinitesimal wavenumbers. Long wavelength instability may coexist with a finite wavelength instability for certain sets of parameter values, often referred to as frontier points. For an impermeable/insulated upper boundary and a permeable/conductive lower boundary, frontier boundaries are computed in the space of Bond number, Bo, versus Crispation number, Cr, over the range 5 × 10^-7 ≤ Bo ≤ 1. The loci of frontier points in (Bo, Cr) space for different values of N, diffusivity ratios, and Marangoni numbers collapsed to a single curve in (Bo, D̄Cr) space, where D̄ is a Marangoni-number-weighted diffusivity ratio.
Krishnakumar, Ambika; Narine, Lutchmie; Soonthorndhada, Amara; Thianlai, Kanchana
2015-03-01
To examine gender variations in the linkages among family stressors, home demands and responsibilities, coping resources, social connectedness, and older adult health problems. Data were collected from 3,800 elderly participants (1,654 men and 2,146 women) residing in Kanchanaburi province, Thailand. Findings indicated gender variations in the levels of these constructs and in the mediational pathways. Thai women indicated greater health problems than men. Emotional empathy was the central variable that linked financial strain, home demands and responsibilities, and older adult health problems through social connectedness. Financial strain (and negative life events for women) was associated with lowered coping self-efficacy and increased health problems. The model indicated greater strength in predicting female health problems. Findings support gender variations in the relationships between ecological factors and older adult health problems. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.
2018-01-01
We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyze more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new single spacecraft method, called LNA (local normal analysis) for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). This last method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximate one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from the variations of other fields so that, at some places, the magnetic field can have a 1-D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
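The MDD idea can be sketched numerically: estimate the gradient tensor ∇B from four-point measurements, then eigen-decompose L = (∇B)(∇B)^T; a single dominant eigenvalue indicates a locally one-dimensional structure, and the corresponding eigenvector gives the local normal. The synthetic field, positions, and amplitudes below are invented for illustration and are not MMS data.

```python
import numpy as np

n = np.array([0.6, 0.8, 0.0])   # true normal of a 1-D structure (made up)
a = np.array([0.0, 5.0, 3.0])   # field gradient amplitude vector (made up)

def B(x):
    # Magnetic field varying only along n: a locally one-dimensional structure.
    return np.dot(x, n) * a

# Four "spacecraft" positions (tetrahedron-like, arbitrary units).
pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dx = pos[1:] - pos[0]                            # 3 separation vectors
dB = np.array([B(p) - B(pos[0]) for p in pos[1:]])

G = np.linalg.solve(dx, dB)                      # gradient tensor G[i, j] = dB_j/dx_i
L = G @ G.T
vals, vecs = np.linalg.eigh(L)                   # eigenvalues in ascending order
normal = vecs[:, -1]                             # eigenvector of the largest eigenvalue

print(np.round(vals, 6))                         # one dominant eigenvalue -> locally 1-D
print(np.round(np.abs(normal), 3))               # recovers ±(0.6, 0.8, 0.0)
```

With two or three comparable eigenvalues the structure would instead be classified as locally 2-D or 3-D, which is how the method diagnoses departures from planarity.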
Minimizing the Diameter of a Network Using Shortcut Edges
NASA Astrophysics Data System (ADS)
Demaine, Erik D.; Zadimoghaddam, Morteza
We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ε)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 - ε)-approximation for the single-source version must use Ω(k log n) shortcut edges, assuming P ≠ NP.
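A naive exhaustive-greedy heuristic (not the constant-factor approximation algorithm analyzed in the paper) makes the problem setup concrete: repeatedly add the single non-edge that most reduces the diameter.

```python
from collections import deque
from itertools import combinations

def diameter(adj):
    """Largest shortest-path distance over all pairs (BFS from every vertex)."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def add_shortcuts(adj, k):
    """Greedy: add, k times, the non-edge whose insertion minimizes the diameter."""
    adj = {u: set(vs) for u, vs in adj.items()}
    for _ in range(k):
        u, v = min(
            (p for p in combinations(adj, 2) if p[1] not in adj[p[0]]),
            key=lambda p: diameter({w: ws | ({p[1]} if w == p[0] else set())
                                         | ({p[0]} if w == p[1] else set())
                                    for w, ws in adj.items()}),
        )
        adj[u].add(v)
        adj[v].add(u)
    return adj

# Path on 8 vertices: diameter 7; one well-placed shortcut brings it to 4.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 7} for i in range(8)}
print(diameter(path), diameter(add_shortcuts(path, 1)))  # 7 -> 4
```

This brute-force greedy costs O(n^2) diameter evaluations per added edge, which is exactly the kind of expense the approximation algorithms in the paper are designed to avoid at scale.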
Solving Quantum Ground-State Problems with Nuclear Magnetic Resonance
Li, Zhaokai; Yung, Man-Hong; Chen, Hongwei; Lu, Dawei; Whitfield, James D.; Peng, Xinhua; Aspuru-Guzik, Alán; Du, Jiangfeng
2011-01-01
Quantum ground-state problems are computationally hard problems for general many-body Hamiltonians; there is no classical or quantum algorithm known to be able to solve them efficiently. Nevertheless, if a trial wavefunction approximating the ground state is available, as often happens for many problems in physics and chemistry, a quantum computer could employ this trial wavefunction to project out the ground state by means of the phase estimation algorithm (PEA). We performed an experimental realization of this idea by implementing a variational-wavefunction approach to solve the ground-state problem of the Heisenberg spin model with an NMR quantum simulator. Our iterative phase estimation procedure yields a high accuracy for the eigenenergies (to within 10^-5). The ground-state fidelity was distilled to be more than 80%, and the singlet-to-triplet switching near the critical field is reliably captured. This result shows that quantum simulators can better leverage classical trial wavefunctions than classical computers. PMID:22355607
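The phase-estimation idea can be mimicked classically: sample the autocorrelation ⟨φ|e^{-iHt}|φ⟩ of a trial state and read off the dominant frequency, which is the eigenenergy on which the trial state has the largest overlap. The 2x2 toy Hamiltonian below is illustrative only; it is not the Heisenberg model of the experiment.

```python
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]])   # toy Hamiltonian (illustrative)
E, V = np.linalg.eigh(H)                  # exact spectrum, for reference only

# Trial state with ~81% overlap on the ground state (plays the role of
# the classical variational wavefunction fed into the PEA).
phi = 0.9 * V[:, 0] + np.sqrt(1 - 0.81) * V[:, 1]

dt, N = 0.1, 4096
t = dt * np.arange(N)
# s(t) = <phi| e^{-iHt} |phi> = sum_j |c_j|^2 e^{-i E_j t}
c = V.conj().T @ phi
s = (np.abs(c) ** 2 * np.exp(-1j * np.outer(t, E))).sum(axis=1)

freqs = np.fft.fftfreq(N, dt)             # cycles per unit time
E_est = -2 * np.pi * freqs[np.argmax(np.abs(np.fft.fft(s)))]
print(round(E_est, 2), round(E[0], 2))    # dominant frequency ~ ground energy
```

On a quantum device the autocorrelation samples would come from interference measurements on an ancilla qubit rather than from matrix arithmetic; the frequency readout plays the role of the iterative phase estimation here.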
JPL Test Effectiveness Analysis
NASA Technical Reports Server (NTRS)
Shreck, Stephanie; Sharratt, Stephen; Smith, Joseph F.; Strong, Edward
2008-01-01
1) The pilot study provided meaningful conclusions that are generally consistent with the earlier Test Effectiveness work done between 1992 and 1994: a) Analysis of pre-launch problem/failure reports is consistent with earlier work. b) Analysis of post-launch early mission anomaly reports indicates that there are more software issues in newer missions, and the no-test category for identification of post-launch failures is more significant than in the earlier analysis. 2) Future work includes understanding how differences in Missions affect these analyses: a) There are large variations in the number of problem reports and issues that are documented by the different Projects/Missions. b) Some missions do not have any reported environmental test anomalies, even though environmental tests were performed. 3) Each project/mission has different standards and conventions for filling out the PFR forms; the industry may wish to address this issue: a) Existing problem reporting forms are to document and track problems, failures, and issues (etc.) for the projects, to ensure high quality. b) Existing problem reporting forms are not intended for data mining.
On a variational approach to some parameter estimation problems
NASA Technical Reports Server (NTRS)
Banks, H. T.
1985-01-01
Examples in which a variational setting can provide a convenient framework for convergence and stability arguments in parameter estimation problems are considered; these include 1-D seismic, large flexible structures, bioturbation, and nonlinear population dispersal. One aspect of the problem considered is convergence and stability, via a variational approach, of least-squares formulations of parameter estimation problems for partial differential equations.
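The least-squares formulation can be illustrated with a toy parameter estimation problem: fit the decay-rate parameter of a simple ODE model to observations by minimizing an output least-squares functional. The model, the data, and the grid minimization are invented for illustration; the paper's setting involves PDEs and a genuine variational convergence argument.

```python
import numpy as np

# "Truth": u(t) = exp(-q t) with q = 1.3; noisy observations of the state.
rng = np.random.default_rng(1)
t_obs = np.linspace(0.0, 2.0, 21)
data = np.exp(-1.3 * t_obs) + 0.01 * rng.standard_normal(t_obs.size)

def J(q):
    """Output least-squares functional J(q) = sum_i (u(t_i; q) - data_i)^2."""
    return np.sum((np.exp(-q * t_obs) - data) ** 2)

# Minimize over a parameter grid (a crude stand-in for a descent method).
qs = np.linspace(0.5, 2.5, 2001)
q_hat = qs[np.argmin([J(q) for q in qs])]
print(round(q_hat, 2))  # close to the true rate 1.3
```

The convergence questions studied in the paper concern exactly this kind of estimator: whether minimizers of approximated functionals J over finite-dimensional parameter sets converge to the true parameter as the discretization is refined.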
Homogenization models for 2-D grid structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Cioranescu, D.; Rebnord, D. A.
1992-01-01
In the past several years, we have pursued efforts related to the development of accurate models for the dynamics of flexible structures made of composite materials. Rather than viewing periodicity and sparseness as obstacles to be overcome, we exploit them to our advantage. We consider a variational problem on a domain that has large, periodically distributed holes. Using homogenization techniques we show that the solution to this problem is in some topology 'close' to the solution of a similar problem that holds on a much simpler domain. We study the behavior of the solution of the variational problem as the holes increase in number, but decrease in size in such a way that the total amount of material remains constant. The result is an equation that is in general more complex, but with a domain that is simply connected rather than perforated. We study the limit of the solution as the amount of material goes to zero. This second limit will, in most cases, retrieve much of the simplicity that was lost in the first limit without sacrificing the simplicity of the domain. Finally, we show that these results can be applied to the case of a vibrating Love-Kirchhoff plate with Kelvin-Voigt damping. We rely heavily on earlier results of (Du), (CS) for the static, undamped Love-Kirchhoff equation. Our efforts here result in a modification of those results to include both time dependence and Kelvin-Voigt damping.
Isentropic fluid dynamics in a curved pipe
NASA Astrophysics Data System (ADS)
Colombo, Rinaldo M.; Holden, Helge
2016-10-01
In this paper we study isentropic flow in a curved pipe. We focus on the consequences of the geometry of the pipe on the dynamics of the flow. More precisely, we present the solution of the general Cauchy problem for isentropic fluid flow in an arbitrarily curved, piecewise smooth pipe. We consider initial data in the subsonic regime, with small total variation about a stationary solution. The proof relies on the front-tracking method and is based on [1].
Instability Paths in the Kirchhoff-Plateau Problem
NASA Astrophysics Data System (ADS)
Giusteri, Giulio G.; Franceschini, Paolo; Fried, Eliot
2016-08-01
The Kirchhoff-Plateau problem concerns the equilibrium shapes of a system in which a flexible filament in the form of a closed loop is spanned by a soap film, with the filament being modeled as a Kirchhoff rod and the action of the spanning surface being solely due to surface tension. Adopting a variational approach, we define an energy associated with shape deformations of the system and then derive general equilibrium and (linear) stability conditions by considering the first and second variations of the energy functional. We analyze in detail the transition to instability of flat circular configurations, which are ground states for the system in the absence of surface tension, when the latter is progressively increased. Such a theoretical study is particularly useful here, since the many different perturbations that can lead to instability make it challenging to perform an exhaustive experimental investigation. We generalize previous results, since we allow the filament to possess a curved intrinsic shape and also to display anisotropic flexural properties (as happens when the cross section of the filament is noncircular). This is accomplished by using a rod energy which is familiar from the modeling of DNA filaments. We find that the presence of intrinsic curvature is necessary to obtain a first buckling mode which is not purely tangent to the spanning surface. We also elucidate the role of twisting buckling modes, which become relevant in the presence of flexural anisotropy.
Probabilistic models of genetic variation in structured populations applied to global human studies.
Hao, Wei; Song, Minsun; Storey, John D
2016-03-01
Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important problem is how to formulate and estimate probabilistic models of observed genotypes that account for complex population structure. The most prominent work on this problem has focused on estimating a model of admixture proportions of ancestral populations for each individual. Here, we instead focus on modeling variation of the genotypes without requiring a higher-level admixture interpretation. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly admixture model as a special case. Noting some drawbacks of this approach, we introduce a new 'logistic factor analysis' framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the Human Genome Diversity Panel and 1000 Genomes Project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions. A Bioconductor R package called lfa is available at http://www.bioconductor.org/packages/release/bioc/html/lfa.html jstorey@princeton.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
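The PCA-based estimation step can be illustrated with a minimal sketch. This is not the lfa package itself, and the simulation parameters are hypothetical: genotypes for two differentiated populations are drawn as Binomial(2, p) allele counts, each SNP is centered, and the leading singular vector over individuals recovers the population split.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps, n_per_pop = 500, 50

# Two ancestral populations with differentiated allele frequencies.
p_a = rng.uniform(0.1, 0.9, n_snps)
p_b = np.clip(p_a + rng.normal(0, 0.2, n_snps), 0.05, 0.95)

# Genotypes: reference-allele counts, Binomial(2, p) per individual.
geno_a = rng.binomial(2, p_a[:, None], (n_snps, n_per_pop))
geno_b = rng.binomial(2, p_b[:, None], (n_snps, n_per_pop))
G = np.hstack([geno_a, geno_b]).astype(float)  # SNPs x individuals

# Center each SNP, then take the leading principal component over individuals.
Gc = G - G.mean(axis=1, keepdims=True)
_, _, vt = np.linalg.svd(Gc, full_matrices=False)
pc1 = vt[0]

# The first PC separates the two populations (opposite-sign group means).
mean_a, mean_b = pc1[:n_per_pop].mean(), pc1[n_per_pop:].mean()
print(mean_a, mean_b)
```

Because each SNP row is centered, the individual scores on each component sum to zero, so the two group means on PC1 necessarily have opposite signs when the component captures the population split.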
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
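The basic ingredient of such local mode analysis can be sketched as follows. This is an illustrative computation, not the paper's k-grid method: it evaluates the Fourier symbol of weighted Jacobi applied to the 1-D Laplacian and takes the worst-case damping over the high-frequency modes, recovering the classical smoothing factor 1/3 at the optimal weight omega = 2/3.

```python
import numpy as np

# Fourier symbol of weighted Jacobi applied to the 1-D Laplacian stencil
# [-1, 2, -1]/h^2: the error in the mode exp(i*theta*x/h) is multiplied by
#   S(theta) = 1 - omega * (1 - cos(theta)).
def jacobi_symbol(theta, omega):
    return 1.0 - omega * (1.0 - np.cos(theta))

# Smoothing factor: worst-case damping over the high frequencies
# theta in [pi/2, pi], i.e. the modes the coarse grid cannot represent.
def smoothing_factor(omega, n=100001):
    theta = np.linspace(np.pi / 2, np.pi, n)
    return np.abs(jacobi_symbol(theta, omega)).max()

# The classical optimal weight omega = 2/3 gives a smoothing factor of 1/3.
print(smoothing_factor(2.0 / 3.0))
```

Two-grid convergence estimates combine this smoothing factor with symbols of the restriction, prolongation, and coarse-grid operators in the same Fourier framework.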
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-08-01
Documents relevant to the development and implementation of the California energy insulation standards for new residential buildings were evaluated and a survey was conducted to determine problems encountered in the implementation, enforcement, and design aspects of the standards. The impact of the standards on enforcement agencies, designers, builders and developers, manufacturers and suppliers, consumers, and the building process in general is summarized. The impact on construction costs and energy savings varies considerably because of the wide variation in prior insulation practices and climatic conditions in California. The report concludes with a series of recommendations covering all levels of government and the building process. (MCW)
Longitudinal flying qualities criteria for single-pilot instrument flight operations
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Bar-Gill, A.
1983-01-01
Modern estimation and control theory, flight testing, and statistical analysis were used to deduce flying qualities criteria for General Aviation Single Pilot Instrument Flight Rule (SPIFR) operations. The principal concern is that unsatisfactory aircraft dynamic response combined with high navigation/communication workload can produce problems of safety and efficiency. To alleviate these problems, the relative importance of these factors must be determined. This objective was achieved by flying SPIFR tasks with different aircraft dynamic configurations and assessing the effects of such variations under these conditions. The experimental results yielded quantitative indicators of pilot performance and workload, and for each of them, multivariate regression was applied to evaluate several candidate flying qualities criteria.
NASA Astrophysics Data System (ADS)
Breus, T. K.; Binhi, V. N.; Petrukovich, A. A.
2016-05-01
The body of current heliobiological evidence suggests that very weak variable magnetic fields due to solar- and geomagnetic-activities do have a biological effect. Geomagnetic disturbances can cause a nonspecific reaction in the human body - a kind of general adaptation syndrome which occurs due to any external stress factor. Also, specific reactions can develop. One of the reasons discussed for the similarity between biological and heliogeophysical rhythms is that geomagnetic variations have a direct influence on organisms, although exact magnetoreception mechanisms are not yet clear. The paper briefly reviews the current state of empirical and theoretical work on this fundamental multidisciplinary problem.
Child behaviour problems and childhood illness: development of the Eczema Behaviour Checklist.
Mitchell, A E; Morawska, A; Fraser, J A; Sillar, K
2017-01-01
Children with atopic dermatitis are at increased risk of both general behaviour problems, and those specific to the condition and its treatment. This can hamper the ability of parents to carry out treatment and manage the condition effectively. To date, there is no published instrument available to assess child behaviour difficulties in the context of atopic dermatitis management. Our aim was to develop a reliable and valid instrument to assess atopic dermatitis-specific child behaviour problems, and parents' self-efficacy (confidence) for managing these behaviours. The Eczema Behaviour Checklist (EBC) was developed as a 25-item questionnaire to measure (i) extent of behaviour problems (EBC Extent scale), and (ii) parents' self-efficacy for managing behaviour problems (EBC Confidence scale), in the context of child atopic dermatitis management. A community-based sample of 292 parents completed the EBC, measures of general behaviour difficulties, self-efficacy with atopic dermatitis management and use of dysfunctional parenting strategies. There was satisfactory internal consistency and construct validity for EBC Extent and Confidence scales. There was a negative correlation between atopic dermatitis-specific behaviour problems and parents' self-efficacy for dealing with behaviours (r = -.53, p < .001). Factor analyses revealed a three-factor structure for both scales: (i) treatment-related behaviours; (ii) symptom-related behaviours; and (iii) behaviours related to impact of the illness. Variation in parents' self-efficacy for managing their child's atopic dermatitis was explained by intensity of illness-specific child behaviour problems and parents' self-efficacy for dealing with the behaviours. The new measure of atopic dermatitis-specific child behaviour problems was a stronger predictor of parents' self-efficacy for managing their child's condition than was the measure of general child behaviour difficulties. 
Results provide preliminary evidence of reliability and validity of the EBC, which has potential for use in clinical and research settings, and warrant further psychometric evaluation. © 2016 John Wiley & Sons Ltd.
An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional
NASA Astrophysics Data System (ADS)
van Gennip, Yves
2018-06-01
We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ -converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
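A single step of a mass-conserving MBO scheme of this kind can be sketched as follows. This is a minimal illustration under assumed choices, not the paper's algorithm: implicit-Euler diffusion with the combinatorial graph Laplacian, followed by a threshold that keeps the k nodes with the largest diffused values, so the mass of the phase is preserved exactly.

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def mbo_step(u, lap, dt, mass):
    """One mass-conserving MBO step: diffuse, then threshold so that
    exactly `mass` nodes are labelled 1 (keeps sum(u) fixed)."""
    # Implicit Euler step of u_t = -L u: solve (I + dt*L) v = u.
    v = np.linalg.solve(np.eye(len(u)) + dt * lap, u)
    out = np.zeros_like(u)
    out[np.argsort(v)[-mass:]] = 1.0
    return out

# Path graph on 6 nodes; the initial phase occupies two non-adjacent nodes.
adj = np.zeros((6, 6))
for i in range(5):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
lap = graph_laplacian(adj)

u = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(5):
    u = mbo_step(u, lap, dt=0.5, mass=2)

print(u)  # a binary labelling with exactly two nodes in the phase
```

The diffuse-then-threshold structure is what makes MBO schemes cheap: each iteration costs one linear solve plus a sort, with the mass constraint enforced by construction rather than by a Lagrange multiplier.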
Simultaneous reconstruction and segmentation for dynamic SPECT imaging
NASA Astrophysics Data System (ADS)
Burger, Martin; Rossmanith, Carolin; Zhang, Xiaoqun
2016-10-01
This work deals with the reconstruction of dynamic images that incorporate characteristic dynamics in certain subregions, as arising for the kinetics of many tracers in emission tomography (SPECT, PET). We make use of a basis function approach for the unknown tracer concentration by assuming that the region of interest can be divided into subregions with spatially constant concentration curves. Applying a regularised variational framework reminiscent of the Chan-Vese model for image segmentation we simultaneously reconstruct both the labelling functions of the subregions as well as the subconcentrations within each region. Our particular focus is on applications in SPECT with the Poisson noise model, resulting in a Kullback-Leibler data fidelity in the variational approach. We present a detailed analysis of the proposed variational model and prove existence of minimisers as well as error estimates. The latter apply to a more general class of problems and generalise existing results in literature since we deal with a nonlinear forward operator and a nonquadratic data fidelity. A computational algorithm based on alternating minimisation and splitting techniques is developed for the solution of the problem and tested on appropriately designed synthetic data sets. For those we compare the results to those of standard EM reconstructions and investigate the effects of Poisson noise in the data.
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad
2015-01-01
Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can handle also topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property and overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Putilov, Arcady A
2017-01-01
Compared to the literature on seasonal variation in mood and well-being, reports on the seasonality of trouble sleeping are scarce and contradictory. The aim was to extend the geography of such reports with data from people naturally exposed to high-amplitude annual variation in day length and/or temperature. Participants were residents of Turkmenia, West Siberia, South and North Yakutia, Chukotka, and Alaska. Health and sleep-wake adaptabilities, month-to-month variation in sleeping problems, and well-being and behaviors were self-assessed. More than half of the 2398 respondents acknowledged seasonality of sleeping problems. The four sleeping problems assessed demonstrated three different patterns of seasonal variation. Rates of the problems increased significantly in winter months with long nights and cold days (daytime sleepiness and difficulties falling and staying asleep) as well as in summer months with either long days (premature awakening and difficulties falling and staying asleep) or hot nights and days (all four sleeping problems). Individual differences between respondents in the pattern and level of seasonality of sleeping problems were significantly associated with differences in several other domains of individual variation, such as gender, age, ethnicity, physical health, morning-evening preference, sleep quality, and adaptability of the sleep-wake cycle. These results are relevant to understanding the roles played by natural environmental factors in the seasonality of sleeping problems, as well as to research on the prevalence of sleep disorders and methods of their prevention and treatment in regions with large seasonal differences in temperature and day length.
On some variational acceleration techniques and related methods for local refinement
NASA Astrophysics Data System (ADS)
Teigland, Rune
1998-10-01
This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966) and later generalized to multilevels (known as the additive correction multigrid method (B.R Huthchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986))) is similar to the FAC method of McCormick and Thomas (S.F McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement and suggestions for improving the performance of the method are given.
NASA Astrophysics Data System (ADS)
Abali, B. Emek
2018-04-01
For micro-architectured materials with a substructure, called metamaterials, we can realize a direct numerical simulation in the microscale by using classical mechanics. This method is accurate, however, computationally costly. Instead, a solution of the same problem in the macroscale is possible by means of the generalized mechanics. In this case, no detailed modeling of the substructure is necessary; however, new parameters emerge. A physical interpretation of these metamaterial parameters is challenging leading to a lack of experimental strategies for their determination. In this work, we exploit the variational formulation based on action principles and obtain a direct relation between a parameter used in the kinetic energy and a metamaterial parameter in the case of a viscoelastic model.
On numerically accurate finite element
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.
Discontinuous gradient differential equations and trajectories in the calculus of variations
NASA Astrophysics Data System (ADS)
Bogaevskii, I. A.
2006-12-01
The concept of gradient of smooth functions is generalized for their sums with concave functions. An existence, uniqueness, and continuous dependence theorem for increasing time is formulated and proved for solutions of an ordinary differential equation the right-hand side of which is the gradient of the sum of a concave and a smooth function. With the use of this result a physically natural motion of particles, well defined even at discontinuities of the velocity field, is constructed in the variational problem of the minimal mechanical action in a space of arbitrary dimension. For such a motion of particles in the plane all typical cases of the birth and the interaction of point clusters of positive mass are described.
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research, as we were interested in relevant equilibrium applications that contain an uncertain component, and in the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally, from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence; the algorithm was implemented computationally, obtaining numerical results for relevant examples.
The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.
NASA Astrophysics Data System (ADS)
Diaz, Victor Alfonzo; Giusti, Andrea
2018-03-01
The aim of this paper is to present a simple generalization of bosonic string theory in the framework of the theory of fractional variational problems. Specifically, we present a fractional extension of the Polyakov action, for which we compute the general form of the equations of motion and discuss the connection between the new fractional action and a generalization of the Nambu-Goto action. Consequently, we analyze the symmetries of the modified Polyakov action and try to fix the gauge, following the classical procedures. Then we solve the equations of motion in a simplified setting. Finally, we present a Hamiltonian description of the classical fractional bosonic string and introduce the fractional light-cone gauge. It is important to remark that, throughout the whole paper, we thoroughly discuss how to recover the known results as an "integer" limit of the presented model.
Variational estimate method for solving autonomous ordinary differential equations
NASA Astrophysics Data System (ADS)
Mungkasi, Sudi
2018-04-01
In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier which is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate takes an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute. This is a great advantage of the variational estimate formulation.
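The abstract does not spell out the formulation; a sketch in the spirit of variational-iteration-type estimates, with the standard optimal multiplier lambda = -1 for first-order problems, looks like this (an illustrative assumption, not necessarily the paper's exact formula):

```python
import sympy as sp

t, s = sp.symbols("t s")

def variational_correction(u, f):
    """One correction u_{n+1}(t) = u_n(t) + int_0^t lambda*(u_n'(s) - f(u_n(s))) ds
    with the optimal multiplier lambda = -1 for first-order autonomous ODEs."""
    residual = sp.diff(u, t) - f(u)
    return sp.simplify(u - sp.integrate(residual.subs(t, s), (s, 0, t)))

# Model problem u' = u, u(0) = 1; start from the constant initial guess u_0 = 1.
u = sp.Integer(1)
for _ in range(4):
    u = variational_correction(u, lambda w: w)

print(sp.expand(u))  # truncated Taylor series of exp(t)
```

Each correction is an explicit integral of the previous estimate, so the iterates can be generated symbolically; for this linear model problem, they reproduce the Taylor polynomial of the exact solution term by term.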
Efficient hybrid-symbolic methods for quantum mechanical calculations
NASA Astrophysics Data System (ADS)
Scott, T. C.; Zhang, Wenxing
2015-06-01
We present hybrid symbolic-numerical tools to generate optimized numerical code for rapid prototyping and fast numerical computation starting from a computer algebra system (CAS) and tailored to any given quantum mechanical problem. Although a major focus concerns the quantum chemistry methods of H. Nakatsuji, which have yielded successful and very accurate eigensolutions for small atoms and molecules, the tools are general and may be applied to any basis set calculation with a variational principle applied to its linear and non-linear parameters.
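A toy version of this hybrid symbolic-numerical workflow, using sympy as the CAS and a single non-linear variational parameter (an assumed example, not Nakatsuji's method): derive the Rayleigh quotient for a Gaussian trial function of the 1-D harmonic oscillator symbolically, generate numerical code from the result, and minimize over the parameter.

```python
import sympy as sp
import numpy as np

x = sp.symbols("x")
a = sp.symbols("a", positive=True)

# Symbolic step in the CAS: Gaussian trial function and the Rayleigh
# quotient <psi|H|psi>/<psi|psi> for H = -1/2 d^2/dx^2 + x^2/2.
psi = sp.exp(-a * x**2)
ham_psi = -sp.Rational(1, 2) * sp.diff(psi, x, 2) + x**2 / 2 * psi
num = sp.integrate(psi * ham_psi, (x, -sp.oo, sp.oo))
den = sp.integrate(psi**2, (x, -sp.oo, sp.oo))
energy = sp.simplify(num / den)  # = a/2 + 1/(8a)

# Code-generation step: turn the symbolic result into fast numerical code.
energy_fn = sp.lambdify(a, energy, "numpy")

# Variational principle: minimize over the non-linear parameter a.
grid = np.linspace(0.1, 2.0, 1901)
a_best = grid[np.argmin(energy_fn(grid))]
print(a_best, energy_fn(a_best))
```

Here the trial function happens to contain the exact ground state, so the minimum sits at a = 1/2 with energy 1/2; in realistic calculations the same derive-generate-minimize loop runs over many linear and non-linear basis parameters.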
Optimization of coupled systems: A critical overview of approaches
NASA Technical Reports Server (NTRS)
Balling, R. J.; Sobieszczanski-Sobieski, J.
1994-01-01
A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.
The Adiabatic Piston and the Second Law of Thermodynamics
NASA Astrophysics Data System (ADS)
Crosignani, Bruno; Di Porto, Paolo; Conti, Claudio
2002-11-01
A detailed analysis of the adiabatic-piston problem reveals peculiar dynamical features that challenge the general belief that isolated systems necessarily reach a static equilibrium state. In particular, the fact that the piston behaves like a perpetuum mobile, i.e., it never stops but keeps wandering, undergoing sizable oscillations, around the position corresponding to maximum entropy, has remarkable implications on the entropy variations of the system and on the validity of the second law when dealing with systems of mesoscopic dimensions.
Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN
NASA Technical Reports Server (NTRS)
Sheerer, T. J.
1986-01-01
The general purpose Finite Element Program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.
Training trajectories by continuous recurrent multilayer networks.
Leistritz, L; Galicki, M; Witte, H; Kochs, E
2002-01-01
This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.
NASA Astrophysics Data System (ADS)
Yu, Jie; Liu, Yikan; Yamamoto, Masahiro
2018-04-01
In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.
Ruíz, A; Ramos, A; San Emeterio, J L
2004-04-01
An estimation procedure to efficiently find approximate values of internal parameters in ultrasonic transducers intended for broadband operation would be a valuable tool for discovering internal construction data. This information is necessary in the modelling and simulation of acoustic and electrical behaviour of ultrasonic systems containing commercial transducers. There is no general solution to this generic problem of parameter estimation in the case of broadband piezoelectric probes. In this paper, this general problem is briefly analysed for broadband conditions, and the viability of applying an artificial intelligence technique, supported by modelling of the transducer's internal components, is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters related to two transducers working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. Estimation results are experimentally ratified.
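The paper's GA operates on a physical transducer model; the mechanics of GA-based parameter estimation can nevertheless be illustrated on a toy pulse-echo waveform. The model, parameter ranges, and GA settings below are all assumptions for the sketch, not those of the paper.

```python
import numpy as np

# Toy stand-in for GA-based parameter estimation: recover the amplitude,
# damping and frequency of a synthetic echo waveform
#   v(t) = A * exp(-d t) * sin(2 pi f t)
# by minimizing the mean-squared error with a simple real-coded GA
# (tournament selection, blend crossover, Gaussian mutation, elitism).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
true = np.array([1.0, 3.0, 5.0])                  # A, d, f
def model(p):
    return p[0] * np.exp(-p[1] * t) * np.sin(2 * np.pi * p[2] * t)
data = model(true)

lo, hi = np.array([0.1, 0.1, 1.0]), np.array([2.0, 6.0, 10.0])
def mse(p):
    r = model(p) - data
    return float(r @ r) / t.size

pop = rng.uniform(lo, hi, size=(40, 3))
fit = np.array([mse(p) for p in pop])
best0 = fit.min()
for gen in range(60):
    new = [pop[fit.argmin()].copy()]              # elitism: keep the best
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)     # tournament of two
        a = pop[i] if fit[i] < fit[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if fit[i] < fit[j] else pop[j]
        w = rng.uniform(size=3)                   # blend crossover
        child = w * a + (1 - w) * b
        child += rng.normal(0.0, 0.05, size=3)    # Gaussian mutation
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)
    fit = np.array([mse(p) for p in pop])
best = fit.min()
```

Elitism guarantees the best fitness never degrades across generations, which is the property the assertions below check.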
Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models.
Daunizeau, J; Friston, K J; Kiebel, S J
2009-11-01
In this paper, we describe a general variational Bayesian approach for approximate inference on nonlinear stochastic dynamic models. This scheme extends established approximate inference on hidden-states to cover: (i) nonlinear evolution and observation functions, (ii) unknown parameters and (precision) hyperparameters and (iii) model comparison and prediction under uncertainty. Model identification or inversion entails the estimation of the marginal likelihood or evidence of a model. This difficult integration problem can be finessed by optimising a free-energy bound on the evidence using results from variational calculus. This yields a deterministic update scheme that optimises an approximation to the posterior density on the unknown model variables. We derive such a variational Bayesian scheme in the context of nonlinear stochastic dynamic hierarchical models, for both model identification and time-series prediction. The computational complexity of the scheme is comparable to that of an extended Kalman filter, which is critical when inverting high dimensional models or long time-series. Using Monte-Carlo simulations, we assess the estimation efficiency of this variational Bayesian approach using three stochastic variants of chaotic dynamic systems. We also demonstrate the model comparison capabilities of the method, its self-consistency and its predictive power.
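The scheme above is derived for hierarchical stochastic dynamic models; its core mechanism, coordinate-wise optimisation of a free-energy bound with closed-form updates, can be sketched on the standard conjugate Gaussian example with unknown mean and precision. This is the textbook illustration, not the authors' dynamic model.

```python
import numpy as np

# Variational Bayes on the simplest example: unknown mean mu and precision
# tau of Gaussian data, with priors p(mu|tau) = N(mu0, (beta0*tau)^-1) and
# p(tau) = Gamma(a0, b0).  The factorized posterior q(mu)q(tau) is found by
# coordinate ascent on the free-energy bound; each update is closed form.
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=200)
N, xbar = x.size, x.mean()

mu0, beta0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3
E_tau = 1.0                                   # initial guess for E_q[tau]
for _ in range(50):
    # q(mu) = N(mu_N, 1/lam_N)
    mu_N = (beta0 * mu0 + N * xbar) / (beta0 + N)
    lam_N = (beta0 + N) * E_tau
    # q(tau) = Gamma(a_N, b_N)
    a_N = a0 + 0.5 * (N + 1)
    Esq = np.sum((x - mu_N) ** 2) + N / lam_N
    b_N = b0 + 0.5 * (Esq + beta0 * ((mu_N - mu0) ** 2 + 1 / lam_N))
    E_tau = a_N / b_N
```

With near-flat priors the variational posterior mean of mu essentially coincides with the sample mean, and E_q[tau] approximates the inverse sample variance.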
Actual Romanian research in post-newtonian dynamics
NASA Astrophysics Data System (ADS)
Mioc, V.; Stavinschi, M.
2007-05-01
We survey the recent Romanian results in the study of the two-body problem in post-Newtonian fields. Such a field is characterized, in general, by a potential of the form U(q)=|q|^{-1}+ something (small, but not compulsorily). We distinguish some classes of post-Newtonian models: relativistic (Schwarzschild, Fock, Einstein PN, Reissner-Nordström, Schwarzschild - de Sitter, etc.) and nonrelativistic (Manev, Mücket-Treder, Seeliger, gravito-elastic, etc.). Generalized models (the zonal-satellite problem, quasihomogeneous fields), as well as special cases (anisotropic Manev-type and Schwarzschild-type models, Popovici or Popovici-Manev photogravitational problem), were also tackled. The methods used in such studies are various: analytical (using mainly the theory of perturbations, but also other theories: functions of complex variable, variational calculus, etc.), geometric (qualitative approach of the theory of dynamical systems), and numerical (especially using the Poincaré-section technique). The areas of interest and the general results obtained focus on: exact or approximate analytical solutions; characteristics of local flows (especially at limit situations: collision and escape); quasiperiodic and periodic orbits; equilibria; symmetries; chaoticity; geometric description of the global flow (and physical interpretation of the phase-space structure). We emphasize some special features, which cannot be met within the Newtonian framework: black-hole effect, oscillatory collisions, radial librations, bounded orbits for nonnegative energy, existence of unstable circular motion (or unstable rest), symmetric periodic orbits within anisotropic models, etc.
On pressure measurement and seasonal pressure variations during the Phoenix mission
NASA Astrophysics Data System (ADS)
Taylor, Peter A.; Kahanpää, Henrik; Weng, Wensong; Akingunola, Ayodeji; Cook, Clive; Daly, Mike; Dickinson, Cameron; Harri, Ari-Matti; Hill, Darren; Hipkin, Victoria; Polkko, Jouni; Whiteway, Jim
2010-03-01
In situ surface pressures measured at 2 s intervals during the 150 sol Phoenix mission are presented and seasonal variations discussed. The lightweight Barocap®/Thermocap® pressure sensor system performed moderately well. However, the original data processing routine had problems because the thermal environment of the sensor was subject to more rapid variations than had been expected. Hence, the data processing routine was updated after Phoenix landed. Further evaluation and the development of a correction are needed since the temperature dependences of the Barocap sensor heads have drifted after the calibration of the sensor. The inaccuracy caused by this appears when the temperature of the unit rises above 0°C. This frequently affects data in the afternoons and precludes a full study of diurnal pressure variations at this time. Short-term fluctuations, on time scales of order 20 s are unaffected and are reported in a separate paper in this issue. Seasonal variations are not significantly affected by this problem and show general agreement with previous measurements from Mars. During the 151 sol mission the surface pressure dropped from around 860 Pa to a minimum (daily average) of 724 Pa on sol 140 (Ls 143). This local minimum occurred several sols earlier than expected based on GCM studies and Viking data. Since battery power was lost on sol 151 we are not sure if the timing of the minimum that we saw could have been advanced by a low-pressure meteorological event. On sol 95 (Ls 122), we also saw a relatively low-pressure feature. This was accompanied by a large number of vertical vortex events, characterized by short, localized (in time), low-pressure perturbations.
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.
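For two overdamped Langevin models with the same noise, Girsanov's theorem reduces the path-space relative entropy rate to the stationary average of the squared drift mismatch, which can be estimated directly from a trajectory. The one-dimensional double-well potential and harmonic coarse model below are illustrative assumptions, not the interacting-particle systems of the paper.

```python
import numpy as np

# For dX = b(X) dt + sqrt(2/beta) dW  versus  dX = b_cg(X) dt + sqrt(2/beta) dW,
# Girsanov's theorem gives the relative entropy *rate* between the stationary
# path measures as (beta/4) * E[ (b(X) - b_cg(X))^2 ].  We estimate it from a
# long trajectory of the "fine" model, comparing a double-well drift with a
# harmonic (coarse-grained) approximation around one well.
rng = np.random.default_rng(0)
beta, dt, n = 1.0, 1e-3, 200_000
b_fine = lambda x: x - x**3            # V(x) = x^4/4 - x^2/2
b_cg   = lambda x: -2.0 * (x - 1.0)    # harmonic fit around the well at x = 1

sig = np.sqrt(2 * dt / beta)
noise = rng.standard_normal(n)
x = 1.0
xs = np.empty(n)
for k in range(n):                     # Euler-Maruyama sampling of the fine model
    x += b_fine(x) * dt + sig * noise[k]
    xs[k] = x
burn = n // 10                         # discard transient
re_rate = beta / 4.0 * np.mean((b_fine(xs[burn:]) - b_cg(xs[burn:])) ** 2)
```

The estimator is a time average of squares, so it is nonnegative by construction; a larger value flags a worse coarse-grained drift in the path-space (dynamical) sense.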
NASA Astrophysics Data System (ADS)
Golubeva, Elena
2016-10-01
Variations in the solar magnetic-field ratio over 13 years are analyzed, relying on the comparison of simultaneous measurements in two spectral lines at the Mount Wilson Observatory (MWO). The ratio and correlation coefficient are calculated over the general working range of measured magnetic-field values and in various ranges of the field magnitudes. Variations in both parameters are considered. We found the following tendencies: i) the parameters show changes with the cycle of solar activity in the general case; ii) their dependence on magnetic-field magnitude is a nonlinear function of time, and this is especially pronounced in the ratio behavior; iii) several separate ranges of the field magnitudes can be distinguished based on the behavioral patterns of the ratio variations. Correspondences between these ranges and the known structural objects of the solar atmosphere are discussed. This permits us to reach the conclusion that the dependence of the considered parameters on the magnetic-field magnitude and time is connected with the variety of magnetic structural components and their cyclic rearrangements. The results presented may be useful for solving interpretation problems of solar magnetic-field measurements and for the cross-calibration of applicable instruments. They can also be of interest for tasks related to the creation of a uniform long temporal series of solar magnetic-field data from various sources.
Waylen, A; Mahmoud, O; Wills, A K; Sell, D; Sandy, J R; Ness, A R
2017-06-01
The aims of this study were to describe child behavioural and psychosocial outcomes associated with appearance and speech in the Cleft Care UK (CCUK) study. We also wanted to explore centre-level variation in child outcomes and investigate individual predictors of such outcomes. Two hundred and sixty-eight five-year-old children with non-syndromic unilateral cleft lip and palate (UCLP) were recruited to CCUK. Parents completed the Strengths and Difficulties questionnaire (SDQ) and reported their own perceptions of the child's self-confidence. Child facial appearance and symmetry were assessed using photographs, and intelligibility of speech was derived from audio-visual speech recordings. Centre-level variation in behavioural and psychosocial outcomes was examined using hierarchical models, and associations with clinical outcomes were examined using logit regression models. Children with UCLP had a higher hyperactive difficulty score than the general population. For boys, the average score was 4.5 vs 4.1 (P=.03), and for girls, the average score was 3.8 vs 3.1 (P=.008). There was no evidence of centre-level variation for behaviour or parental perceptions of the child's self-confidence. There was no evidence of associations of self-confidence or SDQ scores with either facial appearance or behaviour. Children born with UCLP have higher levels of behaviour problems than the general population. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells
2014-01-01
A variational principle for microtubules subject to a buckling load is derived by semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load as well as the natural and geometric boundary conditions of the problem is obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solutions and furthermore variational principles can provide physical insight into the problem. PMID:25214886
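The shell problem above involves three coupled PDEs, but the role of the Rayleigh quotient in a buckling variational formulation can be shown on the simplest analogue, a pinned Euler column. The discretization and trial function below are illustrative; they are not the microtubule model of the paper.

```python
import numpy as np

# The variational formulation turns buckling into minimizing a Rayleigh
# quotient R(u) = (bending energy)/(geometric term); its minimum is the
# critical load.  Simplest analogue: a pinned Euler column, EI u'''' = -P u'',
# discretized so that K u = P G u with G the SPD discrete -d^2/dx^2 and
# K = G^2 (valid for pinned ends).  Exact critical load (EI = L = 1): pi^2.
n = 50
h = 1.0 / (n + 1)
G = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -u''
K = G @ G                                                        # u''''

# Reduce K u = P G u to a symmetric standard problem via Cholesky of G.
Lc = np.linalg.cholesky(G)
C = np.linalg.solve(Lc, np.linalg.solve(Lc, K).T)
P_cr = np.linalg.eigvalsh(C).min()

# The Rayleigh quotient of any admissible trial function bounds P_cr above.
x = np.linspace(h, 1 - h, n)
u = x * (1 - x)                       # admissible trial deflection
R = (u @ K @ u) / (u @ G @ u)
```

The minimum eigenvalue recovers pi^2 to discretization accuracy, and the parabolic trial function gives the classical upper bound of about 12, illustrating how the variational formulation feeds approximate methods.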
González-Díaz, Humberto; Arrasate, Sonia; Gómez-SanJuan, Asier; Sotomayor, Nuria; Lete, Esther; Besada-Porto, Lina; Ruso, Juan M
2013-01-01
In general, perturbation methods start with a known exact solution of a problem and add "small" variation terms in order to approach a solution to a related problem without a known exact solution. Perturbation theory has been widely used in almost all areas of science. Bohr's quantum model, Heisenberg's matrix mechanics, Feynman diagrams, and Poincaré's chaos model or "butterfly effect" in complex systems are examples of perturbation theories. On the other hand, the study of Quantitative Structure-Property Relationships (QSPR) in molecular complex systems is an ideal area for the application of perturbation theory. There are several problems with exact experimental solutions (new chemical reactions, physicochemical properties, drug activity and distribution, metabolic networks, etc.) in public databases like CHEMBL. However, in all these cases, we have an even larger list of related problems without known solutions. We need to know the change in all these properties after a perturbation of initial boundary conditions. That is, when we test large sets of similar, but different, compounds and/or chemical reactions under slightly different conditions (temperature, time, solvents, enzymes, assays, protein targets, tissues, partition systems, organisms, etc.). However, to the best of our knowledge, there is no QSPR general-purpose perturbation theory to solve this problem. In this work, firstly we review general aspects and applications of both perturbation theory and QSPR models. Secondly, we formulate a general-purpose perturbation theory for multiple-boundary QSPR problems. Last, we develop three new QSPR-Perturbation theory models. The first model correctly classifies >100,000 pairs of intra-molecular carbolithiations with 75-95% Accuracy (Ac), Sensitivity (Sn), and Specificity (Sp).
The model predicts probabilities of variations in the yield and enantiomeric excess of reactions due to at least one perturbation in boundary conditions (solvent, temperature, temperature of addition, or time of reaction). The model also accounts for changes in chemical structure (connectivity structure and/or chirality patterns in substrate, product, electrophile agent, organolithium, and ligand of the asymmetric catalyst). The second model classifies more than 150,000 cases with 85-100% Ac, Sn, and Sp. The data contain experimental shifts in up to 18 different pharmacological parameters determined in >3000 assays of ADMET (Absorption, Distribution, Metabolism, Elimination, and Toxicity) properties and/or interactions between 31723 drugs and 100 targets (metabolizing enzymes, drug transporters, or organisms). The third model classifies more than 260,000 cases of perturbations in the self-aggregation of drugs and surfactants to form micelles with Ac, Sn, and Sp of 94-95%. The model predicts changes in 8 physicochemical and/or thermodynamic output parameters (critical micelle concentration, aggregation number, degree of ionization, surface area, enthalpy, free energy, entropy, heat capacity) of self-aggregation due to perturbations. The perturbations refer to changes in initial temperature, solvent, salt, salt concentration, and/or structure of the anion or cation of more than 150 different drugs and surfactants. QSPR-Perturbation Theory models may be useful for multi-objective optimization of organic synthesis, physicochemical properties, biological activity, metabolism, and distribution profiles towards the design of new drugs, surfactants, asymmetric ligands for catalysts, and other materials.
ERIC Educational Resources Information Center
Taylor, Wendy; Stacey, Kaye
2014-01-01
This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…
On the classification of the spectrally stable standing waves of the Hartree problem
NASA Astrophysics Data System (ADS)
Georgiev, Vladimir; Stefanov, Atanas
2018-05-01
We consider the fractional Hartree model, with general power non-linearity and arbitrary spatial dimension. We construct variationally the "normalized" solutions for the corresponding Choquard-Pekar model; in particular, a number of key properties, like smoothness and bell-shapedness, are established. As a consequence of the construction, we show that these solitons are spectrally stable as solutions to the time-dependent Hartree model. In addition, we analyze the spectral stability of the Moroz-Van Schaftingen solitons of the classical Hartree problem, in any dimension and for any power non-linearity. A full classification is obtained, the main conclusion of which is that only and exactly the "normalized" solutions (which exist only in a portion of the range) are spectrally stable.
On a Minimum Problem in Smectic Elastomers
NASA Astrophysics Data System (ADS)
Buonsanti, Michele; Giovine, Pasquale
2008-07-01
Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We build the energy functional from two terms, a nematic one and one accounting for the tilting phenomenon; then, working within the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.
Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model
NASA Astrophysics Data System (ADS)
Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.
2008-12-01
Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
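The winnerless-competition behaviour described above is easy to reproduce in the classic May-Leonard special case of the Lotka-Volterra model, where a heteroclinic sequence makes the dominant species switch in a fixed cyclic order. The parameters and initial condition below are standard illustrative choices, not the generalized model of the paper.

```python
import numpy as np

# May-Leonard cyclic competition: with alpha < 1 < beta and alpha + beta > 2
# the interior equilibrium is unstable and the trajectory visits the three
# single-species states in a fixed cyclic order (winnerless competition).
alpha, beta = 0.8, 1.5
A = np.array([[1.0, alpha, beta],
              [beta, 1.0, alpha],
              [alpha, beta, 1.0]])

h, steps = 0.01, 20_000
x = np.array([0.6, 0.2, 0.2])
dominant = []
for k in range(steps):
    x = x + h * x * (1.0 - A @ x)    # forward Euler on dx_i/dt = x_i(1 - (Ax)_i)
    if k % 10 == 0:
        dominant.append(int(np.argmax(x)))

# Count switches of the currently dominant species.
switches = sum(1 for a, b in zip(dominant, dominant[1:]) if a != b)
```

The sequence of dominant species, rather than any fixed-point limit, is the reproducible, predictable object here, which is the point of the transient (heteroclinic) analysis in the paper.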
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
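The paper's algorithms are primal-dual methods for the nondifferentiable BV seminorm; the effect of BV regularization on a blocky signal can still be sketched with a smoothed total-variation functional and plain gradient descent in one dimension. The signal, noise level, and parameters below are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D sketch of BV-seminorm regularization: denoise a blocky signal
# by gradient descent on
#   J(u) = 0.5*||u - d||^2 + alpha * sum sqrt((Du)^2 + eps^2),
# a smoothed total-variation functional (not the paper's primal-dual scheme).
rng = np.random.default_rng(0)
n = 200
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # one "block" edge
d = clean + 0.1 * rng.standard_normal(n)

alpha, eps, tau = 0.3, 0.05, 0.02
D = lambda u: np.diff(u)                            # forward differences

def tv(u):
    return np.sum(np.sqrt(D(u) ** 2 + eps ** 2))

def J(u):
    return 0.5 * np.sum((u - d) ** 2) + alpha * tv(u)

u = d.copy()
J0 = J(u)
for _ in range(500):
    g = D(u) / np.sqrt(D(u) ** 2 + eps ** 2)        # derivative of tv w.r.t. Du
    grad = (u - d) + alpha * (np.concatenate(([0.0], g)) - np.concatenate((g, [0.0])))
    u -= tau * grad
J1 = J(u)
```

Because the iteration starts from the data, any decrease of J forces the total variation of the iterate below that of the noisy input, while the jump itself is largely preserved, which is exactly the edge-preserving behaviour sought for blocky images.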
Evolutionary and mechanistic theories of aging.
Hughes, Kimberly A; Reynolds, Rose M
2005-01-01
Senescence (aging) is defined as a decline in performance and fitness with advancing age. Senescence is a nearly universal feature of multicellular organisms, and understanding why it occurs is a long-standing problem in biology. Here we present a concise review of both evolutionary and mechanistic theories of aging. We describe the development of the general evolutionary theory, along with the mutation accumulation, antagonistic pleiotropy, and disposable soma versions of the evolutionary model. The review of the mechanistic theories focuses on the oxidative stress resistance, cellular signaling, and dietary control mechanisms of life span extension. We close with a discussion of how an approach that makes use of both evolutionary and molecular analyses can address a critical question: Which of the mechanisms that can cause variation in aging actually do cause variation in natural populations?
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
1993-04-01
An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.
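The perturbation machinery at the heart of PFEM can be illustrated on the smallest possible example: expand the response about the mean of the random parameter and propagate variance through the sensitivity. The 2-element bar, load, and statistics below are toy assumptions, checked against Monte Carlo; this is not the authors' PFEM implementation.

```python
import numpy as np

# First-order perturbation propagation: K(E) u = f with K proportional to
# Young's modulus E, so du/dE = -K^-1 (dK/dE) u = -u/E and
# std(u_i) ~ |u_i| * sigma_E / E0 to first order.
E0, sigma_E = 200.0, 20.0                    # mean and std of E (10% scatter)
K1 = np.array([[2.0, -1.0], [-1.0, 1.0]])    # 2-element bar stiffness for E = 1
f = np.array([0.0, 1.0])

u0 = np.linalg.solve(E0 * K1, f)
du_dE = -u0 / E0                             # analytic sensitivity (K linear in E)
std_fo = np.abs(du_dE) * sigma_E             # first-order (perturbation) std

# Monte Carlo reference.
rng = np.random.default_rng(0)
E = rng.normal(E0, sigma_E, size=20_000)
u_mc = (1.0 / E)[:, None] * np.linalg.solve(K1, f)[None, :]
std_mc = u_mc.std(axis=0)
ratio = std_mc / std_fo
```

At 10% input scatter the first-order standard deviations agree with Monte Carlo to within a few percent, which is the regime in which the perturbation-based PFEM is effective.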
Aerodynamics of Engine-Airframe Interaction
NASA Technical Reports Server (NTRS)
Caughey, D. A.
1986-01-01
The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets, and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focuses upon the development of solution-adaptive grid procedures for these problems, and the development of multigrid algorithms in conjunction with both implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated upon two-dimensional problems and has been in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error; and (2) the development of a multigrid scheme for solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.
Nonlinear problems of the theory of heterogeneous slightly curved shells
NASA Technical Reports Server (NTRS)
Kantor, B. Y.
1973-01-01
An account is given of the variational method for the solution of physically and geometrically nonlinear problems of the theory of heterogeneous slightly curved shells. Examined are the bending and supercritical behavior of plates and of conical and spherical cupolas of variable thickness in a temperature field, taking into account the dependence of the elastic parameters on temperature. The bending, overall stability, and load-bearing capacity of flexible isotropic elastic-plastic shells are studied with different criteria of plasticity, taking into account compressibility and hardening. The effect of the plastic heterogeneity caused by heat treatment, surface work hardening, and irradiation by a fast neutron flux is investigated. Some problems of the dynamic behavior of flexible shells are solved. Calculations are performed in high approximations. Considerable attention is given to the construction of a machine algorithm and to checking the convergence of the iterative processes.
The Glimm scheme for perfect fluids on plane-symmetric Gowdy spacetimes
NASA Astrophysics Data System (ADS)
Barnes, A. P.; Lefloch, P. G.; Schmidt, B. G.; Stewart, J. M.
2004-11-01
We propose a new, augmented formulation of the coupled Euler-Einstein equations for perfect fluids on plane-symmetric Gowdy spacetimes. The unknowns of the augmented system are the density and velocity of the fluid and the first- and second-order spacetime derivatives of the metric. We solve the Riemann problem for the augmented system, allowing propagating discontinuities in both the fluid variables and the first- and second-order derivatives of the geometry coefficients. Our main result, based on Glimm's random choice scheme, is the existence of solutions with bounded total variation of the Euler-Einstein equations, up to the first time where a blow-up singularity (unbounded first-order derivatives of the geometry coefficients) occurs. We demonstrate the relevance of the augmented system for numerical relativity. We also consider general vacuum spacetimes and solve a Riemann problem, by relying on a theorem by Rendall on the characteristic value problem for the Einstein equations.
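Glimm's random choice scheme, the engine of the existence proof above, is simplest to see on the scalar Burgers equation rather than the Euler-Einstein system. The grid, initial data, and use of a van der Corput sampling sequence below are standard illustrative choices, not taken from the paper.

```python
import numpy as np

# Glimm's random choice scheme for u_t + (u^2/2)_x = 0: each half-step solves
# exact Riemann problems between neighbouring cells and *samples* the solution
# at a quasi-random point (van der Corput sequence) instead of averaging.
# Shocks therefore stay perfectly sharp, with no numerical diffusion, which is
# why the scheme suits bounded-variation existence arguments.
def riemann(ul, ur, xi):
    """Value of the Burgers Riemann solution at similarity point xi = x/t."""
    if ul > ur:                        # shock with speed s = (ul + ur)/2
        return ul if xi < 0.5 * (ul + ur) else ur
    if xi <= ul:                       # rarefaction fan otherwise
        return ul
    return ur if xi >= ur else xi

def vdc(k):
    """k-th term of the binary van der Corput sequence."""
    q, denom = 0.0, 1.0
    while k:
        denom *= 2.0
        q += (k & 1) / denom
        k >>= 1
    return q

dx = 0.005
x = np.arange(0.0, 2.0 + dx / 2, dx)
u = np.where(x < 0.5, 1.0, 0.0)        # single shock, exact speed 1/2
dt = dx / 2.0                          # CFL: dt * max|u| <= dx / 2
offset = 0.0
for k in range(1, 201):                # 200 half-steps on staggered grids
    theta = vdc(k)
    xi = (theta - 0.5) * dx / dt
    u = np.array([riemann(u[i], u[i + 1], xi) for i in range(u.size - 1)])
    offset += dx / 2.0                 # grid shifts half a cell per half-step

x_shock = offset + dx * np.count_nonzero(u == 1.0)   # jump location at t = 0.5
```

The computed profile remains an exact two-state jump (values only 0 and 1), and with the van der Corput sequence the sampled shock position tracks the exact location 0.75 to within a few cells.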
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Yang, Jie; Swenson, Nathan G; Zhang, Guocheng; Ci, Xiuqin; Cao, Min; Sha, Liqing; Li, Jie; Ferry Slik, J W; Lin, Luxiang
2015-08-03
The relative degree to which stochastic and deterministic processes underpin community assembly is a central problem in ecology. Quantifying local-scale phylogenetic and functional beta diversity may shed new light on this problem. We used species distribution, soil, trait and phylogenetic data to quantify whether environmental distance, geographic distance or their combination are the strongest predictors of phylogenetic and functional beta diversity on local scales in a 20-ha tropical seasonal rainforest dynamics plot in southwest China. The patterns of phylogenetic and functional beta diversity were generally consistent. The phylogenetic and functional dissimilarity between subplots (10 × 10 m, 20 × 20 m, 50 × 50 m and 100 × 100 m) was often higher than that expected by chance. The turnover of lineages and species function within habitats was generally slower than that across habitats. Partitioning the variation in phylogenetic and functional beta diversity showed that environmental distance was generally a better predictor of beta diversity than geographic distance thereby lending relatively more support for deterministic environmental filtering over stochastic processes. Overall, our results highlight that deterministic processes play a stronger role than stochastic processes in structuring community composition in this diverse assemblage of tropical trees.
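The partitioning step in the study above, splitting explained variation in beta diversity between environmental and geographic distance, can be sketched with ordinary least squares on pairwise distances. The synthetic data below are constructed so that environment dominates; the paper's actual analysis uses real subplot data and permutation-aware distance methods, so this only illustrates the logic.

```python
import numpy as np

# Variation-partitioning sketch: regress pairwise beta-diversity
# (dissimilarity) on environmental and geographic distances and compare R^2
# for each predictor alone versus both together.
rng = np.random.default_rng(0)
n_pairs = 500
env = rng.uniform(0, 1, n_pairs)            # environmental distances
geo = rng.uniform(0, 1, n_pairs)            # geographic distances
beta = 0.8 * env + 0.2 * geo + 0.1 * rng.standard_normal(n_pairs)

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y))] + list(X))
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_env = r2([env], beta)
r2_geo = r2([geo], beta)
r2_full = r2([env, geo], beta)
shared = r2_env + r2_geo - r2_full          # overlap (can be near zero or negative)
```

Comparing `r2_env` and `r2_geo`, with `r2_full` as the ceiling, is the quantitative sense in which "environmental distance was generally a better predictor of beta diversity than geographic distance".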
L^∞ Variational Problems with Running Costs and Constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu
2012-02-15
Various approaches are used to derive the Aronsson-Euler equations for L^∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L^∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.
Generalized Kinetic Description of Steady-State Collisionless Plasmas
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Liemohn, M. W.; Krivorutsky, E. N.
1997-01-01
We present a general solution to the collisionless Boltzmann (Vlasov) equation for a free-flowing plasma along a magnetic field line using Liouville's theorem, allowing for an arbitrary potential structure including non-monotonicities. The constraints of the existing collisionless kinetic transport models are explored, and the need for a more general approach to the problem of self-consistent potential energy calculations is described. Then a technique that handles an arbitrary potential energy distribution along the field line is presented and discussed. For precipitation of magnetospherically trapped hot plasma, this model yields moment calculations that vary by up to a factor of two for various potential energy structures with the same total potential drop. The differences are much greater for the high-latitude outflow scenario, giving order of magnitude variations depending on the shape of the potential energy distribution.
Linear spin-2 fields in most general backgrounds
NASA Astrophysics Data System (ADS)
Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael
2016-04-01
We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.
Björkman, Ingeborg; Berg, Johanna; Viberg, Nina; Stålsby Lundborg, Cecilia
2013-03-01
To improve education and information for general practitioners in relation to rational antibiotic prescribing for urinary tract infection (UTI), it is important to be aware of GPs' views of resistance and how it influences their choice of UTI treatment. The aim of this study was to explore variations in views of resistance and UTI treatment decisions among general practitioners (GPs) in a county in Sweden. Design: Qualitative, semi-structured interviews analysed with a phenomenographic approach and content analysis. Setting: Primary care in Kronoberg, a county in southern Sweden. Subjects: A purposeful sample of 20 GPs from 15 of 25 health centres in the county. Main outcome measures: The variation of perceptions of antibiotic resistance in UTI treatment, and how UTIs were treated according to the GPs. Three different ways of viewing resistance in UTI treatment were identified: (A) No problem, I have never seen resistance; (B) The problem is bigger somewhere else; and (C) The development of antibiotic resistance is serious and we must be careful. Moreover, GPs' perceptions of antibiotic resistance were mirrored in how they reported their treatment of UTIs in practice. There was a hierarchical scale of how GPs viewed resistance as an issue in UTI treatment. Only GPs who expressed concerns about resistance followed prescribing guidelines completely. This offers valuable insights into the planning, and most likely the outcome, of awareness or educational activities aimed at changing antibiotic prescribing behaviour.
Quality of ground water in Idaho
Yee, Johnson J.; Souza, William R.
1987-01-01
The major aquifers in Idaho are categorized under two rock types, sedimentary and volcanic, and are grouped into six hydrologic basins. Areas with adequate, minimally adequate, or deficient data available for groundwater-quality evaluations are described. Wide variations in chemical concentrations in the water occur within individual aquifers, as well as among the aquifers. The existing data base is not sufficient to describe fully the ground-water quality throughout the State; however, it does indicate that the water is generally suitable for most uses. In some aquifers, concentrations of fluoride, cadmium, and iron in the water exceed the U.S. Environmental Protection Agency's drinking-water standards. Dissolved solids, chloride, and sulfate may cause problems in some local areas. Water-quality data are sparse in many areas, and only general statements can be made regarding the areal distribution of chemical constituents. Few data are available to describe temporal variations of water quality in the aquifers. Primary concerns related to special problem areas in Idaho include (1) protection of water quality in the Rathdrum Prairie aquifer, (2) potential degradation of water quality in the Boise-Nampa area, (3) effects of widespread use of drain wells overlying the eastern Snake River Plain basalt aquifer, and (4) disposal of low-level radioactive wastes at the Idaho National Engineering Laboratory. Shortcomings in the ground-water-quality data base are categorized as (1) multiaquifer sample inadequacy, (2) constituent coverage limitations, (3) baseline-data deficiencies, and (4) data-base nonuniformity.
Analog "neuronal" networks in early vision.
Koch, C; Marroquin, J; Yuille, A
1986-01-01
Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
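The flavor of such nonconvex energy minimization can be conveyed numerically. The snippet below is our own illustration, not the paper's analog network: it minimizes a 1-D discretized surface-reconstruction energy with a truncated-quadratic (discontinuity-preserving) smoothness term by gradient descent, with a graduated relaxation of the truncation threshold standing in for the analog dynamics; all parameter values are arbitrary choices for the sketch:

```python
import numpy as np

def reconstruct(data, known, lam=1.0, steps=2000, lr=0.1):
    """Minimize sum over known points of (f - d)^2 + lam * sum_i rho(f[i+1] - f[i]),
    where rho(d) = min(d^2, alpha) is a truncated quadratic: it stops
    smoothing across large jumps, so discontinuities are preserved."""
    f = np.where(known, data, 0.0).astype(float)
    for alpha in (10.0, 0.5):        # graduated non-convexity: convex first, then truncated
        for _ in range(steps):
            grad = 2.0 * known * (f - data)            # data-fidelity gradient
            d = np.diff(f)
            g = np.where(d**2 < alpha, 2.0 * d, 0.0)   # d(rho)/dd, zero past the cutoff
            grad[:-1] -= lam * g
            grad[1:] += lam * g
            f -= lr * grad
    return f

data = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
known = np.array([1, 0, 1, 0, 0, 1, 0, 1], dtype=bool)
f = reconstruct(data, known)         # fills the gaps from sparse data
```

Once a difference exceeds the cutoff, the smoothness term exerts no pull across it, which is the discrete analogue of a line process switching off the membrane.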
Variational algorithms for nonlinear smoothing applications
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1977-01-01
A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.
Nonlinear Schrödinger equations with single power nonlinearity and harmonic potential
NASA Astrophysics Data System (ADS)
Cipolatti, R.; de Macedo Lira, Y.; Trallero-Giner, C.
2018-03-01
We consider a generalized nonlinear Schrödinger equation (GNLS) with a single power nonlinearity of the form λ|φ|^p, with p > 0 and λ ∈ ℝ, in the presence of a harmonic confinement. We report the conditions that p and λ must fulfill for the existence and uniqueness of ground states of the GNLS. We discuss the Cauchy problem and summarize which conditions are required for the nonlinear term λ|φ|^p to render the ground state solutions orbitally stable. Based on a new variational method we provide exact formulae for the minimum energy for each index p and the changing range of values of the nonlinear parameter λ. Also, we report an approximate closed analytical expression for the ground state energy, performing a comparative analysis of the present variational calculations with those obtained by a generalized Thomas-Fermi approach, and soliton solutions for the respective ranges of p and λ where these solutions can be implemented to describe the minimum energy.
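Spelled out in one common convention (our transcription; writing the harmonic confinement as |x|² and letting the nonlinearity enter the equation as λ|φ|^p φ are assumptions of this sketch, not quoted from the paper):

```latex
i\,\partial_t \varphi = -\Delta \varphi + |x|^{2}\,\varphi
    + \lambda\,|\varphi|^{p}\,\varphi,
\qquad p > 0, \quad \lambda \in \mathbb{R}.
```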
Transport properties at fluids interfaces: a molecular study for a macroscopic modelling
NASA Astrophysics Data System (ADS)
Russo, Antonio; Morciano, Matteo; Sibley, David N.; Nold, Andreas; Goddard, Benjamin D.; Asinari, Pietro; Kalliadasis, Serafim
2017-11-01
Rapid developments in the field of micro- and nano-fluidics require detailed analysis of the properties of matter at the molecular level. But despite numerous works in the literature, appropriate macroscopic relations able to integrate a microscopic description of fluid and soft matter properties at liquid-vapour and multi-fluid interfaces are missing. As a consequence, studies on interfacial phenomena and micro-device designs often rely on oversimplified assumptions, e.g. that the viscosities can be considered constant across interfaces. In our work, we present non-equilibrium MD simulations to scrutinise efficiently and systematically, through the tools of statistical mechanics, the anisotropic properties of fluids, namely density variations, stress tensor, and shear viscosity, at the fluid interfaces between liquid and vapour and between two partially miscible fluids. Our analysis has led to the formulation of a general relation between shear viscosity and density variations validated for a wide spectrum of interfacial fluid problems. In addition, it provides a rational description of other interfacial quantities of interest, including surface tension and its origins, and more generally, it offers valuable insight of molecular transport phenomena at interfaces.
[Progress in genetic research of human height].
Chen, Kaixu; Wang, Weilan; Zhang, Fuchun; Zheng, Xiufen
2015-08-01
It is well known that both environmental and genetic factors contribute to adult height variation in general population. However, heritability studies have shown that the variation in height is more affected by genetic factors. Height is a typical polygenic trait which has been studied by traditional linkage analysis and association analysis to identify common DNA sequence variation associated with height, but progress has been slow. More recently, with the development of genotyping and DNA sequencing technologies, tremendous achievements have been made in genetic research of human height. Hundreds of single nucleotide polymorphisms (SNPs) associated with human height have been identified and validated with the application of genome-wide association studies (GWAS) methodology, which deepens our understanding of the genetics of human growth and development and also provides theoretic basis and reference for studying other complex human traits. In this review, we summarize recent progress in genetic research of human height and discuss problems and prospects in this research area which may provide some insights into future genetic studies of human height.
Copy Number Variations Detection: Unravelling the Problem in Tangible Aspects.
do Nascimento, Francisco; Guimaraes, Katia S
2017-01-01
In the midst of the important genomic variants associated to the susceptibility and resistance to complex diseases, Copy Number Variations (CNV) has emerged as a prevalent class of structural variation. Following the flood of next-generation sequencing data, numerous tools publicly available have been developed to provide computational strategies to identify CNV at improved accuracy. This review goes beyond scrutinizing the main approaches widely used for structural variants detection in general, including Split-Read, Paired-End Mapping, Read-Depth, and Assembly-based. In this paper, (1) we characterize the relevant technical details around the detection of CNV, which can affect the estimation of breakpoints and number of copies, (2) we pinpoint the most important insights related to GC-content and mappability biases, and (3) we discuss the paramount caveats in the tools evaluation process. The points brought out in this study emphasize common assumptions, a variety of possible limitations, valuable insights, and directions for desirable contributions to the state-of-the-art in CNV detection tools.
Carcass Functions in Variational Calculations for Few-Body Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donchev, A.G.; Kalachev, S.A.; Kolesnikov, N.N.
For variational calculations of molecular and nuclear systems involving a few particles, it is proposed to use carcass basis functions that generalize exponential and Gaussian trial functions. It is shown that the matrix elements of the Hamiltonian are expressed in a closed form for a Coulomb potential, as well as for other popular particle-interaction potentials. The use of such carcass functions in two-center Coulomb problems reduces, in relation to other methods, the number of terms in a variational expansion by a few orders of magnitude at a commensurate or even higher accuracy. The efficiency of the method is illustrated by calculations of the three-particle Coulomb systems μμe, ppe, dde, and tte and the four-particle molecular systems H₂ and HeH⁺ of various isotopic composition. By considering the example of the ^9_Λ Be hypernucleus, it is shown that the proposed method can be used in calculating nuclear systems as well.
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
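The split Bregman iteration at the heart of the method can be illustrated on the simplest relative of the reconstruction problem, 1-D TV denoising. This is a sketch under our own parameter choices, not the authors' code; in the tomography setting the identity data term below would be replaced by the projection operator:

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=10.0, lam=1.0, iters=50):
    """1-D TV denoising via split Bregman:
    min_u (mu/2)||u - f||^2 + ||D u||_1, with the splitting d = D u
    enforced through a Bregman variable b."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D      # normal equations for the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))   # quadratic u-step
        w = D @ u + b
        d = np.sign(w) * np.maximum(np.abs(w) - 1.0 / lam, 0.0)  # soft shrinkage
        b = w - d                                                # Bregman update
    return u

rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(20), np.ones(20)])
noisy = step + 0.1 * rng.standard_normal(40)
u = tv_denoise_split_bregman(noisy)        # recovers the piecewise-constant step
```

The u-step is a cheap linear solve, which is what makes very inexact inner solvers attractive at scale, as the abstract notes for the full tomographic problem.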
Variational formulation for Black-Scholes equations in stochastic volatility models
NASA Astrophysics Data System (ADS)
Gyulov, Tihomir B.; Valkov, Radoslav L.
2012-11-01
In this note we prove existence and uniqueness of weak solutions to a boundary value problem arising from stochastic volatility models in financial mathematics. Our setting is variational, in weighted Sobolev spaces. Nevertheless, as will become apparent, our variational formulation agrees well with the stochastic part of the problem.
Palmer, Maeve A.; O’Connell, Niamh E.
2015-01-01
Simple Summary Dairy cow lameness is a major problem for the industry, causing reduced animal welfare and economic loss. Digital dermatitis (DD) is a bacterial disease causing painful lesions, generally on the heels of the rear feet, and is an important cause of lameness. There appears to be individual variation between animals in susceptibility to this disease. Particular physical, physiological and behavioural factors might influence individual susceptibility, but further work is required to clarify the influence of these factors and to determine how this information could be used to develop breeding and management strategies to reduce DD prevalence. Abstract Digital dermatitis (DD) is a bacterial disease that primarily affects the skin on the heels of cattle. It is a major cause of lameness in dairy cows and a significant problem for the dairy industry in many countries, causing reduced animal welfare and economic loss. A wide range of infection levels has been found on infected farms, prompting investigations into both farm level and animal level risk factors for DD occurrence. There also appears to be individual variation between animals in susceptibility to the disease. The identification of factors affecting individual variation in susceptibility to DD might allow changes in breeding policies or herd management which could be used to reduce DD prevalence. Factors mentioned in the literature as possibly influencing individual variation in susceptibility to DD include physical factors such as hoof conformation and properties of the skin, physiological factors such as the efficacy of the immune response, and behavioural factors such as standing half in cubicles. Further work is required to determine the influence of these factors, identify the genetic basis of variation, clarify the level of heritability of DD susceptibility and to determine how this is correlated with production and health traits currently used in breeding programmes. PMID:26479371
Singularities of the quad curl problem
NASA Astrophysics Data System (ADS)
Nicaise, Serge
2018-04-01
We consider the quad curl problem in smooth and nonsmooth domains of the space. We first give an augmented variational formulation equivalent to the one from [25] if the datum is divergence free. We describe the singularities of the variational space, which correspond to those of the Maxwell system with perfectly conducting boundary conditions. The edge and corner singularities of the solution of the corresponding boundary value problem with smooth data are also characterized. We finally obtain some regularity results for the variational solution.
NASA Technical Reports Server (NTRS)
Saltzman, Barry
1992-01-01
The development of a theory of the evolution of the climate of the earth over millions of years can be subdivided into three fundamental, nested, problems: (1) to establish by equilibrium climate models (e.g., general circulation models) the diagnostic relations, valid at any time, between the fast-response climate variables (i.e., the 'weather statistics') and both the prescribed external radiative forcing and the prescribed distribution of the slow response variables (e.g., the ice sheets and shelves, the deep ocean state, and the atmospheric CO2 concentration); (2) to construct, by an essentially inductive process, a model of the time-dependent evolution of the slow-response climatic variables over time scales longer than the damping times of these variables but shorter than the time scale of tectonic changes in the boundary conditions (e.g., altered geography and elevation of the continents, slow outgassing, and weathering) and ultra-slow astronomical changes such as in the solar radiative output; and (3) to determine the nature of these ultra-slow processes and their effects on the evolution of the equilibrium state of the climatic system about which the above time-dependent variations occur. All three problems are discussed in the context of the theory of the Quaternary climate, which will be incomplete unless it is embedded in a more general theory for the fuller Cenozoic that can accommodate the onset of the ice-age fluctuations. We construct a simple mathematical model for the Late Cenozoic climatic changes based on the hypothesis that forced and free variations of the concentration of atmospheric greenhouse gases (notably CO2), coupled with changes in the deep ocean state and ice mass, under the additional 'pacemaking' influence of earth-orbital forcing, are primary determinants of the climate state over this period. 
Our goal is to illustrate how a single model governing both very long term variations and higher frequency oscillatory variations in the Pleistocene can be formulated with relatively few adjustable parameters.
Quasistatic Evolution in Perfect Plasticity for General Heterogeneous Materials
NASA Astrophysics Data System (ADS)
Solombrino, Francesco
2014-04-01
Inspired by some recent developments in the theory of small-strain heterogeneous elastoplasticity, we both revisit and generalize the formulation of the quasistatic evolutionary problem in perfect plasticity given by Francfort and Giacomini (Commun Pure Appl Math, 65:1185-1241, 2012). We show that their definition of the plastic dissipation measure is equivalent to an abstract one, where it is defined as the supremum of the dualities between the deviatoric parts of admissible stress fields and the plastic strains. By means of this abstract definition, a viscoplastic approximation and variational techniques from the theory of rate-independent processes give the existence of an evolution satisfying an energy-dissipation balance and consequently Hill's maximum plastic work principle for an abstract and very large class of yield conditions.
Investigating the Conceptual Variation of Major Physics Textbooks
NASA Astrophysics Data System (ADS)
Stewart, John; Campbell, Richard; Clanton, Jessica
2008-04-01
The conceptual problem content of the electricity and magnetism chapters of seven major physics textbooks was investigated. The textbooks presented a total of 1600 conceptual electricity and magnetism problems. The solution to each problem was decomposed into its fundamental reasoning steps. These fundamental steps are, then, used to quantify the distribution of conceptual content among the set of topics common to the texts. The variation of the distribution of conceptual coverage within each text is studied. The variation between the major groupings of the textbooks (conceptual, algebra-based, and calculus-based) is also studied. A measure of the conceptual complexity of the problems in each text is presented.
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasi-variational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
Hinault, T; Lemaire, P
2016-01-01
In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when they accomplish arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence, suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes. © 2016 Elsevier B.V. All rights reserved.
Implementation of heaters on thermally actuated spacecraft mechanisms
NASA Technical Reports Server (NTRS)
Busch, John D.; Bokaie, Michael D.
1994-01-01
This paper presents general insight into the design and implementation of heaters as used in actuating mechanisms for spacecraft. Problems and considerations that were encountered during development of the Deep Space Probe and Science Experiment (DSPSE) solar array release mechanism are discussed. Obstacles included large expected fluctuations in ambient temperature, variations in voltage supply levels, outgassing concerns, heater circuit design, materials selection, and power control options. Successful resolution of these issues helped to establish a methodology which can be applied to many of the heater design challenges found in thermally actuated mechanisms.
Transfer of energy in Camassa-Holm and related models by use of nonunique characteristics
NASA Astrophysics Data System (ADS)
Jamróz, Grzegorz
2017-02-01
We study the propagation of energy density in finite-energy weak solutions of the Camassa-Holm and related equations. Developing the methods based on generalized nonunique characteristics, we show that the parts of energy related to positive and negative slopes are one-sided weakly continuous and of bounded variation, which allows us to define certain measures of dissipation of both parts of energy. The result is a step towards the open problem of uniqueness of dissipative solutions of the Camassa-Holm equation.
Optimal placement of excitations and sensors for verification of large dynamical systems
NASA Technical Reports Server (NTRS)
Salama, M.; Rose, T.; Garba, J.
1987-01-01
The computationally difficult problem of the optimal placement of excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization, and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments including a square plate and a 960 degrees-of-freedom Control of Flexible Structure (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
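The annealing variant used in the study is not spelled out in the abstract; a generic sketch of simulated annealing over sensor subsets looks like the following, where the toy "sum of squared mode-shape entries" score is a hypothetical stand-in for the paper's measurement-maximization criterion:

```python
import math
import random

def anneal_placement(modes, k, steps=5000, t0=1.0, cool=0.999):
    """Pick k sensor locations (rows of the mode-shape matrix) by simulated
    annealing, maximizing a toy observability score: the sum of squared
    mode-shape entries at the chosen locations. Swap moves explore the
    combinatorial space; worse moves are accepted with Boltzmann probability."""
    n = len(modes)
    score = lambda s: sum(modes[i][j] ** 2 for i in s for j in range(len(modes[0])))
    current = random.sample(range(n), k)
    cur_val = score(current)
    best, best_val = list(current), cur_val
    t = t0
    for _ in range(steps):
        out = random.choice(current)                              # swap move:
        inn = random.choice([i for i in range(n) if i not in current])
        trial = [inn if i == out else i for i in current]
        tv = score(trial)
        if tv >= cur_val or random.random() < math.exp((tv - cur_val) / t):
            current, cur_val = trial, tv
            if tv > best_val:
                best, best_val = list(trial), tv
        t *= cool                                                 # geometric cooling
    return sorted(best)

random.seed(1)
modes = [[0.0, 0.0]] * 3 + [[1.0, 1.0]] * 3   # DOFs 3-5 carry all the modal energy
picked = anneal_placement(modes, 3)
```

As the abstract notes, the result is only suboptimal in general, but the swap-move neighborhood keeps each step cheap even for large systems.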
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and first gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
Hamiltonian stability for weighted measure and generalized Lagrangian mean curvature flow
NASA Astrophysics Data System (ADS)
Kajigaya, Toru; Kunikawa, Keita
2018-06-01
In this paper, we generalize several results for the Hamiltonian stability and the mean curvature flow of Lagrangian submanifolds in a Kähler-Einstein manifold to more general Kähler manifolds including a Fano manifold equipped with a Kähler form ω ∈ 2 πc1(M) by using the method proposed by Behrndt (2011). Namely, we first consider a weighted measure on a Lagrangian submanifold L in a Kähler manifold M and investigate the variational problem of L for the weighted volume functional. We call a stationary point of the weighted volume functional f-minimal, and define the notion of Hamiltonian f-stability as a local minimizer under Hamiltonian deformations. We show such examples naturally appear in a toric Fano manifold. Moreover, we consider the generalized Lagrangian mean curvature flow in a Fano manifold which is introduced by Behrndt and Smoczyk-Wang. We generalize the result of H. Li, and show that if the initial Lagrangian submanifold is a small Hamiltonian deformation of an f-minimal and Hamiltonian f-stable Lagrangian submanifold, then the generalized MCF converges exponentially fast to an f-minimal Lagrangian submanifold.
Radar studies of the atmosphere using spatial and frequency diversity
NASA Astrophysics Data System (ADS)
Yu, Tian-You
This work provides results from a thorough investigation of atmospheric radar imaging including theory, numerical simulations, observational verification, and applications. The theory is generalized to include the existing imaging techniques of coherent radar imaging (CRI) and range imaging (RIM), which are shown to be special cases of three-dimensional imaging (3D Imaging). Mathematically, the problem of atmospheric radar imaging is posed as an inverse problem. In this study, the Fourier, Capon, and maximum entropy (MaxEnt) methods are proposed to solve the inverse problem. After the introduction of the theory, numerical simulations are used to test, validate, and exercise these techniques. Statistical comparisons of the three methods of atmospheric radar imaging are presented for various signal-to-noise ratios (SNRs), receiver configurations, and frequency samplings. The MaxEnt method is shown to generally possess the best performance for low SNR. The performance of the Capon method approaches the performance of the MaxEnt method for high SNR. In limited cases, the Capon method actually outperforms the MaxEnt method. The Fourier method generally tends to distort the model structure due to its limited resolution. Experimental justification of CRI and RIM is accomplished using the Middle and Upper (MU) Atmosphere Radar in Japan and the SOUnding SYstem (SOUSY) in Germany, respectively. A special application of CRI to the observation of polar mesosphere summer echoes (PMSE) is used to show direct evidence of wave steepening and possibly explain gravity wave variations associated with PMSE.
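The Capon estimator at the core of CRI admits a compact sketch for a uniform linear array. The array geometry, source angles, SNR, and helper names below are illustrative assumptions of ours, not the dissertation's setup.

```python
import numpy as np

def steering(angles_deg, n_ant, d=0.5):
    """Plane-wave steering vectors for a uniform linear array (d in wavelengths)."""
    theta = np.deg2rad(np.atleast_1d(angles_deg))
    pos = np.arange(n_ant)
    return np.exp(2j * np.pi * d * np.outer(pos, np.sin(theta)))

def capon_spectrum(R, scan_deg, n_ant):
    """Capon (MVDR) angular power estimate P(theta) = 1 / (a^H R^{-1} a)."""
    Rinv = np.linalg.inv(R)
    A = steering(scan_deg, n_ant)
    return 1.0 / np.real(np.einsum('ik,ij,jk->k', A.conj(), Rinv, A))

n_ant, snr = 8, 100.0
A = steering([-20.0, 25.0], n_ant)            # two point scatterers
R = snr * (A @ A.conj().T) + np.eye(n_ant)    # signal + unit-noise covariance
scan = np.linspace(-60, 60, 241)
P = capon_spectrum(R, scan, n_ant)
peaks = scan[np.argsort(P)[-2:]]              # two strongest scan angles
```

The Capon weights adaptively null power from other directions, which is why its resolution approaches MaxEnt's at high SNR, whereas a Fourier estimate proportional to a^H R a is limited by the fixed array beamwidth.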
NASA Astrophysics Data System (ADS)
Belibassakis, K. A.; Athanassoulis, G. A.
2005-05-01
The consistent coupled-mode theory (Athanassoulis & Belibassakis, J. Fluid Mech. vol. 389, 1999, p. 275) is extended and applied to the hydroelastic analysis of large floating bodies of shallow draught or ice sheets of small and uniform thickness, lying over variable bathymetry regions. A parallel-contour bathymetry is assumed, characterized by a continuous depth function of the form h(x, y) = h(x), attaining constant, but possibly different, values in the semi-infinite regions x < a and x > b. We consider the scattering problem of harmonic, obliquely incident, surface waves, under the combined effects of variable bathymetry and a floating elastic plate, extending from x = a to x = b and -∞ < y < ∞. Under the assumption of small-amplitude incident waves and small plate deflections, the hydroelastic problem is formulated within the context of linearized water-wave and thin-elastic-plate theory. The problem is reformulated as a transition problem in a bounded domain, for which an equivalent, Luke-type (unconstrained) variational principle is given. In order to consistently treat the wave field beneath the elastic floating plate, down to the sloping bottom boundary, a complete, local, hydroelastic-mode series expansion of the wave field is used, enhanced by an appropriate sloping-bottom mode. The latter enables the consistent satisfaction of the Neumann bottom-boundary condition on a general topography. By introducing this expansion into the variational principle, an equivalent coupled-mode system of horizontal equations in the plate region (a ≤ x ≤ b) is derived. Boundary conditions are also provided by the variational principle, ensuring the complete matching of the wave field at the vertical interfaces (x = a and x = b), and the requirements that the edges of the plate are free of moment and shear force. 
Numerical results concerning floating structures lying over flat, shoaling and corrugated seabeds are presented and compared, and the effects of wave direction, bottom slope and bottom corrugations on the hydroelastic response are presented and discussed. The present method can be easily extended to the fully three-dimensional hydroelastic problem, including bodies or structures characterized by variable thickness (draught), flexural rigidity and mass distributions.
Probabilistic Low-Rank Multitask Learning.
Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun
2018-03-01
In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks, and the latter learns the incoherent sparse patterns private to each task. We derive the model and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.
2016-01-01
Recent studies of children's tool innovation have revealed that there is variation in children's success in middle childhood. In two individual differences studies, we sought to identify personal characteristics that might predict success on an innovation task. In Study 1, we found that although measures of divergent thinking were related to each other, they did not predict innovation success. In Study 2, we measured executive functioning, including inhibition, working memory, attentional flexibility, and ill-structured problem-solving. None of these measures predicted innovation; instead, innovation was predicted by children's performance on a receptive vocabulary scale that may function as a proxy for general intelligence. We did not find evidence that children's innovation was predicted by specific personal characteristics. PMID:26926280
The concept of physical surface in nuclear matter
NASA Astrophysics Data System (ADS)
Mazilu, Nicolae; Agop, Maricel
2015-02-01
The main point of a physical definition of surface forces in the matter in general, especially in the nuclear matter, is that the curvature of surfaces and its variation should be physically defined. The forces are therefore just the vehicles of introducing physics. The problem of mathematical definition of a surface in term of the curvature parameters thus naturally occurs. The present work addresses this problem in terms of the asymptotic directions of a surface in a point. A physical meaning of these parameters is given, first in terms of inertial forces, then in terms of a differential theory of colors, whereby the space of curvature parameters is identified with the color space. The work concludes with an image of the evolution of a local portion of a surface.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value, and the method noise, the proposed algorithm outperforms state-of-the-art methods.
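As a minimal illustration of the Poisson data-fidelity term, the sketch below uses classic Richardson-Lucy deconvolution, an EM scheme for the Poisson likelihood. This is not the authors' dictionary-plus-TV algorithm, and the 1-D toy setup is our own assumption.

```python
import numpy as np

def conv_circ(x, k):
    """Circular convolution via FFT (kernel indexed with its centre at 0)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

def richardson_lucy(y, k, iters=100, eps=1e-12):
    """Richardson-Lucy deconvolution: multiplicative EM updates for the
    Poisson model y ~ Poisson(k * x); iterates stay nonnegative."""
    x = np.full_like(y, max(y.mean(), eps))
    k_adj = np.roll(k[::-1], 1)              # adjoint = time-reversed kernel
    for _ in range(iters):
        ratio = y / (conv_circ(x, k) + eps)
        x = x * conv_circ(ratio, k_adj)
    return x

# Toy 1-D example: blur a sparse nonnegative signal, add Poisson noise, deblur.
rng = np.random.default_rng(0)
n = 64
truth = np.zeros(n)
truth[[10, 30, 31, 50]] = [80.0, 60.0, 60.0, 40.0]
k = np.zeros(n)
k[[-1, 0, 1]] = [0.25, 0.5, 0.25]            # normalized 3-tap blur
lam = np.clip(conv_circ(truth, k), 0.0, None)
y = rng.poisson(lam).astype(float)
xhat = richardson_lucy(y, k)
```

The ratio update is exactly where the Poisson statistics enter; a MAP model such as the paper's adds the dictionary and total-variation priors on top of this likelihood.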
Samanta, Atanu; Jain, Manish; Singh, Abhishek K
2015-08-14
The reported values of bandgap of rutile GeO2 calculated by the standard density functional theory within local-density approximation (LDA)/generalized gradient approximation (GGA) show a wide variation (∼2 eV), whose origin remains unresolved. Here, we investigate the reasons for this variation by studying the electronic structure of rutile-GeO2 using many-body perturbation theory within the GW framework. The bandgap as well as valence bandwidth at Γ-point of rutile phase shows a strong dependence on volume change, which is independent of bandgap underestimation problem of LDA/GGA. This strong dependence originates from a change in hybridization among O-p and Ge-(s and p) orbitals. Furthermore, the parabolic nature of first conduction band along X-Γ-M direction changes towards a linear dispersion with volume expansion.
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Truhlar, Donald G.
1990-01-01
The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.
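The first efficiency trick, exploiting Hermiticity so that each integral is computed only once, can be sketched generically. The `hermitian_fill` helper and the toy kernel are hypothetical stand-ins for the expensive integrals, not the paper's code.

```python
import numpy as np

def hermitian_fill(element, n):
    """Assemble an n x n Hermitian matrix computing only the upper triangle.

    element(i, j) is assumed to be an expensive integral obeying the
    symmetry H[j, i] = conj(H[i, j]); nearly half the evaluations are
    skipped by filling the lower triangle from the upper one.
    """
    H = np.zeros((n, n), dtype=complex)
    calls = 0
    for i in range(n):
        for j in range(i, n):
            H[i, j] = element(i, j)
            calls += 1
            if j != i:
                H[j, i] = np.conj(H[i, j])
    return H, calls

# Toy "integral": any kernel with the required conjugate symmetry.
f = lambda i, j: (i + j) + 1j * (j - i)
H, calls = hermitian_fill(f, 4)   # 10 calls instead of 16
```

Localized basis functions add a second saving on top of this: matrix elements between basis functions with disjoint support vanish and need not be evaluated at all.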
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
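The shared-column-covariance idea can be illustrated with a much simpler EM-style sketch, which is our own simplification rather than the paper's variational-GAMP algorithm: each column's missing entries are imputed by the Gaussian conditional mean under a common covariance, which is then re-estimated from the completed matrix.

```python
import numpy as np

def complete_em(Y, mask, iters=50, reg=1e-3):
    """EM-style completion assuming columns y_j ~ N(0, S) with shared S.

    Hypothetical simplification of a hierarchical Gaussian prior model:
    unobserved entries of each column are set to the conditional mean
    given the observed entries, then S is re-estimated.
    """
    m, n = Y.shape
    X = np.where(mask, Y, 0.0)
    for _ in range(iters):
        S = X @ X.T / n + reg * np.eye(m)    # shared column covariance
        for j in range(n):
            o = mask[:, j]
            u = ~o
            if u.any() and o.any():
                # E[x_u | x_o] = S_uo S_oo^{-1} x_o for a Gaussian column
                X[u, j] = S[np.ix_(u, o)] @ np.linalg.solve(S[np.ix_(o, o)], Y[o, j])
        X[mask] = Y[mask]                    # keep observed entries fixed
    return X

rng = np.random.default_rng(1)
U, V = rng.normal(size=(20, 2)), rng.normal(size=(2, 30))
M = U @ V                                    # rank-2 ground truth
mask = rng.random(M.shape) < 0.6             # observe ~60% of entries
Xhat = complete_em(np.where(mask, M, 0.0), mask)
```

In the paper's model the Wishart hyperprior on the precision matrix plays the role of this covariance re-estimation, and GAMP replaces the explicit `solve` calls that become expensive at scale.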
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
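A standard instance of the Ritz method (a textbook example, not drawn from the article itself): bound the lowest eigenvalue of -u'' = λu on [0, 1] with u(0) = u(1) = 0 by evaluating the Rayleigh quotient of a single admissible trial function.

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Trial function u(x) = x(1 - x) satisfies the boundary conditions, and the
# Rayleigh quotient R[u] = int u'^2 dx / int u^2 dx is an upper bound on the
# exact lowest eigenvalue pi^2 ~ 9.8696.
x = np.linspace(0.0, 1.0, 20001)
u = x * (1.0 - x)
du = 1.0 - 2.0 * x
R = trap(du**2, x) / trap(u**2, x)   # analytically (1/3) / (1/30) = 10
```

The quotient equals 10, within 1.4% of π². Adding more trial functions and minimizing over their coefficients, which is the Ritz method proper, systematically tightens the bound.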
Variational Problems with Long-Range Interaction
NASA Astrophysics Data System (ADS)
Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro
2018-06-01
We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional $D(u) = \sum_{i=1}^{k} \int_{\Omega} |\nabla u_i|^2 \, dx$ and the Rayleigh functional.
Mandt, Ingunn; Horn, Anne Marie; Ekedahl, Anders; Granas, Anne Gerd
2010-03-01
Prescription intervention frequencies have been found to vary as much as 10-fold among Norwegian pharmacies and among pharmacists within the same pharmacy. To explore community pharmacists' perceptions of how their prescription intervention practices were influenced by their working environment, their technological resources, the physical and social structures of the pharmacies, their relations with colleagues, and by the individual pharmacist's professional skills. Two focus groups consisting of 14 community pharmacists in total, from urban and rural areas in Norway, discussed their working procedures and professional judgments related to prescription interventions. Organizational theories were used as theoretical and analytical frameworks in the study. A framework based on Leavitt's organizational model was used to structure our interview guide. The study units were the statements of the individual pharmacists. Recurrent themes were identified and condensed. Two processes describing variations in the dispensing workflow, including prescription interventions, were derived--an active dispensing process extracting information about the patient's medication from several sources and a fast dispensing process focusing mainly on the information available on the prescription. Both workflow processes were used in the same pharmacies and by the same pharmacist but on different occasions. A pharmacy layout allowing interactions between pharmacist and patients, and a convenient organization of technology, layout, pharmacist-patient and pharmacist-coworker transactions at the workplace, was essential for detecting and solving prescription problems. Pharmacists limited their contact with general practitioners when they considered the problem a formality and/or when they knew the answers themselves. The combined use of dispensing software and the Internet was a driving force toward more independent and cognitively advanced prescription interventions. 
Implementation of a general organizational model made it easier to analyze and interpret the pharmacists' intervention practices. Working environment, technology, management and professional skills may all contribute to variations in pharmacists' prescription intervention practices in and between community pharmacies. Copyright 2010 Elsevier Inc. All rights reserved.
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Irrational exuberance for resolved species trees.
Hahn, Matthew W; Nakhleh, Luay
2016-01-01
Phylogenomics has largely succeeded in its aim of accurately inferring species trees, even when there are high levels of discordance among individual gene trees. These resolved species trees can be used to ask many questions about trait evolution, including the direction of change and number of times traits have evolved. However, the mapping of traits onto trees generally uses only a single representation of the species tree, ignoring variation in the gene trees used to construct it. Because genes underlie traits, these results imply that many traits follow topologies that are discordant with the species topology. As a consequence, standard methods for character mapping will incorrectly infer the number of times a trait has evolved. This phenomenon, dubbed "hemiplasy," poses many problems in analyses of character evolution. Here we outline these problems, explaining where and when they are likely to occur. We offer several ways in which the possible presence of hemiplasy can be diagnosed, and discuss multiple approaches to dealing with the problems presented by underlying gene tree discordance when carrying out character mapping. Finally, we discuss the implications of hemiplasy for general phylogenetic inference, including the possible drawbacks of the widespread push for "resolved" species trees. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
Galway, Lindsay P; Allen, Diana M; Parkes, Margot W; Takaro, Tim K
2014-03-01
Acute gastro-intestinal illness (AGI) is a major cause of mortality and morbidity worldwide and an important public health problem. Despite the fact that AGI is currently responsible for a huge burden of disease throughout the world, important knowledge gaps exist in terms of its epidemiology. Specifically, an understanding of seasonality and the factors driving seasonal variation remains elusive. This paper aims to assess variation in the incidence of AGI in British Columbia (BC), Canada over an 11-year study period. We assessed variation in AGI dynamics in general, and disaggregated by hydroclimatic regime and drinking water source. We used several different visual and statistical techniques to describe and characterize seasonal and annual patterns in AGI incidence over time. Our results consistently illustrate marked seasonal patterns; seasonality remains when the dataset is disaggregated by hydroclimatic regime and drinking water source; however, differences in the magnitude and timing of the peaks and troughs are noted. We conclude that systematic description of infectious illness dynamics over time is a valuable tool for informing disease prevention strategies and generating hypotheses to guide future research in an era of global environmental change.
Learning Grasp Context Distinctions that Generalize
NASA Technical Reports Server (NTRS)
Platt, Robert; Grupen, Roderic A.; Fagg, Andrew H.
2006-01-01
Control-based approaches to grasp synthesis create grasping behavior by sequencing and combining control primitives. In the absence of any other structure, these approaches must evaluate a large number of feasible control sequences as a function of object shape, object pose, and task. This work explores a new approach to grasp synthesis that limits consideration to variations on a generalized localize-reach-grasp control policy. A new learning algorithm, known as schema structured learning, is used to learn which instantiations of the generalized policy are most likely to lead to a successful grasp in different problem contexts. Two experiments are described where Dexter, a bimanual upper torso, learns to select an appropriate grasp strategy as a function of object eccentricity and orientation. In addition, it is shown that grasp skills learned in this way can generalize to new objects. Results are presented showing that after learning how to grasp a small, representative set of objects, the robot's performance quantitatively improves for similar objects that it has not experienced before.
Power allocation for SWIPT in K-user interference channels using game theory
NASA Astrophysics Data System (ADS)
Wen, Zhigang; Liu, Ying; Liu, Xiaoqing; Li, Shan; Chen, Xianya
2018-12-01
A simultaneous wireless information and power transfer system in interference channels of multiple users is considered. In this system, each transmitter sends one data stream to its targeted receiver, which causes interference to the other receivers. Since all transmitter-receiver links want to maximize their own average transmission rate, a power allocation problem under the transmit power constraints and the energy-harvesting constraints is developed. To solve this problem, we propose a game theory framework. Then, we convert the game into a variational inequality problem by establishing the connection between game theory and variational inequalities, and solve the variational inequality problem. Through theoretical analysis, the existence and uniqueness of the Nash equilibrium are both guaranteed by the theory of variational inequalities. A distributed iterative alternating optimization water-filling algorithm is derived and proved to converge. Numerical results show that the proposed algorithm converges quickly and achieves a higher sum rate than the unaided scheme.
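The water-filling step underlying such algorithms can be sketched for a single link; this is the textbook building block (solved here by bisection on the water level), not the paper's distributed game-theoretic iteration, which alternates updates of this kind across users.

```python
import numpy as np

def waterfill(gains, p_total):
    """Classical water-filling: maximize sum log(1 + g_i p_i) s.t. sum p_i = P.

    The KKT conditions give p_i = max(mu - 1/g_i, 0); the water level mu is
    found by bisection so the powers sum to the budget.
    """
    inv = 1.0 / np.asarray(gains, dtype=float)   # inverse channel gains
    lo, hi = inv.min(), inv.max() + p_total
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > p_total:
            hi = mu       # water level too high
        else:
            lo = mu       # water level too low (or exact)
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

# Strong channels get more power; the weakest channel here gets none.
p = waterfill([2.0, 1.0, 0.25], p_total=1.0)
```

For gains (2, 1, 0.25) and unit budget the level settles at mu = 1.25, giving the allocation (0.75, 0.25, 0): power is poured into the "deepest" (strongest) channels first.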
Interaction of Theory and Practice to Assess External Validity.
Leviton, Laura C; Trujillo, Mathew D
2016-01-18
Variations in local context bedevil the assessment of external validity: the ability to generalize about effects of treatments. For evaluation, the challenges of assessing external validity are intimately tied to the translation and spread of evidence-based interventions. This makes external validity a question for decision makers, who need to determine whether to endorse, fund, or adopt interventions that were found to be effective and how to ensure high quality once they spread. To present the rationale for using theory to assess external validity and the value of more systematic interaction of theory and practice. We review advances in external validity, program theory, practitioner expertise, and local adaptation. Examples are provided for program theory, its adaptation to diverse contexts, and generalizing to contexts that have not yet been studied. The often critical role of practitioner experience is illustrated in these examples. Work is described that the Robert Wood Johnson Foundation is supporting to study treatment variation and context more systematically. Researchers and developers generally see a limited range of contexts in which the intervention is implemented. Individual practitioners see a different and often a wider range of contexts, albeit not a systematic sample. Organized and taken together, however, practitioner experiences can inform external validity by challenging the developers and researchers to consider a wider range of contexts. Researchers have developed a variety of ways to adapt interventions in light of such challenges. In systematic programs of inquiry, as opposed to individual studies, the problems of context can be better addressed. Evaluators have advocated an interaction of theory and practice for many years, but the process can be made more systematic and useful. 
Systematic interaction can set priorities for assessment of external validity by examining the prevalence and importance of context features and treatment variations. Practitioner interaction with researchers and developers can assist in sharpening program theory, reducing uncertainty about treatment variations that are consistent or inconsistent with the theory, inductively ruling out the ones that are harmful or irrelevant, and helping set priorities for more rigorous study of context and treatment variation. © The Author(s) 2016.
Variations in embodied energy and carbon emission intensities of construction materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Omar, Wan-Mohd-Sabki; School of Environmental Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis; Doh, Jeung-Hwan, E-mail: j.doh@griffith.edu.au
2014-11-15
Identification of parameter variation allows us to conduct more detailed life cycle assessment (LCA) of the energy and carbon emissions of materials over their life cycle. Previous research studies have demonstrated that hybrid LCA (HLCA) can generally overcome the problems of incompleteness and accuracy in embodied energy (EE) and carbon (EC) emission assessment. Unfortunately, the current interpretation and quantification procedure has not been extensively and empirically studied in a qualitative manner, especially in hybridising between process LCA and I-O LCA. To address this weakness, this study empirically demonstrates the changes in EE and EC intensities caused by variations to key parameters in material production. Using Australia and Malaysia as a case study, the results are compared with previous hybrid models to identify key parameters and issues. The parameters considered in this study are technological changes, energy tariffs, primary energy factors, disaggregation constant, emission factors, and material price fluctuation. It was found that changes in technological efficiency, energy tariffs and material prices caused significant variations in the model. Finally, the comparison of hybrid models revealed that non-energy-intensive materials greatly influence the variations due to high indirect energy and carbon emissions in the upstream boundary of material production, and as such, any decision related to these materials should be considered carefully. - Highlights: • We investigate the EE and EC intensity variation in Australia and Malaysia. • The influences of parameter variations on hybrid LCA model were evaluated. • Key significant contributions to the EE and EC intensity variation were identified. • High indirect EE and EC content caused significant variation in hybrid LCA models. • Non-energy-intensive material caused variation between hybrid LCA models.
Nyström, Monica E; Terris, Darcey D; Sparring, Vibeke; Tolf, Sara; Brown, Claire R
2012-01-01
Our objective was to test whether the Structured Problem and Success Inventory (SPI) instrument could capture mental representations of organizational and work-related problems as described by individuals working in health care organizations and to test whether these representations varied according to organizational position. A convenience sample (n = 56) of middle managers (n = 20), lower-level managers (n = 20), and staff (n = 16) from health care organizations in Stockholm (Sweden) attending organizational development courses during 2003-2004 was recruited. Participants used the SPI to describe the 3 most pressing organizational and work-related problems. Data were systematically reviewed to identify problem categories and themes. One hundred sixty-four problems were described, clustered into 13 problem categories. Generally, middle managers focused on organizational factors and managerial responsibilities, whereas lower-level managers and staff focused on operational issues and what others did or ought to do. Furthermore, we observed similarities and variation in perceptions and their association with respondents' position within an organization. Our results support the need for further evaluation of the SPI as a promising tool for health care organizations. Collecting structured inventories of organizational and work-related problems from multiple perspectives may assist in the development of shared understandings of organizational challenges and lead to more effective and efficient processes of solution planning and implementation.
NASA Astrophysics Data System (ADS)
Bunge, H.; Hagelberg, C.; Travis, B.
2002-12-01
EarthScope will deliver data on structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to transform our use of geodynamic models increasingly from conditions that are data poor to an environment that is data rich. Oceanographers and meteorologists have already made substantial progress in adapting to this environment by developing new approaches of interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope-derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem of finding optimal seismic velocity structures faced by seismologists. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible by a highly efficient finite element approach based on the 3-D spherical fully parallelized mantle dynamics code TERRA, implemented on a cost-effective PC cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems having a spatial discretization of less than 50 km throughout the mantle. 
We present a synthetic high-resolution modeling experiment to demonstrate that mid-Cretaceous mantle structure can be inferred accurately from our inverse approach, assuming present-day mantle structure is well known, even if the initial first guess for the mid-Cretaceous mantle involves only a simple 1-D radial temperature profile. We suggest that geodynamic inverse modeling should make it possible to infer a number of flow parameters from observational constraints on the mantle.
Maximum caliber inference of nonequilibrium processes
NASA Astrophysics Data System (ADS)
Otten, Moritz; Stock, Gerhard
2010-07-01
Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
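The core computation described here, weighting an ensemble of trajectories so that it reproduces the input data on average, can be sketched for a toy system. The following is a minimal illustration under assumed conditions (a two-state system observed only through its mean occupancy), not the authors' implementation: trajectories are weighted by exp(lam * total occupancy of state 1), and the Lagrange multiplier lam is found by bisection so that the ensemble average matches a target value.

```python
import itertools
import math

def maxcal_lambda(T, target_mean, lo=-20.0, hi=20.0, tol=1e-10):
    """Find the Lagrange multiplier lam such that the MaxCal trajectory
    ensemble over binary trajectories of length T reproduces target_mean,
    the prescribed average occupancy of state 1 per time step."""
    trajs = list(itertools.product([0, 1], repeat=T))

    def mean_occupancy(lam):
        # MaxCal weight of a trajectory: exp(lam * total time in state 1)
        weights = [math.exp(lam * sum(tr)) for tr in trajs]
        Z = sum(weights)  # "caliber" partition function over trajectories
        return sum(w * sum(tr) / T for w, tr in zip(weights, trajs)) / Z

    # mean_occupancy is monotone increasing in lam, so bisection converges
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_occupancy(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = maxcal_lambda(T=6, target_mean=0.7)
```

Because this particular constraint factorizes over time steps, the recovered multiplier satisfies lam = ln(0.7/0.3); with genuinely dynamical constraints (e.g. a fixed average number of transitions) the same root-finding on lam applies, but the trajectory weights no longer factorize.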
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has received great attention for this problem. Owing to its theoretical imperfection, total variation produces a blocky effect on smooth regions and blurs edges. To overcome this problem, in this paper, we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information.
Counterflow diffusion flames: effects of thermal expansion and non-unity Lewis numbers
NASA Astrophysics Data System (ADS)
Koundinyan, Sushilkumar P.; Matalon, Moshe; Stewart, D. Scott
2018-05-01
In this work we re-examine the counterflow diffusion flame problem focusing in particular on the flame-flow interactions due to thermal expansion and its influence on various flame properties such as flame location, flame temperature, reactant leakage and extinction conditions. The analysis follows two different procedures: an asymptotic approximation for large activation energy chemical reactions, and a direct numerical approach. The asymptotic treatment follows the general theory of Cheatham and Matalon, which consists of a free-boundary problem with jump conditions across the surface representing the reaction sheet, and is well suited for variable-density flows and for mixtures with non-unity and distinct Lewis numbers for the fuel and oxidiser. Due to density variations, the species and energy transport equations are coupled to the Navier-Stokes equations and the problem does not possess an analytical solution. We thus propose and implement a methodology for solving the free-boundary problem numerically. Results based on the asymptotic approximation are then verified against those obtained from the 'exact' numerical integration of the governing equations, comparing predictions of the various flame properties.
A variational theorem for creep with applications to plates and columns
NASA Technical Reports Server (NTRS)
Sanders, J. Lyell, Jr.; McComb, Harvey G., Jr.; Schlechte, Floyd R.
1958-01-01
A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.
Variational Methods For Sloshing Problems With Surface Tension
NASA Astrophysics Data System (ADS)
Tan, Chee Han; Carlson, Max; Hohenegger, Christel; Osting, Braxton
2016-11-01
We consider the sloshing problem for an incompressible, inviscid, irrotational fluid in a container, including effects due to surface tension on the free surface. We restrict ourselves to a constant contact angle and we seek time-harmonic solutions of the linearized problem, which describes the time-evolution of the fluid due to a small initial disturbance of the surface at rest. As opposed to the zero surface tension case, where the problem reduces to a partial differential equation for the velocity potential, we obtain a coupled system for the velocity potential and the free surface displacement. We derive a new variational formulation of the coupled problem and establish the existence of solutions using the direct method from the Calculus of Variations. In the limit of zero surface tension, we recover the variational formulation of the classical Steklov eigenvalue problem, as derived by B. A. Troesch. For the particular case of an axially symmetric container, we propose a finite element numerical method for computing the sloshing modes of the coupled system. The scheme is implemented in FEniCS and we obtain a qualitative description of the effect of surface tension on the sloshing modes.
Oran, Omer Faruk; Ider, Yusuf Ziya
2012-08-21
Most algorithms for magnetic resonance electrical impedance tomography (MREIT) concentrate on reconstructing the internal conductivity distribution of a conductive object from the Laplacian of only one component of the magnetic flux density (∇²B(z)) generated by the internal current distribution. In this study, a new algorithm is proposed to solve this ∇²B(z)-based MREIT problem which is mathematically formulated as the steady-state scalar pure convection equation. Numerical methods developed for the solution of the more general convection-diffusion equation are utilized. It is known that the solution of the pure convection equation is numerically unstable if sharp variations of the field variable (in this case conductivity) exist or if there are inconsistent boundary conditions. Various stabilization techniques, based on introducing artificial diffusion, are developed to handle such cases and in this study the streamline upwind Petrov-Galerkin (SUPG) stabilization method is incorporated into the Galerkin weighted residual finite element method (FEM) to numerically solve the MREIT problem. The proposed algorithm is tested with simulated and also experimental data from phantoms. Successful conductivity reconstructions are obtained by solving the related convection equation using the Galerkin weighted residual FEM when there are no sharp variations in the actual conductivity distribution. However, when there is noise in the magnetic flux density data or when there are sharp variations in conductivity, it is found that SUPG stabilization is beneficial.
Conditional Random Fields for Fast, Large-Scale Genome-Wide Association Studies
Huang, Jim C.; Meek, Christopher; Kadie, Carl; Heckerman, David
2011-01-01
Understanding the role of genetic variation in human diseases remains an important problem to be solved in genomics. An important component of such variation consists of variations at single sites in DNA, or single nucleotide polymorphisms (SNPs). Typically, the problem of associating particular SNPs to phenotypes has been confounded by hidden factors such as the presence of population structure, family structure or cryptic relatedness in the sample of individuals being analyzed. Such confounding factors lead to a large number of spurious associations and missed associations. Various statistical methods have been proposed to account for such confounding factors such as linear mixed-effect models (LMMs) or methods that adjust data based on a principal components analysis (PCA), but these methods either suffer from low power or cease to be tractable for larger numbers of individuals in the sample. Here we present a statistical model for conducting genome-wide association studies (GWAS) that accounts for such confounding factors. Our method's runtime scales quadratically in the number of individuals being studied, with only a modest loss in statistical power as compared to LMM-based and PCA-based methods when testing on synthetic data that was generated from a generalized LMM. Applying our method to both real and synthetic human genotype/phenotype data, we demonstrate the ability of our model to correct for confounding factors while requiring significantly less runtime relative to LMMs. We have implemented methods for fitting these models, which are available at http://www.microsoft.com/science. PMID:21765897
Adaptive force produced by stress-induced regulation of random variation intensity.
Shimansky, Yury P
2010-08-01
The Darwinian theory of life evolution is capable of explaining the majority of related phenomena. At the same time, the mechanisms of optimizing traits beneficial to a population as a whole but not directly to an individual remain largely unclear. There are also significant problems with explaining the phenomenon of punctuated equilibrium. From another perspective, multiple mechanisms for the regulation of the rate of genetic mutations according to the environmental stress have been discovered, but their precise functional role is not well understood yet. Here a novel mathematical paradigm called a Kinetic-Force Principle (KFP), which can serve as a general basis for biologically plausible optimization methods, is introduced and its rigorous derivation is provided. Based on this principle, it is shown that, if the rate of random changes in a biological system is proportional, even only roughly, to the amount of environmental stress, a virtual force is created, acting in the direction of stress relief. It is demonstrated that KFP can provide important insights into solving the above problems. Evidence is presented in support of a hypothesis that nature employs KFP for accelerating adaptation in biological systems. A detailed comparison between KFP and the principle of variation and natural selection is presented and their complementarity is revealed. It is concluded that KFP is not a competing alternative, but a powerful addition to the principle of variation and natural selection. It is also shown that KFP can be used in multiple ways for adaptation of individual biological organisms.
NASA Astrophysics Data System (ADS)
Kumar, Rajneesh; Singh, Kulwinder; Pathania, Devinder Singh
2017-07-01
The purpose of this paper is to study the variations in temperature, radial and normal displacement, normal stress, shear stress and couple stress in a micropolar thermoelastic solid in the context of fractional order theory of thermoelasticity. An eigenvalue approach together with Laplace and Hankel transforms is employed to obtain the general solution of the problem. The field variables corresponding to different fractional order theories of thermoelasticity have been obtained in the transformed domain. The general solution is applied to an infinite space subjected to a concentrated load at the origin. To obtain the solution in the physical domain, a numerical inversion technique has been applied, and numerically computed results are depicted graphically to analyze the effects of the fractional order parameter on the field variables.
Mathematics Competency for Beginning Chemistry Students Through Dimensional Analysis.
Pursell, David P; Forlemu, Neville Y; Anagho, Leonard E
2017-01-01
Mathematics competency in nursing education and practice may be addressed by an instructional variation of the traditional dimensional analysis technique typically presented in beginning chemistry courses. The authors studied 73 beginning chemistry students using the typical dimensional analysis technique and the variation technique. Student quantitative problem-solving performance was evaluated. Students using the variation technique scored significantly better (18.3 of 20 points, p < .0001) on the final examination quantitative titration problem than those who used the typical technique (10.9 of 20 points). American Chemical Society examination scores and in-house assessment indicate that better performing beginning chemistry students were more likely to use the variation technique rather than the typical technique. The variation technique may be useful as an alternative instructional approach to enhance beginning chemistry students' mathematics competency and problem-solving ability in both education and practice. [J Nurs Educ. 2017;56(1):22-26.]. Copyright 2017, SLACK Incorporated.
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
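The idea of differentiating a response function through the state equation can be illustrated with a toy adjoint computation (an assumed minimal example, unrelated to the shallow-water model itself): for a linear state equation A(p)u = b and response J = c·u, a single adjoint solve yields the sensitivity dJ/dp, which we check against a finite difference.

```python
import numpy as np

def sensitivity(p):
    """Toy adjoint sensitivity: the state u solves A(p) u = b and the
    response is J = c @ u. Then dJ/dp = -lam @ (dA/dp) @ u, where the
    adjoint lam solves A.T @ lam = c."""
    A = np.array([[2.0 + p, 1.0], [0.0, 3.0]])
    dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # derivative of A w.r.t. p
    b = np.array([1.0, 1.0])
    c = np.array([1.0, 1.0])
    u = np.linalg.solve(A, b)          # forward (state) solve
    lam = np.linalg.solve(A.T, c)      # adjoint solve
    return c @ u, -lam @ (dA_dp @ u)

J0, dJdp = sensitivity(0.0)

# Finite-difference check of the adjoint gradient
eps = 1e-6
J1, _ = sensitivity(eps)
fd = (J1 - J0) / eps
```

The same structure carries over when the "model" is a discretized PDE: one extra linear solve with the transposed operator gives the sensitivity with respect to any number of source parameters, which is what makes adjoint-based sensitivity analysis attractive for pollution source identification.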
A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method
NASA Astrophysics Data System (ADS)
Fu, Shubin; Gao, Kai
2017-11-01
Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerably expensive for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system and also markedly reduce the computational time.
Polycyclic aromatic hydrocarbon molecules in astrophysics
NASA Astrophysics Data System (ADS)
Rastogi, Shantanu; Pathak, Amit; Maurya, Anju
2013-06-01
Polycyclic aromatic hydrocarbon (PAH) molecules are responsible for the mid-infrared emission features. Their ubiquitous presence in almost all types of astrophysical environments and related variations in their spectral profiles make them an important tool to understand the physics and chemistry of the interstellar medium. The observed spectrum is generally a composite superposition of all different types of PAHs possible in the region. In the era of space telescopes the spectral richness of the emission features has enhanced their importance as a probe and also the need to understand the variations with respect to PAH size, type and ionic state. Quantum computational studies of PAHs have proved useful in elucidating the profile variations and put constraints on the possible types of PAHs in different environments. The study of PAHs has also significantly contributed to the problems of diffuse interstellar bands (DIBs), UV extinction and understanding the chemistry of the formation of complex organics in space. The review highlights the results of various computational models for the understanding of infrared emission features, the PAH-DIB relation, formation of prebiotics and possible impact in the understanding of far-infrared features.
A contemporary look at Hermann Hankel's 1861 pioneering work on Lagrangian fluid dynamics
NASA Astrophysics Data System (ADS)
Frisch, Uriel; Grimberg, Gérard; Villone, Barbara
2017-12-01
The present paper is a companion to the paper by Villone and Rampf (2017), titled "Hermann Hankel's On the general theory of motion of fluids, an essay including an English translation of the complete Preisschrift from 1861" together with connected documents [Eur. Phys. J. H 42, 557-609 (2017)]. Here we give a critical assessment of Hankel's work, which covers many important aspects of fluid dynamics considered from a Lagrangian-coordinates point of view: variational formulation in the spirit of Hamilton for elastic (barotropic) fluids, transport (we would now say Lie transport) of vorticity, the Lagrangian significance of Clebsch variables, etc. Hankel's work is also put in the perspective of previous and future work. Hence, the action spans about two centuries: from Lagrange's 1760-1761 Turin paper on variational approaches to mechanics and fluid mechanics problems to Arnold's 1966 founding paper on the geometrical/variational formulation of incompressible flow. The 22-year-old Hankel, who was to die 12 years later, emerges as a highly innovative master of mathematical fluid dynamics, fully deserving Riemann's assessment that his Preisschrift contains "all manner of good things."
Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.
Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C
2011-03-01
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
Financing public health: diminished funding for core needs and state-by-state variation in support.
Levi, Jeffrey; Juliano, Chrissie; Richardson, Maxwell
2007-01-01
This article documents the instability and variation in public financing of public health functions at the federal and state levels. Trust for America's Health has charted federal funding for the Centers for Disease Control and Prevention, which in turn provides a major portion of financing for state and local public health departments, and has compiled information about state-generated revenue commitments to public health activities nationwide. The federal-level analysis shows that funding has been marked by diminished support for "core" public health functions. The state-level analysis shows tremendous variation in use of state revenues to support public health functions. The combination of these factors results in very different public health capacities across the country, potentially leaving some states more vulnerable, while simultaneously posing a general threat to the nation since public health problems do not honor state borders. On the basis of this analysis, the authors suggest changes in the financing arrangements for public health, designed to assure a more stable funding stream for core public health functions and a more consistent approach to financing public health activities across the country.
A dynamic unilateral contact problem with adhesion and friction in viscoelasticity
NASA Astrophysics Data System (ADS)
Cocou, Marius; Schryve, Mathieu; Raous, Michel
2010-08-01
The aim of this paper is to study an interaction law coupling recoverable adhesion, friction and unilateral contact between two viscoelastic bodies of Kelvin-Voigt type. A dynamic contact problem with adhesion and nonlocal friction is considered and its variational formulation is written as the coupling between an implicit variational inequality and a parabolic variational inequality describing the evolution of the intensity of adhesion. The existence and approximation of variational solutions are analysed, based on a penalty method, some abstract results and compactness properties. Finally, some numerical examples are presented.
NASA Astrophysics Data System (ADS)
Farrokhabadi, A.; Mokhtari, J.; Koochi, A.; Abadyan, M.
2015-06-01
In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered and an asymptotic solution, based on a path integral approach, is applied to consider the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variational iteration method, the generalized differential quadrature method and a lumped parameter model. Various perspectives of the problem, including the comparison with the van der Waals force regime, the variation of instability parameters and effects of geometry, are addressed in the present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of the nanotweezers in the presence of the Casimir force.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samanta, Atanu; Singh, Abhishek K.; Jain, Manish
2015-08-14
The reported values of the bandgap of rutile GeO2 calculated by standard density functional theory within the local-density approximation (LDA)/generalized gradient approximation (GGA) show a wide variation (∼2 eV), whose origin remains unresolved. Here, we investigate the reasons for this variation by studying the electronic structure of rutile GeO2 using many-body perturbation theory within the GW framework. The bandgap as well as the valence bandwidth at the Γ-point of the rutile phase shows a strong dependence on volume change, which is independent of the bandgap underestimation problem of LDA/GGA. This strong dependence originates from a change in hybridization among O-p and Ge-(s and p) orbitals. Furthermore, the parabolic nature of the first conduction band along the X-Γ-M direction changes towards a linear dispersion with volume expansion.
Face Recognition Using Local Quantized Patterns and Gabor Filters
NASA Astrophysics Data System (ADS)
Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.
2015-05-01
The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. A lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition by using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields about a 20% increase in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
Mixed formulation for frictionless contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Kim, Kyun O.
1989-01-01
Simple mixed finite element models and a computational procedure are presented for the solution of frictionless contact problems. The analytical formulation is based on a form of Reissner's large rotation theory of the structure with the effects of transverse shear deformation included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the internal forces (stress resultants), the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The element characteristic arrays are obtained by using a modified form of the two-field Hellinger-Reissner mixed variational principle. The internal forces and the Lagrange multipliers are allowed to be discontinuous at interelement boundaries. The Newton-Raphson iterative scheme is used for the solution of the nonlinear algebraic equations, and the determination of the contact area and the contact pressures.
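The perturbed Lagrangian idea can be sketched on a single degree of freedom (an assumed toy problem, not the shell formulation of the paper): a spring pressed against a rigid wall, where the contact force enters as a Lagrange multiplier regularized by a perturbation parameter eps, and the stationarity conditions are solved by Newton-Raphson.

```python
import numpy as np

def contact_1dof(k=100.0, f=50.0, gap=0.3, eps=1e8, iters=5):
    """Perturbed-Lagrangian sketch of a 1-DOF frictionless contact
    problem: a spring of stiffness k loaded by force f, with a rigid
    wall at u = gap. Since f/k > gap, contact is assumed active here.
    Unknowns: displacement u and Lagrange multiplier lam (contact force).
    Stationarity of  0.5*k*u**2 - f*u + lam*(u - gap) - lam**2/(2*eps)
    gives the residual below, solved by Newton-Raphson."""
    x = np.zeros(2)                                  # x = [u, lam]
    J = np.array([[k, 1.0], [1.0, -1.0 / eps]])      # constant Jacobian
    for _ in range(iters):
        u, lam = x
        r = np.array([k * u - f + lam,               # equilibrium
                      (u - gap) - lam / eps])        # perturbed constraint
        x = x - np.linalg.solve(J, r)
    return x

u, lam = contact_1dof()
```

As eps grows, the perturbed constraint enforces u -> gap and lam -> f - k*gap (here about 20), recovering the exact contact force; a finite eps keeps the system well-conditioned, which is the practical motivation for the perturbed, rather than pure, Lagrangian treatment.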
NASA Astrophysics Data System (ADS)
Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.
2012-11-01
Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).
Asymptotic Behaviour of the Ground State of Singularly Perturbed Elliptic Equations
NASA Astrophysics Data System (ADS)
Piatnitski, Andrey L.
The ground state of a singularly perturbed nonselfadjoint elliptic operator
Conservation of wave action. [in discrete oscillating system
NASA Technical Reports Server (NTRS)
Hayes, W. D.
1974-01-01
It is pointed out that two basic principles appear in the theory of wave propagation: the existence of a phase variable, and a law, in the form of a conservation law, governing the intensity. The concepts underlying such a conservation law are explored. The waves treated are conservative in the sense that they obey equations derivable from a variational principle applied to a Lagrangian functional. A discrete oscillating system is considered. The approach employed also permits in a natural way the definition of a local action density and flux in problems in which the waves are modal or general.
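The kind of conservation law meant here can be illustrated by the standard adiabatic-invariance result for a slowly modulated oscillator (a textbook illustration under assumed slow variation, not the paper's own derivation):

```latex
% For a discrete oscillator \ddot q + \omega^2(\epsilon t)\, q = 0 with
% slowly varying frequency, the energy E = \tfrac12(\dot q^2 + \omega^2 q^2)
% is not conserved, but the action E/\omega is, to leading order:
\frac{d}{dt}\!\left(\frac{E}{\omega}\right) = O(\epsilon),
\qquad\text{and for slowly varying wave trains}\qquad
\frac{\partial}{\partial t}\!\left(\frac{E}{\omega}\right)
  + \nabla\cdot\!\left(\frac{E}{\omega}\,\mathbf{c}_g\right) = 0,
```

where the second relation is the conservation of wave action, with E/ω the action density and c_g the group velocity carrying its flux.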
NASA Astrophysics Data System (ADS)
Nowacki, A.; Walker, A. M.; Wookey, J.; Kendall, J.
2012-12-01
The core-mantle boundary (CMB) region is the site of the largest change in properties in the Earth. Moreover, the lowermost mantle above it (known as D″) shows the largest lateral variations in seismic velocity and strength of seismic anisotropy below the upper mantle. It is therefore vital to be able to accurately forward model candidate structures in the lowermost mantle with realistic sensitivity to structure and at the same frequencies at which observations are made. We use the spectral finite-element method to produce synthetic seismograms of ScS waves traversing a model of D″ anisotropy derived from mineralogical texture calculations and show that the seismic discontinuity atop the lowermost mantle varies in character laterally purely as a function of the strength and orientation of anisotropy. The lowermost mantle is widely anisotropic, shown by numerous shear wave splitting studies using waves of dominant frequency ~0.2-1 Hz. Whilst methods exist to model the finite-frequency seismic response of the lowermost mantle, most make the problem computationally efficient by imposing a certain symmetry on the problem, and of those which do not, almost none allow for completely general elasticity. Where low frequencies are simulated to reduce computational cost, it is uncertain whether waves of that frequency have comparable sensitivity to D″ structure as those observed at shorter periods. Currently, therefore, these computational limitations preclude the ability to interpret our observations fully. We present recent developments in taking a general approach to forward-modelling waves in D″. We use a modified version of SPECFEM3D_GLOBE, which uses the spectral finite-element method to model seismic wave propagation in a fully generally-elastic (i.e., 3D-varying, arbitrarily anisotropic) Earth. The calculations are computationally challenging: to approach the frequency of the observations, up to 10,000 processor cores and up to 2 TB of memory are needed.
The synthetic seismograms can be directly compared to observations of shear wave splitting or other seismic phenomena and utilise all information from the waveform to accurately interpret D″ structures and elasticity. Using a recent model of mineralogical texture in the lowermost mantle (imposing no symmetry on the type of anisotropy), we model ScS waves traversing D″ in various regions. In this case, no lateral variations in average isotropic velocity exist, though the orientation and strength of anisotropy change over a range of lengthscales (spherical harmonic degrees ≤128). We note a change in the amplitude (sometimes 0) and polarity (positive to negative) of arrivals which are reflected from the top of D″ (an arrival known as SdS) at ~300 km above the core-mantle boundary, even though no lateral variation exists between the isotropic overlying lower mantle and the anisotropic lowermost mantle. Supported by previous studies, this shows that changes only in anisotropy could be responsible for observed variations in SdS across the globe. Our approach can potentially be used to further model general elasticity at short wavelengths in any region in the Earth.
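The "arbitrarily anisotropic" elasticity referred to above is the general linear constitutive law of continuum mechanics, stated here for orientation rather than taken from the abstract:

```latex
\sigma_{ij} = c_{ijkl}\,\varepsilon_{kl},
\qquad
c_{ijkl} = c_{jikl} = c_{ijlk} = c_{klij} .
```

The intrinsic symmetries leave 21 independent elastic constants in the fully general (triclinic) case, which is what the modified solver must represent at every grid point.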
Minimizing the Sum of Completion Times with Resource Dependant Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the first is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem, up to this paper, was an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones, and GPS devices in order to prolong their battery life.
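For intuition about the objective, in the classical special case with fixed processing times (no resource allocation), the sum of completion times is minimized by sequencing jobs in shortest-processing-time (SPT) order. A minimal sketch with invented job data, not the authors' resource-dependent model:

```python
from itertools import permutations

def sum_completion_times(times):
    """Sum of completion times for jobs processed in the given order."""
    total, clock = 0, 0
    for p in times:
        clock += p          # job finishes at the running clock
        total += clock
    return total

def spt_schedule(times):
    """Shortest-processing-time-first order (optimal when times are fixed)."""
    return sorted(times)

jobs = [4, 1, 3, 2]
best = min(sum_completion_times(list(order)) for order in permutations(jobs))
# SPT order [1, 2, 3, 4] gives completions 1, 3, 6, 10, summing to 20
```

Brute-force enumeration over all orders confirms SPT attains the minimum, which is the baseline the resource-dependent variations generalize.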
Effect of mass variation on dynamics of tethered system in orbital maneuvering
NASA Astrophysics Data System (ADS)
Sun, Liang; Zhao, Guowei; Huang, Hai
2018-05-01
In orbital maneuvering, the mass variation due to fuel consumption has an obvious impact on the dynamics of the tethered system, which cannot be neglected. The contributions of the work are mainly shown in two aspects: 1) the improvement of the model; 2) the analysis of dynamic characteristics. As the mass is variable, if the derivative of the mass is considered directly in the traditional Lagrange equation, the expression of the generalized force becomes complicated. To solve this problem, the coagulated derivative is adopted in this paper; besides, the attitude dynamics equations derived here take into account the effect of mass variation and the drift of the orbital trajectory at the same time. The bifurcation phenomenon, the pendular motion angular frequency, and the amplitudes of tether vibration revealed in this paper can provide a reference for parameter and controller design in practical engineering. In the article, a dumbbell model is adopted to analyze the dynamics of the tethered system, in which the mass variation of the base satellite is fully considered. Considering practical applications, the case of orbital transfer under a transversal thrust is mainly studied. Besides, by comparison with the analytical solutions for the librational angles, the effects of mass variation on stability and librational characteristics are studied. Finally, in order to analyze the effect on vibrational characteristics, a lumped-mass model is introduced, which reveals a strong coupling of librational and vibrational characteristics.
The gourmet ape: evolution and human food preferences.
Krebs, John R
2009-09-01
This review explores the relation between evolution, ecology, and culture in determining human food preferences. The basic physiology and morphology of Homo sapiens sets boundaries to our eating habits, but within these boundaries human food preferences are remarkably varied, both within and between populations. This does not mean that variation is entirely cultural or learned, because genes and culture may coevolve to determine variation in dietary habits. This coevolution has been well elucidated in some cases, such as lactose tolerance (lactase persistence) in adults, but is less well understood in others, such as in favism in the Mediterranean and other regions. Genetic variation in bitter taste sensitivity has been well documented, and it affects food preferences (eg, avoidance of cruciferous vegetables). The selective advantage of this variation is not clear. In African populations, there is an association between insensitivity to bitter taste and the prevalence of malaria, which suggests that insensitivity may have been selected for in regions in which eating bitter plants would confer some protection against malaria. Another, more general, hypothesis is that variation in bitter taste sensitivity has coevolved with the use of spices in cooking, which, in turn, is thought to be a cultural tradition that reduces the dangers of microbial contamination of food. Our evolutionary heritage of food preferences and eating habits leaves us mismatched with the food environments we have created, which leads to problems such as obesity and type 2 diabetes.
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithm outperforms some other regularization methods. PMID:23776560
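As a toy illustration of the underlying total variation idea (plain gradient descent on a smoothed 1D TV functional; the paper's actual method is Bregman splitting with non-local regularization, which is not reproduced here):

```python
def tv_denoise_1d(y, lam=0.2, step=0.05, iters=1000, eps=1e-2):
    """Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]   # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = d / (d * d + eps) ** 0.5      # gradient of smoothed |d|
            g[i] -= lam * w
            g[i + 1] += lam * w
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

def total_variation(x):
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
smooth = tv_denoise_1d(noisy)   # small oscillations shrink, the jump survives
```

The denoised signal has strictly smaller total variation than the input while staying close to it, which is the trade-off the regularization parameter `lam` controls.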
Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caflisch, Russel
This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs, and physics problems, in particular for kinetic transport. This included the derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project concern methods for sparsity in differential equations and their applications, and the application of sparsity ideas to the kinetic transport of plasmas.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As case studies we solve four ordinary differential equations and show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
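The "square residual error" quoted above is, in the usual sense (our notation, not necessarily the authors'), the integral of the squared residual obtained when the approximation is substituted into the governing equation:

```latex
E = \int_a^b \Big( L[y_{\text{app}}](x) + N[y_{\text{app}}](x) - f(x) \Big)^2 \, dx ,
```

where \(L\) and \(N\) are the linear and nonlinear parts of the differential operator and \(y_{\text{app}}\) is the approximate solution; an exact solution gives \(E = 0\), so small \(E\) over the interval certifies accuracy without knowing the exact solution.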
NASA Astrophysics Data System (ADS)
Degtyar, V. G.; Kalashnikov, S. T.; Mokin, Yu. A.
2017-10-01
The paper considers problems of analyzing the aerodynamic properties (ADP) of reentry vehicles (RV) as blunted rotary bodies with small random surface distortions. The interrelations of the mathematical simulation of surface distortions, the selection of tools for predicting the ADPs of shaped bodies, the evaluation of different types of ADP variations, and their adaptation for dynamic problems are analyzed. The possibilities of deterministic and probabilistic approaches to the evaluation of ADP variations are considered. The practical value of the probabilistic approach is demonstrated. Examples of extremal deterministic evaluations of ADP variations for a sphere and a sharp cone are given.
Optimal Control of Evolution Mixed Variational Inclusions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx
2013-12-15
Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.
Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S
2005-06-08
In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand this and establish a general hierarchy of requirements involving treatment of geometrical approximations. These approximations are categorized into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally concerning the orientations at each binding site are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.
A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo
2017-01-01
There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241
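To make the variational Bayes idea concrete, here is a coordinate-ascent mean-field sketch for the textbook Normal model with unknown mean and precision under a Normal-Gamma prior. This is a generic illustration of "approximating the probability distributions through optimization", not the paper's G×E genomic model or its half-t priors:

```python
def cavi_normal(x, mu0=0.0, lam0=0.01, a0=1.0, b0=1.0, iters=50):
    """Mean-field VB: q(mu) = Normal, q(tau) = Gamma, updated in turn."""
    n = len(x)
    s1 = sum(x)
    s2 = sum(v * v for v in x)
    mu_n = (lam0 * mu0 + s1) / (lam0 + n)   # mean of q(mu), fixed
    a_n = a0 + (n + 1) / 2.0                # shape of q(tau), fixed
    e_tau = a0 / b0                         # initial E_q[tau]
    for _ in range(iters):
        lam_n = (lam0 + n) * e_tau          # precision of q(mu)
        e_mu2 = mu_n * mu_n + 1.0 / lam_n   # E_q[mu^2]
        b_n = b0 + 0.5 * (s2 - 2 * mu_n * s1 + n * e_mu2
                          + lam0 * (e_mu2 - 2 * mu_n * mu0 + mu0 * mu0))
        e_tau = a_n / b_n                   # update E_q[tau]
    return mu_n, e_tau

data = [4.9, 5.1, 5.0, 4.8, 5.2]
mu_hat, tau_hat = cavi_normal(data)
```

Each update is a cheap closed-form expectation rather than a Monte Carlo draw, which is where the order-of-magnitude speedup over MCMC reported in the abstract comes from.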
NASA Astrophysics Data System (ADS)
Movchan, A. A.; Sil'chenko, L. G.
2008-02-01
We solve the axisymmetric buckling problem for a circular plate made of a shape memory alloy undergoing reverse martensite transformation under the action of a compressing load, which occurs after the direct martensite transformation under the action of a generally different (tensile or compressive) load. The problem was solved without any simplifying assumptions concerning the transverse dimension of the supplementary phase transition region related to buckling. The mathematical problem was reduced to a nonlinear eigenvalue problem. An algorithm for solving this problem was proposed. It was shown that the critical buckling load under the reverse transition, which is obtained by taking into account the evolution of the phase strains, can be many times lower than the same quantity obtained under the assumption that the material behavior is elastic even for the least (martensite) values of the elastic moduli. The critical buckling force decreases with increasing modulus of the load applied at the preliminary stage of direct transition and weakly depends on whether this load was tensile or compressive. In shape memory alloys (SMA), mutually related processes of strain and direct (from the austenitic into the martensite phase) or reverse thermoelastic phase transitions may occur. The direct transition occurs under cooling and (or) an increase in stresses and is accompanied by a significant decrease (by nearly a factor of three in titanium nickelide) of the Young modulus. If the direct transition occurs under the action of stresses with nonzero deviator, then it is accompanied by accumulation of macroscopic phase strains, whose intensity may reach 8%. Under the reverse transition, which occurs under heating and (or) unloading, the moduli increase and the accumulated strain is removed.
For plates compressed in their plane, in the case of uniform temperature distribution over the thickness, one can separate trivial processes under which the strained plate remains plane and the phase ratio has a uniform distribution over the thickness. For sufficiently high compressing loads, the trivial process of uniform compression may become unstable in the sense that, for small perturbations of the plate deflection, temperature, the phase ratio, or the load, the difference between the corresponding perturbed process and the unperturbed process may be significant. The results of several experiments concerning the buckling of SMA elements are given in [1, 2], and the statement and solution of the corresponding boundary value problems can be found in [3-11]. The experimental studies [2] and several analytic solutions obtained for the Shanley column [3, 4], rods [5-7], rectangular plates under direct [8] and reverse [9] transitions showed that the processes of thermoelastic phase transitions can significantly (by several times) decrease the critical buckling loads compared with their elastic values calculated for the less rigid martensite state of the material. Moreover, buckling does not occur in the one-phase martensite state in which the elastic moduli are minimal but in the two-phase state in which the values of the volume fractions of the austenitic and martensite phase are approximately equal to each other. This fact is most astonishing for buckling, studied in the present paper, under the reverse transition in which the Young modulus increases approximately half as much from the beginning of the phase transition to the moment of buckling. In [3-9] and in the present paper, the static buckling criterion is used. Following this criterion, the critical load is defined to be the load such that a nontrivial solution of the corresponding quasistatic problem is possible under the action of this load. 
If, in the problems of stability of rods and SMA plates, small perturbations of the external load are added to small perturbations of the deflection (the critical force is independent of the amplitude of the latter), then the critical forces vary depending on the value of perturbations of the external load [5, 8, 9]. Thus, in the case of small perturbations of the load, the problem of stability of SMA elements becomes indeterminate. The solution of the stability problem for SMA elements also depends on whether the small perturbations of the phase ratio and the phase strain tensor are taken into account. According to this, the problem of stability of SMA elements can be solved in the framework of several statements (concepts, hypotheses) which differ in the set of quantities whose perturbations are admissible (taken into account) in the process of solving the problem. The variety of these statements applied to the problem of buckling of SMA elements under direct martensite transformation is briefly described in [4, 5]. But, in the problem of buckling under the reverse transformation, some of these statements must be changed. The main question which we should answer when solving the problem of stability of SMA elements is whether small perturbations of the phase ratio (the volume fraction of the martensite phase q) are taken into account, because this choice significantly affects the results of solving the stability problem. If, under the transition to the adjacent form of equilibrium, the phase ratio of all points of the body is assumed to remain the same, then we deal with the "fixed phase ratio" concept. The opposite approach can be classified as the "supplementary phase transition" concept (which occurs under the transition to the adjacent form of equilibrium). It should be noted that, since SMA have temperature hysteresis, the phase ratio in SMA can undergo only one-sided small variations.
But if we deal with buckling under the reverse transformation, then the variation in the volume fraction of the martensite phase cannot be positive. The phase ratio is not an independent variable, like loads or temperature, but, due to the constitutive relations, its variations occur together with the temperature variations and, in the framework of connected models for a majority of SMA, together with variations in the actual stresses. Therefore, the presence or absence of variations in q is determined by the presence or absence of variations in the temperature, deflection, and load, as well as by the system of constitutive relations used in this particular problem. In the framework of unconnected models which do not take the influence of actual stresses on the phase ratio into account, the "fixed phase ratio" concept corresponds to the case of absence of temperature variations. The variations in the phase ratio may also be absent in connected models in the case of specially chosen values of variations in the temperature and (or) in the external load, as well as in the case of SMA of CuMn type, for which the influence of the actual stresses on the phase composition is absent or negligible. In the framework of the "fixed phase ratio" hypothesis, the stability problem for SMA elements has a solution coinciding in form with the solution of the corresponding elastic problem, with the elastic moduli replaced by the corresponding functions of the phase ratio. In the framework of the "supplementary phase transition" concept, the result of solving the stability problem essentially depends on whether the small perturbations of the external loads are taken into account in the process of solving the problem.
The point is that, when solving the problem in the connected setting, the supplementary phase transition region occupies, in general, not the entire cross-section of the plate but only part of it, and the location of the boundary of this region depends on the existence and the value of these small perturbations. More precisely, the existence of arbitrarily small perturbations of the actual load can result in finite changes of the configuration of the supplementary phase transition region and hence in finite change of the critical values of the load. Here we must distinguish between the "fixed load" hypothesis, where no perturbations of the external loads are admitted, and the "variable load" hypothesis in the opposite case. The condition that there are no variations in the external loads implies additional equations for determining the boundary of the supplementary phase transition region. If the "supplementary phase transition" concept and the "fixed load" concept are used together, then the solution of the stability problem of SMA is uniquely determined in the same sense as the solution of the elastic stability problem under the static approach. In the framework of the "variable load" concept, the result of solving the stability problem for SMA ceases to be unique. But one can find the upper and lower bounds for the critical forces, which correspond to the limiting cases: the upper bound corresponds to the critical load determined in the framework of the "fixed phase ratio" concept (total absence of the supplementary phase transition), and the lower bound corresponds to the case where the entire cross-section of the plate experiences the supplementary phase transition. The first version does not need any additional name, and the second version can be called the "all-round supplementary phase transition" hypothesis.
In the present paper, the above concepts are illustrated by examples of solving problems about axisymmetric buckling of a circular freely supported or rigidly fixed plate experiencing reverse martensite transformation under the action of an external force uniformly distributed over the contour. We find analytic solutions in the framework of all the above-listed statements except for the case of free support in the "fixed load" concept, for which we obtain a numerical solution.
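For reference, the elastic benchmark against which these SMA critical loads are compared is the classical axisymmetric buckling load of a circular plate under uniform edge compression (a standard result, not derived in the abstract):

```latex
N_{\text{cr}} = k \,\frac{D}{R^2},
\qquad
D = \frac{E h^3}{12\,(1-\nu^2)},
```

where \(R\) is the plate radius, \(h\) its thickness, \(D\) the flexural rigidity, and \(k\) depends on the boundary conditions (approximately \(k \approx 4.20\) for a simply supported edge and \(k \approx 14.68\) for a clamped edge at \(\nu = 0.3\)). The abstract's point is that phase-transition effects can push the true critical load far below the value this formula gives even with the lowest (martensite) modulus \(E\).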
General Tricomi-Rassias problem and oblique derivative problem for generalized Chaplygin equations
NASA Astrophysics Data System (ADS)
Wen, Guochun; Chen, Dechang; Cheng, Xiuzhen
2007-09-01
Many authors have discussed the Tricomi problem for some second order equations of mixed type, which has important applications in gas dynamics. In particular, Bers proposed the Tricomi problem for Chaplygin equations in multiply connected domains [L. Bers, Mathematical Aspects of Subsonic and Transonic Gas Dynamics, Wiley, New York, 1958]. And Rassias proposed the exterior Tricomi problem for mixed equations in a doubly connected domain and proved the uniqueness of solutions for the problem [J.M. Rassias, Lecture Notes on Mixed Type Partial Differential Equations, World Scientific, Singapore, 1990]. In the present paper, we discuss the general Tricomi-Rassias problem for generalized Chaplygin equations. This is a general oblique derivative problem that includes the exterior Tricomi problem as a special case. We first give the representation of solutions of the general Tricomi-Rassias problem, and then prove the uniqueness and existence of solutions for the problem by a new method. In this paper, we shall also discuss another general oblique derivative problem for generalized Chaplygin equations.
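For orientation, the classical Chaplygin equation of mixed type, of which the equations studied here are a generalization, reads:

```latex
K(y)\, u_{xx} + u_{yy} = 0,
\qquad
K(0) = 0, \quad y\,K(y) > 0 \ \text{for}\ y \neq 0,
```

so the equation is elliptic in the half-plane \(y > 0\) (subsonic flow) and hyperbolic in \(y < 0\) (supersonic flow), with the sonic line \(y = 0\) separating the two regimes; the Tricomi equation is the special case \(K(y) = y\).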
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
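The central object of the SV method is the bilinear Schwinger variational functional for the transition matrix, quoted here in its standard form (notation ours, not taken from the chapter):

```latex
[T_{fi}] =
\langle \phi_f | V | \psi_i^{(+)} \rangle
+ \langle \psi_f^{(-)} | V | \phi_i \rangle
- \langle \psi_f^{(-)} | V - V G_0^{(+)} V | \psi_i^{(+)} \rangle ,
```

where \(\phi_i, \phi_f\) are free (unperturbed) states, \(G_0^{(+)}\) is the free-particle Green's function, and the functional is stationary with respect to independent variations of the trial scattering states \(\psi_i^{(+)}\) and \(\psi_f^{(-)}\) about the exact solutions of the Lippmann-Schwinger equation; this stationarity is what makes trial-function expansions efficient for e-molecule collisions.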
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
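For readers unfamiliar with the "variation explained" statistic quoted above, here is a self-contained one-variable least-squares fit and its R². The data are invented for illustration, not FOQA measurements:

```python
def fit_line(xs, ys):
    """Closed-form least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Fraction of variance in y explained by the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x with small noise
a, b = fit_line(xs, ys)
```

An R² of 0.85, as in the regression models above, means 15% of the variance in fuel flow is left unexplained; the tree and neural-network correlation coefficients of .91-.99 correspond to progressively smaller residual variance.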
Smoking Initiation and the Iron Law of Demand *
Lillard, Dean R.; Molloy, Eamon; Sfekas, Andrew
2012-01-01
We show, with three longitudinal datasets, that cigarette taxes and prices affect smoking initiation decisions. Previous longitudinal studies have found somewhat mixed results, but generally have not found initiation to be sensitive to increases in price or tax. We show that the lack of statistical significance in previous studies may be at least partially attributed to a lack of policy variation in the time periods studied, truncated behavioral windows, or mis-assignment of price and tax rates in retrospective data (which occurs when one has no information about respondents’ prior state or region of residence in retrospective data). We show how each factor may affect the estimation of initiation models. Our findings suggest several problems that are applicable to initiation behavior generally, particularly those for which individuals’ responses to policy changes may be noisy or small in magnitude. PMID:23220458
Evolutionary cell biology: two origins, one objective.
Lynch, Michael; Field, Mark C; Goodson, Holly V; Malik, Harmit S; Pereira-Leal, José B; Roos, David S; Turkewitz, Aaron P; Sazer, Shelley
2014-12-02
All aspects of biological diversification ultimately trace to evolutionary modifications at the cellular level. This central role of cells frames the basic questions as to how cells work and how cells come to be the way they are. Although these two lines of inquiry lie respectively within the traditional provenance of cell biology and evolutionary biology, a comprehensive synthesis of evolutionary and cell-biological thinking is lacking. We define evolutionary cell biology as the fusion of these two eponymous fields with the theoretical and quantitative branches of biochemistry, biophysics, and population genetics. The key goals are to develop a mechanistic understanding of general evolutionary processes, while specifically infusing cell biology with an evolutionary perspective. The full development of this interdisciplinary field has the potential to solve numerous problems in diverse areas of biology, including the degree to which selection, effectively neutral processes, historical contingencies, and/or constraints at the chemical and biophysical levels dictate patterns of variation for intracellular features. These problems can now be examined at both the within- and among-species levels, with single-cell methodologies even allowing quantification of variation within genotypes. Some results from this emerging field have already had a substantial impact on cell biology, and future findings will significantly influence applications in agriculture, medicine, environmental science, and synthetic biology.
Pérez-Del-Olmo, A; Montero, F E; Fernández, M; Barrett, J; Raga, J A; Kostadinova, A
2010-10-01
We address the effect of spatial scale and temporal variation on model generality when forming predictive models for fish assignment from variable biological markers (parasite community data) using a new data mining approach, Random Forests (RF). Models were implemented for a fish host-parasite system sampled along the Mediterranean and Atlantic coasts of Spain and were validated using independent datasets. We considered 2 basic classification problems in evaluating the importance of variations in parasite infracommunities for assignment of individual fish to their populations of origin: a multiclass task (2-5 population models, using 2 seasonal replicates from each of the populations) and a 2-class task (using 4 seasonal replicates from 1 Atlantic and 1 Mediterranean population each). The main results are that (i) RF are well suited for multiclass population assignment using parasite communities in non-migratory fish; (ii) RF provide an efficient means for model cross-validation on the baseline data, which allows sample size limitations in parasite tag studies to be tackled effectively; (iii) the performance of RF is dependent on the complexity and spatial extent/configuration of the problem; and (iv) the development of predictive models is strongly influenced by seasonal change, which stresses the importance of both temporal replication and model validation in parasite tagging studies.
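The assignment workflow described above (a Random Forests classifier with cross-validation on baseline data) can be sketched as follows; this is a minimal illustration assuming scikit-learn, with synthetic Poisson counts standing in for the parasite infracommunity data, not the authors' dataset.

```python
# Hedged sketch: multiclass population assignment with Random Forests.
# The "parasite counts" below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_populations, n_fish, n_parasite_taxa = 3, 60, 8

# Each hypothetical population gets its own mean infracommunity profile.
profiles = rng.uniform(0.5, 5.0, size=(n_populations, n_parasite_taxa))
X = np.vstack([rng.poisson(profiles[p], size=(n_fish, n_parasite_taxa))
               for p in range(n_populations)])
y = np.repeat(np.arange(n_populations), n_fish)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validation on the baseline data, as the study emphasises.
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

The same pattern extends directly to the 2-class Atlantic/Mediterranean task by changing the labels.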
Statistical mapping of count survey data
Royle, J. Andrew; Link, W.A.; Sauer, J.R.; Scott, J. Michael; Heglund, Patricia J.; Morrison, Michael L.; Haufler, Jonathan B.; Wall, William A.
2002-01-01
We apply a Poisson mixed model to the problem of mapping (or predicting) bird relative abundance from counts collected from the North American Breeding Bird Survey (BBS). The model expresses the logarithm of the Poisson mean as a sum of a fixed term (which may depend on habitat variables) and a random effect which accounts for remaining unexplained variation. The random effect is assumed to be spatially correlated, thus providing a more general model than the traditional Poisson regression approach. Consequently, the model is capable of improved prediction when data are autocorrelated. Moreover, formulation of the mapping problem in terms of a statistical model facilitates a wide variety of inference problems which are cumbersome or even impossible using standard methods of mapping. For example, assessment of prediction uncertainty, including the formal comparison of predictions at different locations, or through time, using the model-based prediction variance is straightforward under the Poisson model (not so with many nominally model-free methods). Also, ecologists may generally be interested in quantifying the response of a species to particular habitat covariates or other landscape attributes. Proper accounting for the uncertainty in these estimated effects is crucially dependent on specification of a meaningful statistical model. Finally, the model may be used to aid in sampling design, by modifying the existing sampling plan in a manner which minimizes some variance-based criterion. Model fitting under this model is carried out using a simulation technique known as Markov Chain Monte Carlo. Application of the model is illustrated using Mourning Dove (Zenaida macroura) counts from Pennsylvania BBS routes. We produce both a model-based map depicting relative abundance, and the corresponding map of prediction uncertainty. We briefly address the issue of spatial sampling design under this model. 
Finally, we close with some discussion of mapping in relation to habitat structure. Although our models were fit in the absence of habitat information, the resulting predictions show a strong inverse relation with a map of forest cover in the state, as expected. Consequently, the results suggest that the correlated random effect in the model is broadly representing ecological variation, and that BBS data may be generally useful for studying bird-habitat relationships, even in the presence of observer errors and other widely recognized deficiencies of the BBS.
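The fixed-effect part of the count model above (log of the Poisson mean as a linear predictor) can be sketched with a plain Poisson regression fitted by iteratively reweighted least squares; this is a hedged, numpy-only illustration with a synthetic habitat covariate, omitting the spatially correlated random effect and MCMC machinery of the full model.

```python
# Minimal sketch: Poisson regression with a log link, fitted by IRLS.
# "forest_cover" is a hypothetical covariate; coefficients are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 500
forest_cover = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), forest_cover])
beta_true = np.array([1.0, -1.5])        # counts decline with forest cover
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):                      # Newton / IRLS iterations
    mu = np.exp(X @ beta)
    W = X.T * mu                         # X^T diag(mu)
    beta = beta + np.linalg.solve(W @ X, X.T @ (y - mu))

print(np.round(beta, 1))
```

The spatial random effect would enter as an additional term in the linear predictor, with its correlation structure estimated rather than fixed.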
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
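The double-well tracking problem described above can be sketched with a scalar extended Kalman filter; parameters (time step, forcing amplitude, observation noise) are illustrative choices, not the paper's.

```python
# Hedged sketch: EKF tracking the stochastically forced double well
# dx = (x - x^3) dt + sigma dW, with noisy observations of x.
import numpy as np

rng = np.random.default_rng(2)
dt, sigma, robs, nsteps = 0.01, 0.3, 0.1, 2000

# Truth: Euler-Maruyama integration of the SDE.
x = 1.0
truth, obs = [], []
for _ in range(nsteps):
    x += (x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    truth.append(x)
    obs.append(x + robs * rng.standard_normal())

# EKF: linearize the drift about the current estimate.
xh, P, Q, R = -1.0, 1.0, sigma**2 * dt, robs**2
est = []
for z in obs:
    F = 1.0 + (1.0 - 3.0 * xh**2) * dt   # tangent-linear model
    xh += (xh - xh**3) * dt              # forecast step
    P = F * P * F + Q
    K = P / (P + R)                      # Kalman gain and update
    xh += K * (z - xh)
    P *= (1.0 - K)
    est.append(xh)

rmse = np.sqrt(np.mean((np.array(est) - np.array(truth))**2))
print(round(rmse, 3))
```

With frequent, accurate observations the filter tracks the truth; as the abstract notes, its ability to follow regime transitions degrades as observations become sparser or noisier relative to the stochastic forcing.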
Does the Newtonian Gravity "Constant" G Vary?
NASA Astrophysics Data System (ADS)
Noerdlinger, Peter D.
2015-08-01
A series of measurements of Newton's gravity constant, G, dating back as far as 1893, yielded widely varying values, the variation greatly exceeding the stated error estimates (Gillies, 1997; Quinn, 2000; Mohr et al., 2008). The value of G is usually said to be unrelated to other physics, but we point out that the 8B Solar Neutrino Rate ought to be very sensitive. Improved pulsar timing could also help settle the issue as to whether G really varies. We claim that the variation in measured values over time (1893-2014 C.E.) is a more serious problem than the failure of the error bars to overlap; it appears that challenging or adjusting the error bars hardly masks the underlying disagreement in central values. We have assessed whether variations in the gravitational potential due to (for example) local dark matter (DM) could explain the variations. We find that the required potential fluctuations could transiently accelerate the Solar System and nearby stars to speeds in excess of the Galactic escape speed. Previous theories for the variation in G generally deal with supposed secular variation on a cosmological timescale, or very rapid oscillations whose envelope changes on that scale (Steinhardt and Will 1995). Therefore, these analyses fail to support variations on the timescale of years or spatial scales of order parsecs, which would be required by the data for G. We note that true variations in G would be associated with variations in clock rates (Derevianko and Pospelov 2014; Loeb and Maoz 2015), which could mask changes in orbital dynamics. Geringer-Sameth et al. (2014) studied γ-ray emission from the nearby Reticulum dwarf galaxy, which is expected to be free of "ordinary" (stellar, black hole) γ-ray sources, and found evidence for DM decay. Bernabei et al. (2003) also found evidence for DM penetrating deep underground at Gran Sasso. If, indeed, variations in G can be tied to variations in gravitational potential, we have a new tool to assess the DM density.
A generalized Poisson solver for first-principles device simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
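The preconditioning idea above can be sketched in one dimension: a variable-coefficient operator d/dx(ε du/dx) solved by conjugate gradients with the constant-coefficient Laplacian as preconditioner. This is a hedged toy illustration assuming SciPy; the dielectric profile and grid are invented, and the real solver is plane-wave based rather than finite-difference.

```python
# 1D sketch: generalized Poisson operator preconditioned by the
# standard Laplacian inside CG. Homogeneous Dirichlet boundaries.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
eps = 1.0 + 4.0 * np.exp(-((x - 0.5) / 0.1) ** 2)   # smooth dielectric bump

# Conservative (flux-form) discretization with interface coefficients.
eh = 0.5 * (eps[:-1] + eps[1:])
main = np.empty(n)
main[0] = eps[0] + eh[0]
main[-1] = eh[-1] + eps[-1]
main[1:-1] = eh[:-1] + eh[1:]
A = sp.diags([-eh, main, -eh], [-1, 0, 1]) / h**2

# Constant-coefficient Laplacian used as the preconditioner.
L = sp.diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1]) / h**2
M = spla.LinearOperator((n, n), matvec=spla.factorized(L.tocsc()))

f = np.ones(n)
u, info = spla.cg(A, f, M=M)
print(info, float(np.max(np.abs(A @ u - f))))
```

Because ε is a smooth perturbation of a constant, the Laplace-preconditioned operator is well conditioned and CG converges quickly.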
Computation of three-dimensional nozzle-exhaust flow fields with the GIM code
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Anderson, P. G.
1978-01-01
A methodology is introduced for constructing numerical analogs of the partial differential equations of continuum mechanics. A general formulation is provided which permits classical finite element and many of the finite difference methods to be derived directly. The approach, termed the General Interpolants Method (GIM), combines the best features of finite element and finite difference methods. A quasi-variational procedure is used to formulate the element equations, to introduce boundary conditions into the method and to provide a natural assembly sequence. A derivation is given in terms of general interpolation functions from this procedure. Example computations for transonic and supersonic flows in two and three dimensions are given to illustrate the utility of GIM. A three-dimensional nozzle-exhaust flow field is solved including interaction with the freestream and a coupled treatment of the shear layer. Potential applications of the GIM code to a variety of computational fluid dynamics problems are then discussed in terms of existing capability or by extension of the methodology.
NASA Astrophysics Data System (ADS)
Singh, Gaurav; Krishnan, Girish
2017-06-01
Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exists. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirement.
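The volume-maximization viewpoint has a well-known closed-form instance for the McKibben actuator mentioned above. Using standard McKibben notation (not taken from the paper): inextensible fibers of length b wound n times at angle θ to the axis give cylinder length L = b cos θ and diameter D = b sin θ/(nπ), and the enclosed volume is extremal at the classical "magic angle":

```latex
% Enclosed volume of a cylindrical McKibben muscle versus fiber angle:
V(\theta) = \frac{\pi D^2 L}{4}
          = \frac{b^3}{4\pi n^2}\,\sin^2\theta\,\cos\theta ,
\qquad
\frac{dV}{d\theta} = 0
\;\Longrightarrow\; \cos^2\theta = \tfrac{1}{3},
\quad \theta \approx 54.7^{\circ} .
```

Contraction stops as the fiber angle approaches this equilibrium value, which is the behavior the variational model generalizes to arbitrary fiber layouts.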
A Duality Theory for Non-convex Problems in the Calculus of Variations
NASA Astrophysics Data System (ADS)
Bouchitté, Guy; Fragalà, Ilaria
2018-07-01
We present a new duality theory for non-convex variational problems, under possibly mixed Dirichlet and Neumann boundary conditions. The dual problem reads nicely as a linear programming problem, and our main result states that there is no duality gap. Further, we provide necessary and sufficient optimality conditions, and we show that our duality principle can be reformulated as a min-max result which is quite useful for numerical implementations. As an example, we illustrate the application of our method to a celebrated free boundary problem. The results were announced in Bouchitté and Fragalà (C R Math Acad Sci Paris 353(4):375-379, 2015).
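The "no duality gap" property that the paper establishes for its linear-programming dual can be illustrated on a finite-dimensional toy LP, where strong duality is classical. This is a hedged sketch assuming SciPy; the LP data are arbitrary and unrelated to the paper's variational problems.

```python
# Toy illustration of zero duality gap for a linear program.
import numpy as np
from scipy.optimize import linprog

# Primal: min c^T x  s.t.  A x >= b, x >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Dual: max b^T y  s.t.  A^T y <= c, y >= 0  (posed as a minimization)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

gap = primal.fun - (-dual.fun)
print(round(primal.fun, 6), round(gap, 6))
```

For LPs the gap vanishes whenever both problems are feasible; the paper's contribution is establishing the analogous statement for a class of non-convex variational problems.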
Lázaro, A.; Totland, Ø.
2014-01-01
Background and Aims The pollinator-mediated stabilizing selection hypothesis suggests that the specialized pollination system of zygomorphic flowers might cause stabilizing selection, reducing their flower size variation compared with actinomorphic flowers. However, the degree of ecological generalization and of dependence on pollinators varies greatly among species of both flower symmetry types and this may also affect flower size variation. Methods Data on 43 species from two contrasting communities (one alpine and one lowland community) were used to test the relationships and interactions between flower size phenotypic variation, floral symmetry, ecological pollination generalization and species' dependence on pollinators. Key Results Contrary to what was expected, higher flower size variation was found in zygomorphic than in actinomorphic species in the lowland community, and no difference in flower size variation was found between symmetry types in the alpine community. The relationship between floral symmetry and flower size variation depended on ecological generalization and species' dependence on pollinators, although the influence of ecological generalization was only detected in the alpine community. Zygomorphic species that were highly dependent on pollinators and that were ecologically specialized were less variable in flower size than ecologically generalist and selfing zygomorphic species, supporting the pollinator-mediated stabilizing selection hypothesis. However, these relationships were not found in actinomorphic species, probably because they are not dependent on any particular pollinator for efficient pollination and therefore their flower size always shows moderate levels of variation. Conclusions The study suggests that the relationship between flower size variation and floral symmetry may be influenced by population-dependent factors, such as ecological generalization and species' dependence on pollinators. PMID:24838838
Predictors of childhood immunization completion in a rural population.
Gore, P; Madhavan, S; Curry, D; McClung, G; Castiglia, M; Rosenbluth, S A; Smego, R A
1999-04-01
Despite the availability of effective vaccines, immunization rates among two-year-old children continue to be low in many areas of the United States including rural West Virginia. The goal of this study was to identify barriers to childhood immunization in rural West Virginia and determine factors that were important in the completion of the childhood immunization schedule. A telephone survey was used to collect data from a randomly selected sample of 316 mothers of two-year-olds from 18 rural counties of West Virginia. Results indicated that two-thirds (65%) of the children in the study sample had completed their recommended immunizations by two years of age. Immunization barriers identified in this study include: living in health professional shortage areas, lack of health insurance, negative beliefs and attitudes regarding childhood immunizations, problems accessing the immunization clinic, and a perception of inadequate support from the immunization clinic. Results of the structural equation modeling, using LISREL-8, indicated that 20% of the variation in immunization completion (R2 = 0.197) was explained by attitude towards immunization and perceived support received from the immunization clinic. Furthermore, 42% of the variation in attitude towards immunization (R2 = 0.419) was explained by immunization-related beliefs, and 28% of the variation in immunization-related beliefs (R2 = 0.277) was explained by general problems faced during immunization and perceived clinic support. The study concluded that positive immunization-related beliefs and attitudes, support from the immunization clinic, and ease of the immunization seeking process are important factors in the timely completion of the childhood immunization schedule.
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
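The core of the calibration idea above, estimating the quiet-Sun centre-to-limb variation (CLV) and dividing it out, can be sketched on a synthetic disc. This is a hedged illustration: the limb-darkening profile and annular-median estimator below are generic stand-ins, not the paper's actual processing pipeline.

```python
# Sketch: estimate the CLV from annular medians and compensate it,
# leaving a flat (contrast) image. The solar disc is synthetic.
import numpy as np

n = 201
yy, xx = np.mgrid[:n, :n]
r = np.hypot(xx - n // 2, yy - n // 2) / (n // 2)   # 0 at centre, 1 at limb
inside = r < 1.0

mu = np.sqrt(np.clip(1.0 - r**2, 0.0, 1.0))
clv_true = 0.4 + 0.6 * mu                           # illustrative limb darkening
image = np.where(inside, clv_true, 0.0)

# Median intensity in thin annuli approximates the quiet-Sun CLV.
nbins = 50
bins = np.minimum((r * nbins).astype(int), nbins - 1)
clv_est = np.array([np.median(image[inside & (bins == k)])
                    for k in range(nbins)])

# Photometric compensation: divide out the estimated CLV.
flat = np.where(inside, image / clv_est[bins], 0.0)
print(round(float(flat[inside].std()), 3))
```

On real spectroheliograms the quiet-Sun profile must first be separated from plage and network before the annular estimate is trustworthy, which is where most of the method's complexity lies.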
Effect of nonzero surface admittance on receptivity and stability of compressible boundary layer
NASA Technical Reports Server (NTRS)
Choudhari, Meelan
1994-01-01
The effect of small-amplitude short-scale variations in surface admittance on the acoustic receptivity and stability of two-dimensional compressible boundary layers is examined. In the linearized limit, the two problems are shown to be related both physically and mathematically. This connection between the two problems is used, in conjunction with some previously reported receptivity results, to infer the modification of stability properties due to surface permeability. Numerical calculations are carried out for a self-similar flat-plate boundary layer at subsonic and low supersonic speeds. Variations in mean suction velocity at the perforated admittance surface can also induce receptivity to an acoustic wave. For a subsonic boundary layer, the dependence of admittance-induced receptivity on the acoustic-wave orientation is significantly different from that of the receptivity produced via mean suction variation. The admittance-induced receptivity is generally independent of the angle of acoustic incidence, except in a relatively narrow range of upstream-traveling waves for which the receptivity becomes weaker. However, this range of angles is precisely that for which the suction-induced receptivity tends to be large. At supersonic Mach numbers, the admittance-induced receptivity to slow acoustic modes is relatively weaker than that in the case of the fast acoustic modes. We also find that purely real values for the surface admittance tend to have a destabilizing effect on the evolution of an instability wave over a slightly permeable surface. The limits on the validity of the linearized approximation are also assessed in one specific case.
Forms of null Lagrangians in field theories of continuum mechanics
NASA Astrophysics Data System (ADS)
Kovalev, V. A.; Radaev, Yu. N.
2012-02-01
The divergence representation of a null Lagrangian that is regular in a star-shaped domain is used to obtain its general expression containing field gradients of order ≤ 1 in the case of spacetime of arbitrary dimension. It is shown that for a static three-component field in the three-dimensional space, a null Lagrangian can contain up to 15 independent elements in total. The general form of a null Lagrangian in the four-dimensional Minkowski spacetime is obtained (the number of physical field variables is assumed arbitrary). A complete theory of the null Lagrangian for the n-dimensional spacetime manifold (including the four-dimensional Minkowski spacetime as a special case) is given. Null Lagrangians are then used as a basis for solving an important variational problem of an integrating factor. This problem involves searching for factors that depend on the spacetime variables, field variables, and their gradients and, for a given system of partial differential equations, ensure the equality between the scalar product of a vector multiplier by the system vector and some divergence expression for arbitrary field variables and, hence, allow one to formulate a divergence conservation law on solutions to the system.
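The property underlying the divergence representation discussed above can be stated compactly (in standard field-theoretic notation, not the authors'): a first-order Lagrangian is null precisely when it is a total divergence, so its Euler-Lagrange operator vanishes identically for every field configuration.

```latex
% Null Lagrangian as a divergence, with identically vanishing
% Euler-Lagrange equations for arbitrary fields \varphi^a:
L\bigl(x,\varphi,\partial\varphi\bigr) = \partial_\mu P^\mu(x,\varphi)
\quad\Longrightarrow\quad
\mathcal{E}_a(L) \equiv
\frac{\partial L}{\partial \varphi^a}
- \partial_\mu \frac{\partial L}{\partial(\partial_\mu \varphi^a)} = 0 .
```

Counting the independent choices of P compatible with the field content is what yields the 15 independent elements cited for a static three-component field in three dimensions.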
Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method
NASA Astrophysics Data System (ADS)
D'Ambra, Pasqua; Tartaglione, Gaetano
2015-03-01
Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of those objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.
A time-parallel approach to strong-constraint four-dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Rao, Vishwas; Sandu, Adrian
2016-05-01
A parallel-in-time algorithm based on an augmented Lagrangian approach is proposed to solve four-dimensional variational (4D-Var) data assimilation problems. The assimilation window is divided into multiple sub-intervals, which allows parallelization of the cost function and gradient computations. The solutions to the continuity equations across interval boundaries are added as constraints. The augmented Lagrangian approach leads to a different formulation of the variational data assimilation problem than the weakly constrained 4D-Var. A combination of serial and parallel 4D-Vars to increase performance is also explored. The methodology is illustrated on data assimilation problems involving the Lorenz-96 and the shallow water models.
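The window-splitting idea can be sketched with the Lorenz-96 model: each sub-interval has its own initial state as a control variable and can be propagated independently, while continuity mismatches at the interval boundaries enter the augmented Lagrangian as penalized constraints. This is a hedged toy illustration; sizes, the penalty weight, and the random states are arbitrary, and the optimization loop itself is omitted.

```python
# Sketch: Lorenz-96 sub-interval propagation and the continuity
# mismatch that an augmented Lagrangian 4D-Var would penalize.
import numpy as np

def l96_tendency(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F (cyclic indices)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step_rk4(x, dt=0.05):
    k1 = l96_tendency(x)
    k2 = l96_tendency(x + dt / 2 * k1)
    k3 = l96_tendency(x + dt / 2 * k2)
    k4 = l96_tendency(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def run(x0, nsteps):
    x = x0.copy()
    for _ in range(nsteps):
        x = step_rk4(x)
    return x

rng = np.random.default_rng(3)
n, nsub, steps = 40, 4, 10
# One control variable per sub-interval: its initial state.
x0s = [rng.standard_normal(n) for _ in range(nsub)]

# Sub-intervals propagate independently (a parallelizable loop);
# continuity constraints couple consecutive intervals.
ends = [run(x0, steps) for x0 in x0s]
mismatch = sum(np.sum((x0s[k + 1] - ends[k]) ** 2) for k in range(nsub - 1))
mu = 10.0                                  # illustrative penalty weight
augmented_term = 0.5 * mu * mismatch
print(augmented_term > 0)
```

At convergence of the outer augmented Lagrangian iterations the mismatches are driven toward zero, recovering a single continuous trajectory over the full window.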
A variational approach to the study of capillary phenomena
NASA Technical Reports Server (NTRS)
Emmer, M.; Gonzalez, E.; Tamanini, I.
1982-01-01
The problem of determining the free surface of a liquid in a capillary tube, and of a liquid drop, sitting first on a horizontal plane and then on more general surfaces, is considered. With some modifications, the method applies to the study of pendent drops and of rotating drops as well. The standard capillary problem, i.e. the determination of the free surface of a liquid in a thin tube of general cross section, which results from the simultaneous action of surface tension, boundary adhesion and gravity, is discussed. It turns out that in this case the existence of the solution surface depends heavily on the validity of a simple geometric condition on the mean curvature of the boundary curve of the cross section of the capillary tube. Some particular examples of physical interest are also discussed. Liquid drops sitting on or hanging from a fixed horizontal plane are discussed. The symmetry of the solutions (which can actually be proved as a consequence of a general symmetrization argument) now plays the chief role in deriving both the existence and the regularity of energy-minimizing configurations. When symmetry fails (this is the case, for example, when the contact angle between the drop and the plate is not constant, or when the supporting surface is not itself symmetric), then more sophisticated methods must be used. Extensions in this direction are outlined.
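The standard capillary problem referred to above has a classical analytic statement (given here in the standard notation of capillarity theory, not reproduced from the paper): the height u of the free surface over the cross section Ω satisfies a prescribed-mean-curvature equation with a contact-angle boundary condition.

```latex
% Capillary surface u over the tube cross section \Omega, with
% capillarity constant \kappa and contact angle \gamma on \partial\Omega:
\operatorname{div}\!\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\right)
  = \kappa\, u \quad \text{in } \Omega,
\qquad
\frac{\nabla u \cdot \nu}{\sqrt{1+|\nabla u|^{2}}} = \cos\gamma
  \quad \text{on } \partial\Omega .
```

The geometric condition on the mean curvature of the boundary curve mentioned in the abstract governs whether this boundary-value problem admits a bounded solution.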
46 CFR 111.01-17 - Voltage and frequency variations.
Code of Federal Regulations, 2013 CFR
2013-10-01
§ 111.01-17 Voltage and frequency variations (Shipping; Coast Guard, Department of Homeland Security; Electrical Engineering; Electric Systems, General Requirements). Unless otherwise stated, electrical equipment must function at variations of at least ±5 percent of rated frequency...
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delaunay analysis.
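The Euclidean-distance-map idea for separating touching droplets can be sketched in 2-D (an editor-added illustration, not the paper's 3-D pipeline): the distance transform of a binary image peaks at droplet centres, so local maxima of the map discriminate overlapping spheres. The image size, droplet radii, and filter parameters are invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Two overlapping disks in a binary image stand in for touching droplets.
img = np.zeros((40, 40), bool)
yy, xx = np.mgrid[:40, :40]
img |= (yy - 14) ** 2 + (xx - 14) ** 2 <= 8 ** 2   # droplet 1
img |= (yy - 24) ** 2 + (xx - 24) ** 2 <= 8 ** 2   # droplet 2, overlapping

# Euclidean distance map: each foreground pixel gets its distance
# to the nearest background pixel; it peaks at droplet centres.
edm = ndimage.distance_transform_edt(img)

# local maxima of the distance map are candidate droplet centres
maxima = (edm == ndimage.maximum_filter(edm, size=7)) & (edm > 3)
labels, n_centres = ndimage.label(maxima)
centres = ndimage.center_of_mass(maxima, labels, range(1, n_centres + 1))

print(n_centres)
```

Even though the two disks form a single connected blob after thresholding, the distance map dips at the waist between them, so the two peaks remain separable.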
Matching CT and ultrasound data of the liver by landmark constrained image registration
NASA Astrophysics Data System (ADS)
Olesch, Janine; Papenberg, Nils; Lange, Thomas; Conrad, Matthias; Fischer, Bernd
2009-02-01
In navigated liver surgery the key challenge is the registration of pre-operative planning and intra-operative navigation data. Due to the patient's individual anatomy, the planning is based on segmented pre-operative CT scans, whereas ultrasound captures the actual intra-operative situation. In this paper we derive a novel method based on variational image registration methods and additionally given anatomical landmarks. For the first time, we embed the landmark information as inequality hard constraints, thereby allowing for inaccurately placed landmarks. The resulting optimization problem ensures the accuracy of the landmark fit during simultaneous intensity-based image registration. Following the discretize-then-optimize approach, the overall problem is solved by a generalized Gauss-Newton method. The resulting linear system is attacked by the MinRes solver. We demonstrate the applicability of the new approach on clinical data, which leads to convincing results.
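The discretize-then-optimize Gauss-Newton idea can be shown on a deliberately tiny problem (an editor-added toy, nothing like full 3-D CT/ultrasound registration with constraints): estimating the translation between two 1-D signals by minimizing the sum-of-squares intensity mismatch. The signal shapes and the true shift are invented for illustration.

```python
import numpy as np

# Minimize 0.5 * || T(x - t) - R(x) ||^2 over the scalar translation t.
x = np.linspace(0, 2 * np.pi, 200)
template = lambda s: np.exp(-((s - 3.0) ** 2))   # moving image T
reference = template(x - 0.7)                    # fixed image R, true t = 0.7

t = 0.0
for it in range(20):                             # Gauss-Newton iterations
    r = template(x - t) - reference              # residual
    J = 2.0 * (x - t - 3.0) * template(x - t)    # dr/dt, analytic
    t -= (J @ r) / (J @ J)                       # GN step, scalar parameter

print(round(t, 3))
```

For a single parameter the Gauss-Newton normal equations collapse to one scalar division; in the paper's setting the analogous (much larger) linear system is what the MinRes solver handles.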
An Interpersonal Analysis of Pathological Personality Traits in DSM-5
Wright, Aidan G.C.; Pincus, Aaron L.; Hopwood, Christopher J.; Thomas, Katherine M.; Markon, Kristian E.; Krueger, Robert F.
2012-01-01
The proposed changes to the personality disorder section of the DSM-5 place an increased focus on interpersonal impairment as one of the defining features of personality psychopathology. In addition, a proposed trait model has been offered as a means of capturing phenotypic variation in the expression of personality disorder. In this study, we subject the proposed DSM-5 traits to interpersonal analysis using the Inventory of Interpersonal Problems-Circumplex scales via the structural summary method for circumplex data. DSM-5 traits were consistently associated with generalized interpersonal dysfunction, suggesting that they are maladaptive in nature; the majority of traits demonstrated discriminant validity, with prototypical and differentiated interpersonal problem profiles, and conformed well to a priori hypothesized associations. These results are discussed in the context of the DSM-5 proposal and contemporary interpersonal theory, with a particular focus on potential areas for expansion of the DSM-5 trait model. PMID:22589411
ERIC Educational Resources Information Center
Hibel, Leah C.; Granger, Douglas A.; Cicchetti, Dante; Rogosch, Fred
2007-01-01
This study examined associations between medications prescribed to control children's problem behaviors and levels of, and diurnal variation in, salivary cortisol (C), testosterone (T), and dehydroepiandrosterone (DHEA). Saliva was collected in the morning, midday, and afternoon from 432 children ages 6-13 years. Relative to a no-medication…
Sugiyama, Michelle Scalise
2003-12-01
In 1966, Laura Bohannan wrote her classic essay challenging the supposition that great literary works speak to universal human concerns and conditions and, by extension, that human nature is the same everywhere. Her evidence: the Tiv of West Africa interpret Hamlet differently from Westerners. While Bohannan's essay implies that cognitive universality and cultural variation are mutually exclusive phenomena, adaptationist theory suggests otherwise. Adaptive problems ("the human condition") and cognitive adaptations ("human nature") are constant across cultures. What differs between cultures is habitat: owing to environmental variation, the means and information relevant to solving adaptive problems differ from place to place. Thus, we find differences between cultures not because human minds differ in design but largely because human habitats differ in resources and history. On this view, we would expect world literature to express both human universals and cultural particularities. Specifically, we should expect to find literary universality at the macro level (e.g., adaptive problems, cognitive adaptations) and literary variation at the micro level (e.g., local solutions to adaptive problems).
Laminar natural convection from a vertical plate with a step change in wall temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.; Yovanovich, M.M.
1991-05-01
The study of natural convection heat transfer from a vertical flat plate in a quiescent medium has attracted a great deal of interest from many investigators in the past few decades. Plates with various thermal conditions that allow similarity transformations, as well as conditions that are continuous and well defined, have been examined. However, practical problems often involve wall conditions that are arbitrary and unknown a priori. To understand and solve problems involving general nonsimilar conditions at the wall, it is useful to investigate problems subjected to a step change in wall temperature. Such problems impose a mathematical singularity and severely nonsimilar conditions at the wall. In this paper, a new analytical model that can deal with a discontinuous wall temperature variation is presented. The method results in a set of approximate solutions for temperature and velocity distributions. The validity and accuracy of the model are demonstrated by comparisons with the results of the aforementioned investigators. The agreement is excellent, and the results obtained with the solution of this work are remarkably close to the existing numerical data of Hayday et al. and the perturbation series solution of Kao.
Hartmann, Klaas; Steel, Mike
2006-08-01
The Noah's Ark Problem (NAP) is a comprehensive cost-effectiveness methodology for biodiversity conservation that was introduced by Weitzman (1998) and utilizes the phylogenetic tree containing the taxa of interest to assess biodiversity. Given a set of taxa, each of which has a particular survival probability that can be increased at some cost, the NAP seeks to allocate limited funds to conserving these taxa so that the future expected biodiversity is maximized. Finding optimal solutions using this framework is a computationally difficult problem to which a simple and efficient "greedy" algorithm has been proposed in the literature and applied to conservation problems. We show that, although algorithms of this type cannot produce optimal solutions for the general NAP, there are two restricted scenarios of the NAP for which a greedy algorithm is guaranteed to produce optimal solutions. The first scenario requires the taxa to have equal conservation cost; the second scenario requires an ultrametric tree. The NAP assumes a linear relationship between the funding allocated to conservation of a taxon and the increased survival probability of that taxon. This relationship is briefly investigated and one variation is suggested that can also be solved using a greedy algorithm.
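The equal-cost scenario in which a greedy algorithm is optimal can be illustrated with a simplified surrogate (an editor-added sketch: a separable objective, not Weitzman's full phylogenetic-diversity objective): each taxon carries a distinctiveness value, funding it at unit cost raises its survival probability, and the greedy rule funds the taxa with the largest marginal gain. All values below are invented for illustration.

```python
# Each taxon i has distinctiveness v[i] (e.g. a branch-length proxy);
# funding it at unit cost raises survival from p[i] to q[i].
# With equal costs, greedily funding the largest marginal gains
# v[i] * (q[i] - p[i]) is optimal for this separable objective.

def greedy_nap(v, p, q, budget):
    gains = sorted(range(len(v)), key=lambda i: v[i] * (q[i] - p[i]),
                   reverse=True)
    funded = set(gains[:budget])
    expected = sum(v[i] * (q[i] if i in funded else p[i])
                   for i in range(len(v)))
    return funded, expected

v = [3.0, 1.0, 2.0, 5.0]          # distinctiveness values
p = [0.2, 0.9, 0.5, 0.4]          # baseline survival probabilities
q = [0.9, 0.95, 0.8, 0.6]         # survival probability if funded
funded, ev = greedy_nap(v, p, q, budget=2)
print(sorted(funded), round(ev, 2))
```

The point of the abstract is precisely that this kind of greedy step is not optimal for the general NAP, where the tree structure couples the taxa; the separable surrogate shows only the restricted regime in which greediness is safe.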
Completed Beltrami-Michell Formulation for Analyzing Radially Symmetrical Bodies
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Saigal, Sunil; Hopkins, Dale A.; Patnaik, Surya N.
1994-01-01
A force method formulation, the completed Beltrami-Michell formulation (CBMF), has been developed for analyzing boundary value problems in elastic continua. The CBMF is obtained by augmenting the classical Beltrami-Michell formulation with novel boundary compatibility conditions. It can analyze general elastic continua with stress, displacement, or mixed boundary conditions. The CBMF alleviates the limitations of the classical formulation, which can solve stress boundary value problems only. In this report, the CBMF is specialized for plates and shells. All equations of the CBMF, including the boundary compatibility conditions, are derived from the variational formulation of the integrated force method (IFM). These equations are defined only in terms of stresses. Their solution for kinematically stable elastic continua provides stress fields without any reference to displacements. In addition, a stress function formulation for plates and shells is developed by augmenting the classical Airy's formulation with boundary compatibility conditions expressed in terms of the stress function. The versatility of the CBMF and the augmented stress function formulation is demonstrated through analytical solutions of several mixed boundary value problems. The example problems include a composite circular plate and a composite circular cylindrical shell under the simultaneous actions of mechanical and thermal loads.
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
NASA Technical Reports Server (NTRS)
Schuler, James J.; Felippa, Carlos A.
1991-01-01
Electromagnetic finite elements are extended based on a variational principle that uses the electromagnetic four potential as primary variable. The variational principle is extended to include the ability to predict a nonlinear current distribution within a conductor. The extension of this theory is first done on a normal conductor and tested on two different problems. In both problems, the geometry remains the same, but the material properties are different. The geometry is that of a 1-D infinite wire. The first problem is merely a linear control case used to validate the new theory. The second problem is made up of linear conductors with varying conductivities. Both problems perform well and predict current densities that are accurate to within a few ten thousandths of a percent of the exact values. The fourth potential is then removed, leaving only the magnetic vector potential, and the variational principle is further extended to predict magnetic potentials, magnetic fields, the number of charge carriers, and the current densities within a superconductor. The new element produces good results for the mean magnetic field, the vector potential, and the number of superconducting charge carriers despite a relatively high system condition number. The element did not perform well in predicting the current density. Numerical problems inherent to this formulation are explored and possible remedies to produce better current predicting finite elements are presented.
Application of EOF/PCA-based methods in the post-processing of GRACE derived water variations
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2010-05-01
Two problems that users of monthly GRACE gravity field solutions face are (1) the presence of correlated noise in the Stokes coefficients, which increases with harmonic degree and causes 'striping', and (2) the fact that different physical signals are overlaid and difficult to separate from each other in the data. These are termed the signal-noise separation problem and the signal-signal separation problem. Methods based on principal component analysis and empirical orthogonal functions (PCA/EOF) have frequently been proposed to deal with these problems for GRACE. However, different strategies have been applied to different (spatial: global/regional; spectral: global/order-wise; geoid/equivalent water height) representations of the GRACE level 2 data products, leading to differing results and a general feeling that PCA/EOF-based methods are to be applied 'with care'. In addition, it is known that conventional EOF/PCA methods force separated modes to be orthogonal and that, on the other hand, an arbitrary orthogonal rotation can be applied to either the EOFs or the PCs. The aim of this paper is to provide a common theoretical framework and to study the application of PCA/EOF-based methods as a signal separation tool for post-processing GRACE data products. To investigate and illustrate the applicability of PCA/EOF-based methods, we have employed them on GRACE level 2 monthly solutions based on the Center for Space Research, University of Texas (CSR/UT) RL04 products and on the ITG-GRACE03 solutions from the University of Bonn, as well as on various representations of them. Our results show that EOF modes do reveal the dominating annual, semiannual, and long-periodic signals in the global water storage variations, but they also show how choosing different strategies changes the outcome and may lead to unexpected results.
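The basic EOF/PCA decomposition the abstract builds on can be sketched on synthetic data (an editor-added illustration, not GRACE products): a space-time anomaly matrix is factored by the SVD into spatial EOF patterns and principal-component time series, and the leading modes capture the dominant annual and long-periodic signals.

```python
import numpy as np

# Synthetic space-time data: an annual cycle and a slow trend, each with
# its own spatial pattern, plus small noise.
rng = np.random.default_rng(0)
t = np.arange(120)                          # 120 monthly epochs
annual = np.sin(2 * np.pi * t / 12)         # dominant annual cycle
trend = 0.01 * t                            # long-periodic signal
space1 = rng.standard_normal(50)            # spatial pattern of mode 1
space2 = rng.standard_normal(50)            # spatial pattern of mode 2
data = np.outer(annual, space1) + np.outer(trend, space2)
data += 0.01 * rng.standard_normal(data.shape)

anom = data - data.mean(axis=0)             # remove the temporal mean
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                 # principal-component time series
eofs = vt                                   # EOF spatial patterns

explained = s ** 2 / np.sum(s ** 2)         # variance fraction per mode
print(round(explained[:2].sum(), 3))
```

The first two modes absorb nearly all the variance here; the abstract's caution is that on real GRACE fields the recovered modes mix physical signals, and enforced orthogonality (or an arbitrary rotation of EOFs/PCs) can change the interpretation.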
Identification of related gene/protein names based on an HMM of name variations.
Yeganova, L; Smith, L; Wilbur, W J
2004-04-01
Gene and protein names follow few, if any, true naming conventions and are subject to great variation in different occurrences of the same name. This gives rise to two important problems in natural language processing. First, can one locate the names of genes or proteins in free text, and second, can one determine when two names denote the same gene or protein? The first of these problems is a special case of the problem of named entity recognition, while the second is a special case of the problem of automatic term recognition (ATR). We study the second problem, that of gene or protein name variation. Here we describe a system which, given a query gene or protein name, identifies related gene or protein names in a large list. The system is based on a dynamic programming algorithm for sequence alignment in which the mutation matrix is allowed to vary under the control of a fully trainable hidden Markov model.
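The dynamic-programming alignment at the core of the system can be sketched with a plain fixed-score aligner (an editor-added illustration: the paper trains the mutation matrix with an HMM, whereas the scores below are invented constants): candidate names are ranked by their alignment score against the query.

```python
# Needleman-Wunsch global alignment with fixed match/mismatch/gap scores.
def align_score(a, b, match=2, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitution/match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[m][n]

query = "tnf alpha"
candidates = ["tnf-alpha", "tnfa", "il6", "egfr"]
ranked = sorted(candidates, key=lambda c: -align_score(query, c))
print(ranked[0])
```

Replacing the fixed scores with position-dependent costs learned by a trainable HMM is what lets the full system absorb the systematic ways gene and protein names vary.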
Kim, Seon-Ha; Jo, Min-Woo; Ock, Minsu; Lee, Sang-Il
2017-11-01
This study aimed to explore dimensions in addition to the 5 dimensions of the 5-level EQ-5D version (EQ-5D-5L) that could satisfactorily explain variation in health-related quality of life (HRQoL) in the general population of South Korea. Domains related to HRQoL were searched through a review of existing HRQoL instruments. Among the 28 potential dimensions, the 5 dimensions of the EQ-5D-5L and 7 additional dimensions (vision, hearing, communication, cognitive function, social relationships, vitality, and sleep) were included. A representative sample of 600 subjects was selected for the survey, which was administered through face-to-face interviews. Subjects were asked to report problems in 12 health dimensions at 5 levels, as well as their self-rated health status using the EuroQol visual analogue scale (EQ-VAS) and a 5-point Likert scale. Among subjects who reported no problems for any of the parameters in the EQ-5D-5L, we analyzed the frequencies of problems in the additional dimensions. A linear regression model with the EQ-VAS as the dependent variable was performed to identify additional significant dimensions. Among respondents who reported full health on the EQ-5D-5L (n=365), 32% reported a problem for at least 1 additional dimension, and 14% reported worse than moderate self-rated health. Regression analysis revealed an R2 of 0.228 for the original EQ-5D-5L dimensions, 0.200 for the new dimensions, and 0.263 for the 12 dimensions together. Among the added dimensions, vitality and sleep were significantly associated with EQ-VAS scores. This study identified significant dimensions for assessing self-rated health among members of the general public, in addition to the 5 dimensions of the EQ-5D-5L. These dimensions could be considered for inclusion in a new preference-based instrument or for developing a country-specific HRQoL instrument.
Variation in ejecta size with ejection velocity
NASA Technical Reports Server (NTRS)
Vickery, Ann M.
1987-01-01
The sizes and ranges of over 25,000 secondary craters around twelve large primaries on three different planets were measured and used to infer the size-velocity distribution of the portion of the primary-crater ejecta that produced the secondaries. The ballistic equation for spherical bodies was used to convert the ranges to velocities, and the velocities and crater sizes were used in the appropriate Schmidt-Holsapple scaling relation to estimate ejecta sizes, from which the velocity exponent was determined. The exponents are generally between -1 and -13, with an average value of about -1.9. Problems with data collection made it impossible to determine a simple, unique relation between size and velocity.
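The range-to-velocity conversion step can be sketched in its simplest, flat-target form (an editor-added illustration: the paper uses the full ballistic equation for a spherical body, and the 45-degree ejection angle and lunar gravity value below are illustrative assumptions): for launch angle theta, the flat-surface range is R = v^2 sin(2*theta)/g, so v = sqrt(R*g/sin(2*theta)).

```python
import math

# Flat-surface ballistic range inversion: v = sqrt(R * g / sin(2*theta)).
def ejection_velocity(range_m, g, theta_deg=45.0):
    return math.sqrt(range_m * g / math.sin(2 * math.radians(theta_deg)))

g_moon = 1.62                           # lunar surface gravity, m/s^2
v = ejection_velocity(100e3, g_moon)    # secondary crater 100 km downrange
print(round(v, 1), "m/s")
```

For ranges that are a significant fraction of the planetary radius, the flat-surface formula underestimates the curvature effect, which is why the spherical-body equation is needed in practice.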
Pressure broadening of the ((dtμ)dee)* formation resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J.S.; Leon, M.; Padial, N.T.
1988-12-27
The treatment of ((dtμ)dee)* formation at high densities as a pressure broadening process is discussed. Cross sections for collisions of the complex (dtμ)dee, and of the D2 molecule from which it is formed, with the bath molecules have been accurately calculated. These cross sections are used to calculate the collisional width in three variations of the impact approximation that have been proposed for this problem. In general, the quasistatic approximation is shown to satisfy the usual conditions of muon-catalyzed fusion better than does the impact approximation. A preliminary rough treatment is presented to illustrate the quasistatic approximation.
Truchot, J P
1975-12-01
1. Blood acid-base changes were studied at 17 degrees C in immersed crabs (Carcinus maenas) exposed to hypoxic and hyperoxic conditions, by measuring the pH and the CO2 partial pressure, PbCO2, and by calculating the bicarbonate concentration. 2. Hyperoxia first induces a marked respiratory acidosis with a rise of PbCO2. This acidosis is compensated thereafter by a non-ventilatory increase of the blood buffer base concentration. These results are discussed in relation to the general problems concerning the control of the blood acid-base balance in aquatic animals.
The Origins of Diverse Domains of Mathematics: Generalist Genes but Specialist Environments
Kovas, Y.; Petrill, S. A.; Plomin, R.
2009-01-01
The authors assessed 2,502 ten-year-old children, members of 1,251 pairs of twins, on a Web-based battery of problems from 5 diverse aspects of mathematics assessed as part of the U.K. national curriculum. This 1st genetic study into the etiology of variation in different domains of mathematics showed that the heritability estimates were moderate and highly similar across domains and that these genetic influences were mostly general. Environmental factors unique to each twin in a family (rather than shared by the 2 twins) explained most of the remaining variance, and these factors were mostly specific to each domain. PMID:19756208
Scientific data interpolation with low dimensional manifold model
NASA Astrophysics Data System (ADS)
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
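The Laplacian-based interpolation step can be sketched in a far simpler form (an editor-added illustration: a plain harmonic fill on a regular grid, not the paper's patch-manifold regularizer or weighted graph Laplacian): missing grid values are relaxed toward the average of their neighbours, i.e. the discrete Laplace equation is solved on the unknown set while known samples stay fixed. The grid size, test function, and missing fraction are invented for illustration.

```python
import numpy as np

# Smooth ground truth on a 32x32 grid, with 30% of samples missing.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
truth = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

mask = rng.random(truth.shape) < 0.3       # True where the sample is missing
data = np.where(mask, 0.0, truth)          # zero-filled initial guess

for _ in range(500):                       # Jacobi relaxation on missing set
    avg = 0.25 * (np.roll(data, 1, 0) + np.roll(data, -1, 0) +
                  np.roll(data, 1, 1) + np.roll(data, -1, 1))
    data = np.where(mask, avg, truth)      # keep known samples fixed

rmse = np.sqrt(np.mean((data - truth)[mask] ** 2))
print(round(rmse, 3))
```

Harmonic filling recovers smooth fields well when the gaps are scattered; the low-dimensional patch-manifold prior in the paper is aimed at the harder case of structured data with large contiguous gaps.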
NASA Astrophysics Data System (ADS)
Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan
2017-12-01
We examine students' mathematical performance on quantitative "synthesis problems" with varying mathematical complexity. Synthesis problems are tasks comprising multiple concepts typically taught in different chapters. Mathematical performance refers to the formulation, combination, and simplification of equations. Generally speaking, formulation and combination of equations require conceptual reasoning; simplification of equations requires manipulation of equations as computational tools. Mathematical complexity is operationally defined by the number and the type of equations to be manipulated concurrently due to the number of unknowns in each equation. We use two types of synthesis problems, namely, sequential and simultaneous tasks. Sequential synthesis tasks require a chronological application of pertinent concepts, and simultaneous synthesis tasks require a concurrent application of the pertinent concepts. A total of 179 physics major students from a second year mechanics course participated in the study. Data were collected from written tasks and individual interviews. Results show that mathematical complexity negatively influences the students' mathematical performance on both types of synthesis problems. However, for the sequential synthesis tasks, it interferes only with the students' simplification of equations. For the simultaneous synthesis tasks, mathematical complexity additionally impedes the students' formulation and combination of equations. Several reasons may explain this difference, including the students' different approaches to the two types of synthesis problems, cognitive load, and the variation of mathematical complexity within each synthesis type.
Calibration of Lévy Processes with American Options
NASA Astrophysics Data System (ADS)
Achdou, Yves
We study options on financial assets whose discounted prices are exponentials of Lévy processes. The price of an American vanilla option, as a function of the maturity and the strike, satisfies a linear complementarity problem involving a non-local partial integro-differential operator. It leads to a variational inequality in a suitable weighted Sobolev space. Calibrating the Lévy process may be done by solving an inverse least-squares problem in which the state variable satisfies the previously mentioned variational inequality. We first assume that the volatility is positive: after carefully studying the direct problem, we propose necessary optimality conditions for the least-squares inverse problem. We also consider the direct problem when the volatility is zero.
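The linear complementarity structure can be illustrated on the simplest possible instance (an editor-added sketch: a 1-D obstacle problem with the discrete Laplacian, not the paper's non-local partial integro-differential operator): find u >= psi with A u >= f and (A u - f)(u - psi) = 0, solved by projected Gauss-Seidel. The obstacle shape and grid size below are invented for illustration.

```python
import numpy as np

# 1-D obstacle problem on (0, 1) with zero boundary values:
# -u'' >= 0, u >= psi, complementarity between the two.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
psi = 0.3 - 2.0 * (x - 0.5) ** 2           # obstacle, positive in the middle
f = np.zeros(n)                            # no source term
u = np.maximum(psi, 0.0)                   # feasible initial guess

for sweep in range(2000):                  # projected Gauss-Seidel
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        gs = 0.5 * (left + right + h * h * f[i])   # unconstrained GS update
        u[i] = max(gs, psi[i])             # projection onto u >= psi

contact = int(np.sum(np.isclose(u, psi)))  # points where u touches psi
print(contact > 0)
```

For American options the obstacle is the payoff and the contact set is the early-exercise region; the same projected-iteration idea carries over once the operator is discretized.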
Rijlaarsdam, Jolien; van IJzendoorn, Marinus H.; Verhulst, Frank C; Jaddoe, Vincent W. V.; Felix, Janine F.; Tiemeier, Henning; Bakermans-Kranenburg, Marian J.
2017-01-01
Lay Abstract The gene encoding the oxytocin receptor (OXTR), localized on chromosome 3p25, is considered a promising candidate for explaining genetic vulnerability to autistic traits. Although several lines of evidence implicate OXTR SNP rs53576 (G/A) variation in social behavior, findings have been inconsistent, possibly because DNA methylation after stress exposure was eliminated from consideration. This study investigated the main and interactive effects of OXTR rs53576 genotype, stress exposure, and OXTR methylation on child autistic traits. Prenatal maternal stress exposure, but not OXTR rs53576 genotype and OXTR methylation, showed a main effect on child autistic traits. For child autistic traits in general and social communication problems in particular, we observed a significant OXTR rs53576 genotype by OXTR methylation interaction. More specifically, OXTR methylation levels were positively associated with social problems for OXTR rs53576 G-allele homozygous children but not for A-allele carriers. These results highlight the importance of incorporating epi-allelic information and support the role of OXTR methylation in child autistic traits. Scientific Abstract Findings of studies investigating OXTR SNP rs53576 (G-A) variation in social behavior have been inconsistent, possibly because DNA methylation after stress exposure was eliminated from consideration. Our goal was to examine OXTR rs53576 allele-specific sensitivity for neonatal OXTR DNA methylation in relation to (1) a prenatal maternal stress composite, and (2) child autistic traits. Prospective data from fetal life to age 6 years were collected in a total of 743 children participating in the Generation R Study. Prenatal maternal stress exposure was uniquely associated with child autistic traits but was unrelated to OXTR methylation across both OXTR rs53576 G-allele homozygous children and A-allele carriers. 
For child autistic traits in general and social communication problems in particular, we observed a significant OXTR rs53576 genotype by OXTR methylation interaction in the absence of main effects, suggesting that opposing effects cancelled each other out. Indeed, OXTR methylation levels were positively associated with social problems for OXTR rs53576 G-allele homozygous children but not for A-allele carriers. These results highlight the importance of incorporating epi-allelic information and support the role of OXTR methylation in child autistic traits. PMID:27520745
The initial value problem in Lagrangian drift kinetic theory
NASA Astrophysics Data System (ADS)
Burby, J. W.
2016-06-01
Existing high-order variational drift kinetic theories contain unphysical rapidly varying modes that are not seen at low orders. These unphysical modes, which may be rapidly oscillating, damped, or growing, are ushered in by a failure of conventional high-order drift kinetic theory to preserve the structure of its parent model's initial value problem. In short, the (infinite-dimensional) system phase space is unphysically enlarged in conventional high-order variational drift kinetic theory. I present an alternative, 'renormalized' variational approach to drift kinetic theory that manifestly respects the parent model's initial value problem. The basic philosophy underlying this alternate approach is that high-order drift kinetic theory ought to be derived by truncating the all-orders system phase-space Lagrangian instead of the usual 'field particle' Lagrangian. For the sake of clarity, this story is told first through the lens of a finite-dimensional toy model of high-order variational drift kinetics; the analogous full-on drift kinetic story is discussed subsequently. The renormalized drift kinetic system, while variational and just as formally accurate as conventional formulations, does not support the troublesome rapidly varying modes.
A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks
NASA Astrophysics Data System (ADS)
Gong, Jie; Zhou, Sheng; Niu, Zhisheng
The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.
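The sleeping-mode trade-off can be shown in a deliberately tiny dynamic program (an editor-added toy: a single base station and a scalar traffic profile, not the paper's multi-cell cooperative model): the DP over time slots balances active-mode energy, a QoS penalty for sleeping under load, and a cost for each mode switch. All cost parameters below are invented for illustration.

```python
# DP over time slots; mode 0 = sleep, mode 1 = active.
def plan_modes(traffic, e_active=1.0, qos_penalty=5.0, switch_cost=2.0):
    INF = float("inf")
    cost = [0.0, 0.0]          # best cost so far ending in each mode
    path = [[], []]            # mode sequence achieving that cost
    for load in traffic:
        step = [load * qos_penalty, e_active]   # per-slot cost of each mode
        new_cost, new_path = [INF, INF], [None, None]
        for m in (0, 1):
            for prev in (0, 1):
                c = cost[prev] + step[m] + (switch_cost if prev != m else 0)
                if c < new_cost[m]:
                    new_cost[m], new_path[m] = c, path[prev] + [m]
        cost, path = new_cost, new_path
    best = 0 if cost[0] <= cost[1] else 1
    return path[best], cost[best]

modes, total = plan_modes([0.0, 0.0, 0.1, 1.0, 1.0, 0.0])
print(modes, round(total, 1))
```

Note the switching cost keeps the station active in the final low-traffic slot: flipping back to sleep would cost more than one slot of active energy, which is exactly the mode-thrashing the paper's formulation penalizes.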
Quantitative knowledge acquisition for expert systems
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) Expert System from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 Algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.
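The ID3 step above chooses which problem parameter to split on by information gain. The following is a minimal sketch of that criterion only (the ANOVA screening step is omitted); the attribute/label example data are hypothetical, not the NSM data set.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """ID3 criterion: reduction in label entropy from splitting on an attribute."""
    n = len(labels)
    split_entropy = 0.0
    for v in set(attribute_values):
        subset = [lab for a, lab in zip(attribute_values, labels) if a == v]
        split_entropy += (len(subset) / n) * entropy(subset)
    return entropy(labels) - split_entropy
```

An attribute that perfectly separates the classes yields a gain equal to the full label entropy; an attribute independent of the classes yields zero gain, so ID3 prefers the former at each node.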
Hamilton-Jacobi theory in multisymplectic classical field theories
NASA Astrophysics Data System (ADS)
de León, Manuel; Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso; Vilariño, Silvia
2017-09-01
The geometric framework for the Hamilton-Jacobi theory developed in the studies of Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 3(7), 1417-1458 (2006)], Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 13(2), 1650017 (2015)], and de León et al. [Variations, Geometry and Physics (Nova Science Publishers, New York, 2009)] is extended for multisymplectic first-order classical field theories. The Hamilton-Jacobi problem is stated for the Lagrangian and the Hamiltonian formalisms of these theories as a particular case of a more general problem, and the classical Hamilton-Jacobi equation for field theories is recovered from this geometrical setting. Particular and complete solutions to these problems are defined and characterized in several equivalent ways in both formalisms, and the equivalence between them is proved. The use of distributions in jet bundles that represent the solutions to the field equations is the fundamental tool in this formulation. Some examples are analyzed and, in particular, the Hamilton-Jacobi equation for non-autonomous mechanical systems is obtained as a special case of our results.
High-speed reacting flow simulation using USA-series codes
NASA Astrophysics Data System (ADS)
Chakravarthy, S. R.; Palaniswamy, S.
In this paper, the finite-rate chemistry (FRC) formulation for the USA-series of codes and three sets of validations are presented. USA-series computational fluid dynamics (CFD) codes are based on Unified Solution Algorithms, including explicit and implicit formulations, factorization and relaxation approaches, time-marching and space-marching methodologies, etc., in order to be able to solve a very wide class of CFD problems within a single framework. Euler or Navier-Stokes equations are solved using a finite-volume treatment with upwind Total Variation Diminishing discretization for the inviscid terms. Perfect and real gas options are available, including equilibrium and nonequilibrium chemistry. This capability has been widely used to study various problems including Space Shuttle exhaust plumes, National Aerospace Plane (NASP) designs, etc. (1) Numerical solutions are presented showing the full range of possible solutions to steady detonation wave problems. (2) A comparison between the solution obtained by the USA code and the Generalized Kinetics Analysis Program (GKAP) is shown for supersonic combustion in a duct. (3) Simulation of combustion in a supersonic shear layer is shown to be in reasonable agreement with experimental observations.
Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.
2012-06-21
We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.
NASA Technical Reports Server (NTRS)
Shia, R.-L.; Yung, Y. L.
1986-01-01
The problem of multiple scattering of nonpolarized light in a planetary body of arbitrary shape illuminated by a parallel beam is formulated using the integral equation approach. There exists a simple functional whose stationarity condition is equivalent to solving the equation of radiative transfer and whose value at the stationary point is proportional to the differential cross section. The analysis reveals a direct relation between the microscopic symmetry of the phase function for each scattering event and the macroscopic symmetry of the differential cross section for the entire planetary body, and the interconnection of these symmetry relations and the variational principle. The case of a homogeneous sphere containing isotropic scatterers is investigated in detail. It is shown that the solution can be expanded in a multipole series such that the general spherical problem is reduced to solving a set of decoupled integral equations in one dimension. Computations have been performed for a range of parameters of interest, and illustrative examples of applications to planetary problems are provided.
NASA Astrophysics Data System (ADS)
Kudryavtsev, O.; Rodochenko, V.
2018-03-01
We propose a new general numerical method aimed at solving integro-differential equations with variable coefficients. The problem under consideration arises in finance, in the context of pricing barrier options in a wide class of stochastic volatility models with jumps. To handle the effect of the correlation between the price and the variance, we use a suitable substitution for the processes. Then we construct a Markov-chain approximation for the variance process on small time intervals and apply a maturity randomization technique. The result is a system of boundary problems for integro-differential equations with constant coefficients on the line at each vertex of the chain. We solve the arising problems using a numerical Wiener-Hopf factorization method. The approximate formulae for the factors are efficiently implemented by means of the Fast Fourier Transform. Finally, we use a recurrent procedure that moves backwards in time on the variance tree. We demonstrate the convergence of the method using Monte-Carlo simulations and compare our results with the results obtained by the Wiener-Hopf method with closed-form expressions of the factors.
On convergence of solutions to variational-hemivariational inequalities
NASA Astrophysics Data System (ADS)
Zeng, Biao; Liu, Zhenhai; Migórski, Stanisław
2018-06-01
In this paper we investigate the convergence behavior of the solutions to the time-dependent variational-hemivariational inequalities with respect to the data. First, we give an existence and uniqueness result for the problem, and then, deliver a continuous dependence result when all the data are subjected to perturbations. A semipermeability problem is given to illustrate our main results.
ERIC Educational Resources Information Center
Santos, Elvira Santos; Garcia, Irma Cruz Gavilan; Gomez, Eva Florencia Lejarazo; Vilchis-Reyes, Miguel Angel
2010-01-01
A series of experiments based on problem-solving and collaborative-learning pedagogies are described that encourage students to interpret results and draw conclusions from data. Different approaches including parallel library synthesis, solvent variation, and leaving group variation are used to study a nucleophilic aromatic substitution of…
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
NASA Astrophysics Data System (ADS)
Ageev, A. I.; Golubkina, I. V.; Osiptsov, A. N.
2018-01-01
A slow steady flow of a viscous fluid over a superhydrophobic surface with a periodic striped system of 2D rectangular microcavities is considered. The microcavities contain small gas bubbles on the curved surface of which the shear stress vanishes. The general case is analyzed when the bubble occupies only a part of the cavity, and the flow velocity far from the surface is directed at an arbitrary angle to the cavity edge. Due to the linearity of the Stokes flow problem, the solution is split into two parts, corresponding to the flows perpendicular and along the cavities. Two variants of a boundary element method are developed and used to construct numerical solutions on the scale of a single cavity with periodic boundary conditions. By averaging these solutions, the average slip velocity and the slip length tensor components are calculated over a wide range of variation of governing parameters for the cases of a shear-driven flow and a pressure-driven channel flow. For a sufficiently high pressure drop in a microchannel of finite length, the variation of the bubble surface shift into the cavities induced by the streamwise pressure variation is estimated from numerical calculations.
Rapid Detection of Positive Selection in Genes and Genomes Through Variation Clusters
Wagner, Andreas
2007-01-01
Positive selection in genes and genomes can point to the evolutionary basis for differences among species and among races within a species. The detection of positive selection can also help identify functionally important protein regions and thus guide protein engineering. Many existing tests for positive selection are excessively conservative, vulnerable to artifacts caused by demographic population history, or computationally very intensive. I here propose a simple and rapid test that is complementary to existing tests and that overcomes some of these problems. It relies on the null hypothesis that neutrally evolving DNA regions should show a Poisson distribution of nucleotide substitutions. The test detects significant deviations from this expectation in the form of variation clusters, highly localized groups of amino acid changes in a coding region. In applying this test to several thousand human–chimpanzee gene orthologs, I show that such variation clusters are not generally caused by relaxed selection. They occur in well-defined domains of a protein's tertiary structure and show a large excess of amino acid replacement over silent substitutions. I also identify multiple new human–chimpanzee orthologs subject to positive selection, among them genes that are involved in reproductive functions, immune defense, and the nervous system. PMID:17603100
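The Poisson null hypothesis above admits a very short sliding-window test: estimate the neutral substitution rate genome-wide, then ask how surprising the count in a given window is. This is a hedged sketch of that idea only; the function name and the example counts are illustrative, not the paper's procedure or data.

```python
import math

def cluster_p_value(k, window_len, genome_len, total_subs):
    """Upper-tail P(X >= k) for X ~ Poisson(rate * window_len), with the neutral
    rate estimated from the genome-wide substitution count."""
    lam = total_subs * window_len / genome_len
    # P(X <= k-1), then take the complement for the upper tail
    p_below = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_below
```

For instance, with 10 substitutions over a 10,000-nt alignment, seeing 4 of them inside one 100-nt window has probability well under 10^-4 under the Poisson null, so that window would be flagged as a variation cluster.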
Schrödinger and Dirac solutions to few-body problems
NASA Astrophysics Data System (ADS)
Muolo, Andrea; Reiher, Markus
We elaborate on the variational solution of the Schrödinger and Dirac equations for small atomic and molecular systems without relying on the Born-Oppenheimer approximation. The all-particle equations of motion are solved in a numerical procedure that relies on the variational principle, Cartesian coordinates, and parametrized explicitly correlated Gaussian functions. A stochastic optimization of the variational parameters allows the calculation of accurate wave functions for ground and excited states. Expectation values such as the radial and angular distribution functions or the dipole moment can be calculated. We developed a simple strategy for the elimination of the global translation that allows us to adopt laboratory-fixed Cartesian coordinates in general. Simple expressions for the coordinates and operators are then preserved throughout the formalism. For relativistic calculations we devised a kinetic-balance condition for explicitly correlated basis functions. We demonstrate that the kinetic-balance condition can be obtained from the row reduction process commonly applied to solve systems of linear equations. The resulting form of kinetic balance establishes a relation between all components of the spinor of an N-fermion system. ETH Zürich, Laboratorium für Physikalische Chemie, CH-8093 Zürich, Switzerland.
A review on environmental factors regulating arsenic methylation in humans.
Tseng, Chin-Hsiao
2009-03-15
Subjects exposed to arsenic show significant inter-individual variation in urinary patterns of arsenic metabolites but insignificant day-to-day intra-individual variation. The inter-individual variation in arsenic methylation can be partly responsible for the variation in susceptibility to arsenic toxicity. Wide inter-ethnic variation and family correlation in urinary arsenic profile suggest a genetic effect on arsenic metabolism. In this paper the environmental factors affecting arsenic metabolism are reviewed. Methylation capacity might reduce with increasing dosage of arsenic exposure. Furthermore, women, especially during pregnancy, have better methylation capacity than their male counterparts, probably due to the effect of estrogen. Children might have better methylation capacity than adults, and age shows inconsistent relevance in adults. Smoking and alcohol consumption might be associated with a poorer methylation capacity. Nutritional status is important in the methylation capacity, and folate may facilitate the methylation and excretion of arsenic. In addition, general health conditions and medications might influence the arsenic methylation capacity, and technical problems can cause biased estimates. The consumption of seafood, seaweed, rice and other food with high arsenic contents and the extent of cooking and arsenic-containing water used in food preparation may also interfere with the presentation of the urinary arsenic profile. Future studies are necessary to clarify the effects of the various arsenic metabolites, including the trivalent methylated forms, on the development of arsenic-induced human diseases with the consideration of the effects of confounding factors and the interactions with other effect modifiers.
Experiences with online consultation systems in primary care: case study of one early adopter site
Casey, Michael; Shaw, Sara; Swinglehurst, Deborah
2017-01-01
Background There is a strong policy drive towards implementing alternatives to face-to-face consultations in general practice to improve access, efficiency, and cost-effectiveness. These alternatives embrace novel technologies that are assumed to offer potential to improve care. Aim To explore the introduction of one online consultation system (Tele-Doc) and how it shapes working practices. Design and setting Mixed methods case study in an inner-city general practice. Method The study was conducted through interviews with IT developers, clinicians, and administrative staff, and scrutiny of documents, websites, and demonstrator versions of Tele-Doc, followed by thematic analysis and discourse analysis. Results Three interrelated themes were identified: online consultation systems as innovation, managing the ‘messiness’ of general practice consultations, and redistribution of the work of general practice. These themes raise timely questions about what it means to consult in contemporary general practice. Uptake of Tele-Doc by patients was low. Much of the work of the consultation was redistributed to patients and administrators, sometimes causing misunderstandings. The ‘messiness’ of consultations was hard to eliminate. In-house training focused on the technical application rather than associated transformations to practice work that were not anticipated. GPs welcomed varied modes of consulting, but the aspiration of improved efficiency was not realised in practice. Conclusion Tele-Doc offers a new kind of consultation that is still being worked out in practice. It may offer convenience for patients with discrete, single problems, and a welcome variation to GPs’ workload. Tele-Doc’s potential for addressing more complex problems and achieving efficiency is less clear, and its adoption may involve unforeseeable consequences. PMID:28993306
Variation in Strategy Use across Grade Level by Pattern Generalization Types
ERIC Educational Resources Information Center
El Mouhayar, Rabih; Jurdak, Murad
2015-01-01
This paper explored variation of strategy use in pattern generalization across different generalization types and across grade level. A test was designed to assess students' strategy use in pattern generalization tasks. The test was given to a sample of 1232 students from grades 4 to 11 from five schools in Lebanon. The findings of this study…
The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1. General
US Army Research Laboratory (ARL-TR-8348)
2018-04-01
...systems containing ionized gases. 2. Gibbs Method in the Integral Form: As per the Gibbs general methodology, based on the concept of heterogeneous...
Andersson, Petter; Löfstedt, Christer; Hambäck, Peter A
2013-12-01
Habitat area is an important predictor of spatial variation in animal densities. However, the area often correlates with the quantity of resources within habitats, complicating our understanding of the factors shaping animal distributions. We addressed this problem by investigating densities of insect herbivores in habitat patches with a constant area but varying numbers of plants. Using a mathematical model, predictions of scale-dependent immigration and emigration rates for insects into patches with different densities of host plants were derived. Moreover, a field experiment was conducted where the scaling properties of odour-mediated attraction in relation to the number of odour sources were estimated, in order to derive a prediction of immigration rates of olfactory searchers. The theoretical model predicted that we should expect immigration rates of contact and visual searchers to be determined by patch area, with a steep scaling coefficient, μ = -1. The field experiment suggested that olfactory searchers should show a less steep scaling coefficient, with μ ≈ -0.5. A parameter estimation and analysis of published data revealed a correspondence between observations and predictions, and density-variation among groups could largely be explained by search behaviour. Aphids showed scaling coefficients corresponding to the prediction for contact/visual searchers, whereas moths, flies and beetles corresponded to the prediction for olfactory searchers. As density responses varied considerably among groups, and variation could be explained by a certain trait, we conclude that a general theory of insect responses to habitat heterogeneity should be based on shared traits, rather than a general prediction for all species.
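The two predicted scaling coefficients above (μ = -1 for contact/visual searchers, μ ≈ -0.5 for olfactory searchers) can be estimated from density data as the slope of a log-log regression. The sketch below recovers μ from synthetic, noise-free densities and is purely illustrative; the plant counts and variable names are assumptions.

```python
import numpy as np

# Synthetic densities obeying density ~ plants ** mu for the two search modes.
plants = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
density_contact = plants ** -1.0     # contact/visual searchers, mu = -1
density_olfactory = plants ** -0.5   # olfactory searchers, mu ~ -0.5

# mu is the slope of log(density) against log(plants).
mu_contact = np.polyfit(np.log(plants), np.log(density_contact), 1)[0]
mu_olfactory = np.polyfit(np.log(plants), np.log(density_olfactory), 1)[0]
```

On real field data the same fit would carry noise, so the interest lies in whether the estimated slope for a taxon falls nearer -1 or -0.5.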
Convergent Close-Coupling Approach to Electron-Atom Collisions
NASA Technical Reports Server (NTRS)
Bray, Igor; Stelbovics, Andris
2007-01-01
It was a great pleasure and an honour to accept the invitation to make a presentation at the symposium celebrating the life-long work of Aaron Temkin and Richard Drachman. The work of Aaron Temkin was particularly influential on our own during the development of the CCC method for electron-atom collisions. There are a number of key problems that need to be dealt with when developing a general computational approach to such collisions. Traditionally, the electron energy range was subdivided into the low, intermediate, and high energies. At low energies only a finite number of channels are open, and variational or close-coupling techniques can be used to obtain accurate results. At high energies an infinite number of discrete channels and the target continuum are open, but perturbative techniques are able to yield accurate results. However, at intermediate energies perturbative techniques fail, and computational approaches need to be found for treating the infinite number of open channels. In addition, there are also problems associated with the identical nature of electrons and the difficulty of implementing the boundary conditions for ionization processes. The beauty of the Temkin-Poet model of electron-hydrogen scattering is that it simplifies the full computational problem by neglecting any non-zero orbital angular momenta in the partial-wave expansion, without losing the complexity associated with the above-mentioned problems. The unique nature of the problem allowed for accurate solution, leading to benchmark results which could then be used to test the much more general approaches to electron-atom collision problems. The immense value of the Temkin-Poet model is readily summarised by the fact that the initial papers of Temkin and Poet have been collectively cited around 250 times to date and are still being cited at present. Many of the citations came from our own work during the course of the development of the CCC method, which we now describe.
Conservation laws with coinciding smooth solutions but different conserved variables
NASA Astrophysics Data System (ADS)
Colombo, Rinaldo M.; Guerra, Graziano
2018-04-01
Consider two hyperbolic systems of conservation laws in one space dimension with the same eigenvalues and (right) eigenvectors. We prove that solutions to Cauchy problems with the same initial data differ at third order in the total variation of the initial datum. As a first application, relying on the classical Glimm-Lax result (Glimm and Lax in Decay of solutions of systems of nonlinear hyperbolic conservation laws. Memoirs of the American Mathematical Society, No. 101. American Mathematical Society, Providence, 1970), we obtain estimates improving those in Saint-Raymond (Arch Ration Mech Anal 155(3):171-199, 2000) on the distance between solutions to the isentropic and non-isentropic inviscid compressible Euler equations, under general equations of state. Further applications are to the general scalar case, where rather precise estimates are obtained, to an approximation by Di Perna of the p-system and to a traffic model.
NASA Astrophysics Data System (ADS)
Ni, Yong; He, Linghui; Khachaturyan, Armen G.
2010-07-01
A phase field method is proposed to determine the equilibrium fields of a magnetoelectroelastic multiferroic with arbitrarily distributed constitutive constants under applied loadings. This method is based on a developed generalized Eshelby's equivalency principle, in which the elastic strain, electrostatic, and magnetostatic fields at the equilibrium in the original heterogeneous system are exactly the same as those in an equivalent homogeneous magnetoelectroelastic coupled or uncoupled system with properly chosen distributed effective eigenstrain, polarization, and magnetization fields. Finding these effective fields fully solves the equilibrium elasticity, electrostatics, and magnetostatics in the original heterogeneous multiferroic. The paper formulates a variational principle proving that the effective fields are minimizers of an appropriate closed-form energy functional. The proposed phase field approach produces the energy-minimizing effective fields (and thus solves the general multiferroic problem) as a result of an artificial relaxation process described by the Ginzburg-Landau-Khalatnikov kinetic equations.
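The artificial relaxation idea above, evolving a field down the gradient of an energy functional until it reaches a minimizer, can be sketched in 1D for a scalar order parameter. This is only an illustration of Ginzburg-Landau-Khalatnikov-style relaxation with an assumed double-well-plus-gradient energy; the parameters and names are not the paper's multiferroic functional.

```python
import numpy as np

def relax(phi, kappa=1.0, dx=0.1, dt=0.001, steps=2000):
    """Explicit gradient-flow relaxation of a periodic 1D field phi."""
    for _ in range(steps):
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        # variational derivative of E = sum[(phi^2-1)^2/4 + kappa/2 |grad phi|^2]
        dE = phi**3 - phi - kappa * lap
        phi = phi - dt * dE
    return phi

def energy(phi, kappa=1.0, dx=0.1):
    """Discrete double-well + gradient energy with periodic boundaries."""
    grad = (np.roll(phi, -1) - phi) / dx
    return np.sum((phi**2 - 1.0)**2 / 4.0 + kappa / 2.0 * grad**2) * dx

# usage: relax a small random field; the energy decreases monotonically
phi0 = np.random.default_rng(0).normal(0.0, 0.1, 128)
phi1 = relax(phi0)
```

The time step is chosen below the explicit-Euler stability limit dt <= dx^2 / (2*kappa), so each step strictly lowers the energy, mimicking how the relaxation dynamics deliver the energy-minimizing field.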
A Review of Depth and Normal Fusion Algorithms
Štolc, Svorad; Pock, Thomas
2018-01-01
Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
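The least-squares fusion formulation mentioned above can be sketched in 1D: fit a depth profile simultaneously to noisy depth samples and to slopes derived from the surface normals. This is a hedged baseline sketch, not the paper's TGV method or its normal weighting; the weight `w` and function name are assumptions.

```python
import numpy as np

def fuse_depth_normals(d, s, h=1.0, w=10.0):
    """Least-squares fusion (1D): find z minimizing
    sum (z_i - d_i)^2 + w * sum ((z_{i+1} - z_i)/h - s_i)^2,
    where d are depth samples and s are normal-derived slopes."""
    n = len(d)
    # forward-difference operator: (D z)_i = (z_{i+1} - z_i) / h
    D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h
    A = np.vstack([np.eye(n), np.sqrt(w) * D])
    b = np.concatenate([d, np.sqrt(w) * s])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```

When the depth and slope measurements are mutually consistent the residual is zero and the input depths are reproduced; with noisy inputs, increasing `w` shifts trust from the depth term toward the normal (slope) term, which is the trade-off the reviewed algorithms tune.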
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasham, M.J.R.; Sarmiento, J.L.; Slater, R.D.
1993-06-01
One important theme of modern biological oceanography has been the attempt to develop models of how the marine ecosystem responds to variations in physical forcing functions such as solar radiation and the wind field. The authors have addressed the problem by embedding simple ecosystem models into a seasonally forced three-dimensional general circulation model of the North Atlantic Ocean. In this paper, some of the underlying biological assumptions of the ecosystem model are first presented, followed by an analysis of how well the model predicts the seasonal cycle of the biological variables at Bermuda Station 'S' and Ocean Weather Station India. The model gives a good overall fit to the observations but does not faithfully reproduce the whole seasonal cycle. 57 refs., 25 figs., 5 tabs.
Application of satellite data in variational analysis for global cyclonic systems
NASA Technical Reports Server (NTRS)
Achtemeier, G. L.
1987-01-01
The research goal was a variational data assimilation method that incorporates as dynamical constraints, the primitive equations for a moist, convectively unstable atmosphere and the radiative transfer equation. Variables to be adjusted include the three-dimensional vector wind, height, temperature, and moisture from rawinsonde data, and cloud-wind vectors, moisture, and radiance from satellite data. This presents a formidable mathematical problem. In order to facilitate thorough analysis of each of the model components, four variational models that divide the problem naturally according to increasing complexity are defined. Each model is summarized.
Algorithms Bridging Quantum Computation and Chemistry
NASA Astrophysics Data System (ADS)
McClean, Jarrod Ryan
The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. 
In particular we use developments from the field of compressed sensing to find compact representations of ground states. As an application we study electronic systems and find solutions dramatically more compact than traditional configuration interaction expansions, offering hope to extend this methodology to challenging systems in chemical and material design.
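The variational principle underlying the quantum eigensolver described above can be illustrated entirely classically: minimize the energy expectation of a parametrized trial state for a toy two-level Hamiltonian. Both the Hamiltonian and the one-parameter real ansatz below are illustrative assumptions, not the algorithm run on the photonic device.

```python
import numpy as np

# Toy Hamiltonian H = Z + 0.5 X in the computational basis.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def expectation(theta):
    """Energy <psi(theta)|H|psi(theta)> of the trial state (cos t, sin t)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Variational step: sweep the parameter and keep the lowest energy found.
thetas = np.linspace(0.0, np.pi, 2001)
vqe_energy = min(expectation(t) for t in thetas)

# Exact ground-state energy for comparison: -sqrt(1 + 0.25).
exact = np.linalg.eigvalsh(H)[0]
```

By the variational principle the minimized expectation value upper-bounds the true ground-state energy, and because this real one-parameter ansatz can reach every real unit vector, the sweep lands essentially on the exact eigenvalue; on a quantum device the expectation is estimated by measurement and a classical optimizer replaces the sweep.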
Grzywacz, Joseph G.; Quandt, Sara A.; Chen, Haiying; Isom, Scott; Kiang, Lisa; Vallejos, Quirina; Arcury, Thomas A.
2010-01-01
Immigrant Latino farmworkers confront multiple challenges that threaten their mental health. Previous farmworker mental health research has relied primarily on cross-sectional study designs, leaving little opportunity to describe how farmworker mental health changes or to identify factors that may contribute to these changes. This study used prospective data obtained at monthly intervals across one four-month agricultural season from a large sample of Latino farmworkers in NC (N=288) to document variation in depressive symptoms across the agricultural season and delineate structural and situational factors associated with mental health trajectories across time. Depressive symptoms generally followed a U-shaped distribution across the season, but there was substantial variation in this pattern. Structural stressors like marital status and situational stressors like the pace of work, crowded living conditions, and concerns about documentation predicted depressive symptoms. The pattern of results suggests that strategies to address mental health problems in this vulnerable population will require coordinated action at the individual and social level. PMID:20658876
Using a general problem-solving strategy to promote transfer.
Youssef-Shalala, Amina; Ayres, Paul; Schubert, Carina; Sweller, John
2014-09-01
Cognitive load theory was used to hypothesize that a general problem-solving strategy based on a make-as-many-moves-as-possible heuristic could facilitate problem solutions for transfer problems. In four experiments, school students were required to learn about a topic through practice with a general problem-solving strategy, through a conventional problem-solving strategy, or by studying worked examples. In Experiments 1 and 2, using junior high school students learning geometry, low-knowledge students in the general problem-solving group scored significantly higher on near or far transfer tests than the conventional problem-solving group. In Experiment 3, an advantage for a general problem-solving group over a group presented worked examples was obtained on far transfer tests using the same curriculum materials, again presented to junior high school students. No differences between conditions were found in Experiments 1, 2, or 3 using test problems similar to the acquisition problems. Experiment 4 used senior high school students studying economics and found the general problem-solving group scored significantly higher than the conventional problem-solving group on both similar and transfer tests. It was concluded that the general problem-solving strategy was helpful for novices, but not for students who had access to domain-specific knowledge. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Sex Life Satisfaction in Sub-Saharan Africa: A Descriptive and Exploratory Analysis.
Cranney, Stephen
2017-10-01
Nearly all of the sex life satisfaction literature has dealt with developed-country settings, and nothing has been published on sex life satisfaction in sub-Saharan Africa. Not only is sub-Saharan Africa a substantively relevant area in its own right, but it also provides a useful point of comparison for patterns and relations found in developed-world contexts. A brief descriptive and exploratory study of sex life satisfaction in sub-Saharan Africa was conducted using the Gallup World Poll, a dataset with representative sex life satisfaction data for 31 countries and 25,483 cases. In general, there was little variation in weighted averages across countries, and most of the samples surveyed were satisfied with their sex lives, with the modal score being a perfect 10. Furthermore, what variation did exist could not be attributed to level of economic development or gender inequality. Within countries, sociodemographic associations generally comported with patterns found in other contexts: income, education, and being partnered were generally associated with sex life satisfaction, and for two of the four UN subregions (West Africa and East Africa), men were significantly more satisfied with their sex lives than women. The relationship with age was curvilinear, with the peak age of sexual satisfaction in the late 20s to early 30s depending on the geographic region. The age pattern was not due to health differences, but combining estimators after a seemingly unrelated regression suggests that 4-12% of the effect of income on sex life satisfaction was attributable to better health. In general, religiosity and perceived gravity of the HIV/AIDS problem in one's country were not significantly related to sexual satisfaction.
ERIC Educational Resources Information Center
Leithwood, Kenneth; Steinbach, Rosanne
Findings of a study that examined the relationship between variations in patterns of school leadership and group problem-solving process are presented in this paper. Interviews were conducted at the beginning and end of the school year with 12 principals in British Columbia who had implemented the Primary Program. The initiative was designed to…
The Bean model in superconductivity: Variational formulation and numerical solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prigozhin, L.
The Bean critical-state model describes the penetration of magnetic field into type-II superconductors. Mathematically, this is a free boundary problem and its solution is of interest in applied superconductivity. We derive a variational formulation for the Bean model and use it to solve two-dimensional and axially symmetric critical-state problems numerically. 25 refs., 9 figs., 1 tab.
Townend, Gillian S; Ehrhart, Friederike; van Kranen, Henk J; Wilkinson, Mark; Jacobsen, Annika; Roos, Marco; Willighagen, Egon L; van Enckevort, David; Evelo, Chris T; Curfs, Leopold M G
2018-04-27
Rett syndrome (RTT) is a monogenic rare disorder that causes severe neurological problems. In most cases, it results from a loss-of-function mutation in the gene encoding methyl-CpG-binding protein 2 (MECP2). Currently, about 900 unique MECP2 variations (benign and pathogenic) have been identified, and it is suspected that the different mutations contribute to different levels of disease severity. For researchers and clinicians, it is important that genotype-phenotype information is available to identify disease-causing mutations for diagnosis, to aid in clinical management of the disorder, and to provide counseling for parents. In this study, 13 genotype-phenotype databases were surveyed for their general functionality and availability of RTT-specific MECP2 variation data. For each database, we investigated findability and interoperability alongside practical user functionality, and type and amount of genetic and phenotype data. The main conclusions are that, as well as it being challenging to find these databases and the specific MECP2 variants held within, interoperability is as yet poorly developed and requires effort to search across databases. Nevertheless, we found several thousand online database entries for MECP2 variations and their associated phenotypes, diagnosis, or predicted variant effects, which is a good starting point for researchers and clinicians who want to provide, annotate, and use the data. © 2018 The Authors. Human Mutation published by Wiley Periodicals, Inc.
Isometric deformations of unstretchable material surfaces, a spatial variational treatment
NASA Astrophysics Data System (ADS)
Chen, Yi-Chao; Fosdick, Roger; Fried, Eliot
2018-07-01
The stored energy of an unstretchable material surface is assumed to depend only upon the curvature tensor. By control of its edge(s), the surface is deformed isometrically from its planar undistorted reference configuration into an equilibrium shape. That shape is to be determined from a suitably constrained variational problem as a state of relative minimal potential energy. We pose the variational problem as one of relative minimum potential energy in a spatial form, wherein the deformation of a flat, undistorted region D in E2 to its distorted form S in E3 is assumed specified. We then apply the principle that the first variation of the potential energy, expressed as a functional over S ∪ ∂S , must vanish for all admissible variations that correspond to isometric deformations from the distorted configuration S and that also contain the essence of flatness that characterizes the reference configuration D , but is not covered by the single statement that the variation of S correspond to an isometric deformation. We emphasize the commonly overlooked condition that the spatial expression of the variational problem requires an additional variational constraint of zero Gaussian curvature to ensure that variations from S that are isometric deformations also contain the notion of flatness. In this context, it is particularly revealing to observe that the two constraints produce distinct, but essential and complementary, conditions on the first variation of S. The resulting first variation integral condition, together with the constraints, may be applied, for example, to the case of a flat, undistorted, rectangular strip D that is deformed isometrically into a closed ring S by connecting its short edges and specifying that its long edges are free of loading and, therefore, subject to zero traction and couple traction. 
The elementary example of a closed ring without twist as a state of relative minimum potential energy is discussed in detail, and the bending of the strip by opposing specific bending moments on its short edges is treated as a particular case. Finally, the constrained variational problem, with the introduction of appropriate constraint reactions as Lagrangian multipliers to account for the requirements that the deformation from D to S is isometric and that D is flat, is formulated in the spatial form, and the associated Euler-Lagrange equations are derived. We then solve the Euler-Lagrange equations for two representative problems in which a planar undistorted rectangular material strip is isometrically deformed by applied edge tractions and couple tractions (i.e., specific edge moments) into (i) a bent and twisted circular cylindrical helical state, and (ii) a state conformal with the surface of a right circular conical form.
Evolution of triangular topographic facets along active normal faults
NASA Astrophysics Data System (ADS)
Balogun, A.; Dawers, N. H.; Gasparini, N. M.; Giachetta, E.
2011-12-01
Triangular shaped facets, which are generally formed by the erosion of fault-bounded mountain ranges, are arguably one of the most prominent geomorphic features on active normal fault scarps. Some previous studies of triangular facet development have suggested that facet size and slope exhibit a strong linear dependency on fault slip rate, thus linking their growth directly to the kinematics of fault initiation and linkage. Other studies, however, generally conclude that there is no variation in triangular facet geometry (height and slope) with fault slip rate. The landscape of the northeastern Basin and Range Province of the western United States provides an opportunity for addressing this problem, owing to the presence of well developed triangular facets along active normal faults, as well as spatial variations in fault scale and slip rate. In addition, the Holocene climatic record for this region suggests a dominant tectonic regime, as the faulted landscape shows little evidence of precipitation gradients associated with tectonic uplift. Using GIS-based analyses of USGS 30 m digital elevation models (DEMs) for east-central Idaho and southwestern Montana, we analyze triangular facet geometries along fault systems with varying numbers of constituent segments. This approach allows us to link these geometries with established patterns of along-strike slip-rate variation. For this study, we consider major watersheds to include only catchments with upstream and downstream boundaries extending from the drainage divide to the mapped fault trace, respectively. In order to maintain consistency in the selection criteria for the analyzed triangular facets, only facets bounded on opposite sides by major watersheds were considered. Our preliminary observations reflect a general along-strike increase in the surface area, average slope, and relief of triangular facets from the fault tips towards the center. We attribute anomalies in the along-strike geometric measurements of the triangular facets to possible locations of fault segment linkage associated with normal fault evolution.
NASA Astrophysics Data System (ADS)
Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan
2015-09-01
This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three-field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time-stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media.
It is designed for an incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.
Environmental and genetic determinants of innovativeness in a natural population of birds
Quinn, John L.; Cole, Ella F.; Reed, Thomas E.
2016-01-01
Much of the evidence for the idea that individuals differ in their propensity to innovate and solve new problems has come from studies on captive primates. Increasingly, behavioural ecologists are studying innovativeness in wild populations, and uncovering links with functional behaviour and fitness-related traits. The relative importance of genetic and environmental factors in driving this variation, however, remains unknown. Here, we present the results of the first large-scale study to examine a range of causal factors underlying innovative problem-solving performance (PSP) among 831 great tits (Parus major) temporarily taken into captivity. Analyses show that PSP in this population: (i) was linked to a variety of individual factors, including age, personality and natal origin (immigrant or local-born); (ii) was influenced by natal environment, because individuals had a lower PSP when born in poor-quality habitat, or where local population density was high, leading to cohort effects. Links with many of the individual and environmental factors were present only in some years. In addition, PSP (iii) had little or no measurable heritability, as estimated by a Bayesian animal model; and (iv) was not influenced by maternal effects. Despite previous reports of links between PSP and a range of functional traits in this population, the analyses here suggest that innovativeness had weak if any evolutionary potential. Instead most individual variation was caused by phenotypic plasticity driven by links with other behavioural traits and by environmentally mediated developmental stress. Heritability estimates are population, time and context specific, however, and more studies are needed to determine the generality of these effects. Our results shed light on the causes of innovativeness within populations, and add to the debate on the relative importance of genetic and environmental factors in driving phenotypic variation within populations. PMID:26926275
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, ×/, "times-divide", and notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
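The multiplicative summary the abstract advocates is straightforward to compute on the log scale. The sketch below is ours, not the authors' code; the function name and the simulated data are illustrative assumptions. It computes the geometric mean x̄* and multiplicative standard deviation s*, and checks that the interval x̄* ×/ s* (i.e. [x̄*/s*, x̄*·s*]) covers about 68% of log-normal data, analogous to mean ± SD for normal data.

```python
import numpy as np

def multiplicative_summary(x):
    # Geometric mean x* and multiplicative standard deviation s*
    # (Limpert & Stahel): computed on the log scale, then back-transformed.
    logs = np.log(x)
    gm = np.exp(logs.mean())            # geometric mean
    s_star = np.exp(logs.std(ddof=1))   # multiplicative SD, always >= 1
    return gm, s_star

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)
gm, s_star = multiplicative_summary(sample)
# Analogue of mean +/- SD: the interval [gm / s*, gm * s*] covers about
# 68% of log-normally distributed data.
lo, hi = gm / s_star, gm * s_star
coverage = np.mean((sample >= lo) & (sample <= hi))
```

Note that the interval is asymmetric around the geometric mean on the original scale, which is exactly the point of the "times-divide" notation.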
Configuration optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David
1991-01-01
The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
On a comparison of two schemes in sequential data assimilation
NASA Astrophysics Data System (ADS)
Grishina, Anastasiia A.; Penenko, Alexey V.
2017-11-01
This paper is focused on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires solving a sequence of connected inverse problems with different sets of observational data. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given on the basis of the non-linear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce the synthetic measurements and to solve the data assimilation problem.
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
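The two-step recipe described in this abstract can be illustrated in a few lines of numpy. The 1-D TV solver below is our own stand-in (projected gradient ascent on the dual of the unconstrained anisotropic TV denoising problem), not the authors' implementation; the sequential step is as described: solve the unconstrained problem first, then threshold (clip) to the uniform interval.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=3000, step=0.2):
    # Unconstrained problem: min_x 0.5*||x - y||^2 + lam * sum|x[i+1] - x[i]|,
    # solved by projected gradient ascent on the dual variable z (|z| <= lam).
    z = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y + np.diff(z, prepend=0.0, append=0.0)    # x = y - D^T z
        z = np.clip(z + step * np.diff(x), -lam, lam)  # project onto the box
    return y + np.diff(z, prepend=0.0, append=0.0)

rng = np.random.default_rng(1)
truth = np.repeat([0.2, 0.8, 0.4], 50)                 # piecewise-constant signal
noisy = truth + 0.1 * rng.standard_normal(truth.size)
denoised = tv_denoise_1d(noisy, lam=0.2)
# Sequential solution: clipping the unconstrained minimizer enforces uniform
# interval constraints, here x in [0, 1] (cf. non-negativity of attenuation
# coefficients in x-ray CT).
constrained = np.clip(denoised, 0.0, 1.0)
```

The step size 0.2 is below 1/||DDᵀ|| = 1/4 for the 1-D difference operator, which guarantees convergence of the dual iteration.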
Determining index of refraction from polarimetric hyperspectral radiance measurements
NASA Astrophysics Data System (ADS)
Martin, Jacob A.; Gross, Kevin C.
2015-09-01
Polarimetric hyperspectral imaging (P-HSI) combines two of the most common remote sensing modalities. This work leverages the combination of these techniques to improve material classification. Classifying and identifying materials requires parameters which are invariant to changing viewing conditions, and most often a material's reflectivity or emissivity is used. Measuring these most often requires assumptions be made about the material and atmospheric conditions. Combining both polarimetric and hyperspectral imaging, we propose a method to remotely estimate the index of refraction of a material. In general, this is an underdetermined problem because both the real and imaginary components of index of refraction are unknown at every spectral point. By modeling the spectral variation of the index of refraction using a few parameters, however, the problem can be made overdetermined. A number of different functions can be used to describe this spectral variation, and some are discussed here. Reducing the number of spectral parameters to fit allows us to add parameters which estimate atmospheric downwelling radiance and transmittance. Additionally, the object temperature is added as a fit parameter. The set of these parameters that best replicate the measured data is then found using a bounded Nelder-Mead simplex search algorithm. Other search algorithms are also examined and discussed. Results show that this technique has promise but also some limitations, which are the subject of ongoing work.
On Motion Planning and Control of Multi-Link Lightweight Robotic Manipulators
NASA Technical Reports Server (NTRS)
Cetinkunt, Sabri
1987-01-01
A general gross and fine motion planning and control strategy is needed for lightweight robotic manipulator applications such as painting, welding, material handling, surface finishing, and spacecraft servicing. The control problem of lightweight manipulators is to perform fast, accurate, and robust motions despite the payload variations, structural flexibility, and other environmental disturbances. Performance of the rigid manipulator model based computed torque and decoupled joint control methods are determined and simulated for the counterpart flexible manipulators. A counterpart flexible manipulator is defined as a manipulator which has structural flexibility, in addition to having the same inertial, geometric, and actuation properties of a given rigid manipulator. An adaptive model following control (AMFC) algorithm is developed to improve the performance in speed, accuracy, and robustness. It is found that the AMFC improves the speed performance by a factor of two over the conventional non-adaptive control methods for given accuracy requirements while proving to be more robust with respect to payload variations. Yet there are clear limitations on the performance of AMFC alone as well, which are imposed by the arm flexibility. In the search to further improve speed performance while providing a desired accuracy and robustness, a combined control strategy is developed. Furthermore, the problem of switching from one control structure to another during the motion and implementation aspects of combined control are discussed.
The inverse gravimetric problem in gravity modelling
NASA Technical Reports Server (NTRS)
Sanso, F.; Tscherning, C. C.
1989-01-01
One of the main purposes of geodesy is to determine the gravity field of the Earth in the space outside its physical surface. This purpose can be pursued without any particular knowledge of the internal density even if the exact shape of the physical surface of the Earth is not known, though this seems to entangle the two domains, as it did in the old Stokes theory before the appearance of Molodensky's approach. Nevertheless, even when large, dense and homogeneous data sets are available, it was always recognized that subtracting from the gravity field the effect of the outer layer of the masses (topographic effect) yields a much smoother field. This is obviously more important when the available data set is sparse, so that any smoothing of the gravity field helps in interpolating between the data without raising the modeling error; this approach is generally followed because it has become very cheap in terms of computing time since the appearance of spectral techniques. The mathematical description of the Inverse Gravimetric Problem (IGP) is dominated mainly by two principles, which in loose terms can be formulated as follows: the knowledge of the external gravity field determines mainly the lateral variations of the density; and the deeper the density anomaly giving rise to a gravity anomaly, the more improperly posed is the problem of recovering the former from the latter. The statistical relation between rho and n (and its inverse) is also investigated in its general form, proving that degree cross-covariances have to be introduced to describe the behavior of rho. The problem of the simultaneous estimate of a spherical anomalous potential and of the external, topographic masses is addressed, criticizing the choice of the mixed collocation approach.
NASA Astrophysics Data System (ADS)
van Wesemael, Bas; Nocita, Marco
2016-04-01
One of the problems for mapping soil organic carbon (SOC) at large scale with visible-near and shortwave infrared (VIS-NIR-SWIR) remote sensing techniques is the spatial variation of topsoil moisture when the images are collected. Soil moisture is certainly an aspect causing biased SOC estimations, due to the problems in discriminating reflectance differences caused by variations in organic matter, by soil moisture, or by their combination. In addition, the difficult validation procedures make the accurate estimation of soil moisture from optical airborne sensors a major challenge: only the first millimeters of the soil surface reflect the signal to the airborne sensor, and this layer shows large spatial, vertical and temporal variation in soil moisture, hence the difficulty of assessing the moisture of this thin layer at the moment of the flight. The creation of a soil moisture proxy directly retrievable from the hyperspectral data is a priority for improving the large-scale prediction of SOC. This paper aims to verify whether the application of the normalized soil moisture index (NSMI) to Airborne Prism Experiment (APEX) hyperspectral images can improve the prediction of SOC. The study area was located in the loam region of Wallonia, Belgium. About 40 samples were collected from bare fields covered by the flight lines and analyzed in the laboratory. Soil spectra corresponding to the sample locations were extracted from the images. Once the NSMI was calculated for the bare-field pixels, spatial patterns, presumably related to within-field soil moisture variations, were revealed. SOC prediction models, built using raw and pre-treated spectra, were generated from either the full dataset (general model) or from pixels belonging to one of the two classes of NSMI values (NSMI models). The best result, with a RMSE after validation of 1.24 g C kg-1, was achieved with a NSMI model, compared to the best general model, characterized by a RMSE of 2.11 g C kg-1. These results confirm the advantage of controlling the effect of soil moisture on the detection of SOC. The NSMI proved to be a flexible concept, owing to the possible use of different SWIR wavelengths, and easy to use, because measurements of soil moisture by other techniques are not needed. However, in the future, it will be important to assess the effectiveness of the NSMI for different soil types and other hyperspectral sensors.
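A hypothetical sketch of how such an index might be computed and used to stratify pixels before calibrating separate SOC models. The 1800/2119 nm band pair follows the published NSMI definition (Haubrock et al.), but the synthetic cube, band centers, and median-split threshold here are our illustrative assumptions, not APEX data or the authors' processing chain.

```python
import numpy as np

def nsmi(cube, wavelengths, band_a=1800.0, band_b=2119.0):
    # Normalized Soil Moisture Index per pixel: a normalized difference of
    # reflectance at two SWIR wavelengths (other SWIR pairs can be swapped in,
    # which is the flexibility the abstract mentions).
    ia = int(np.argmin(np.abs(wavelengths - band_a)))
    ib = int(np.argmin(np.abs(wavelengths - band_b)))
    ra, rb = cube[..., ia], cube[..., ib]
    return (ra - rb) / (ra + rb)

rng = np.random.default_rng(2)
wl = np.linspace(400.0, 2450.0, 288)                   # illustrative band centers
cube = rng.uniform(0.05, 0.6, size=(20, 20, wl.size))  # synthetic reflectance cube
index = nsmi(cube, wl)
# Stratify bare-soil pixels into two NSMI classes, then calibrate a separate
# SOC model per class (the "NSMI models" of the abstract).
wet = index > np.median(index)
```

Because both reflectances are positive, the index is bounded in (-1, 1) by construction.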
NASA Astrophysics Data System (ADS)
Jia, Ningning; Y Lam, Edmund
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
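The idea of treating focus as a stochastic variable and training the mask by stochastic gradient descent can be sketched with a toy forward model. Everything below is an illustrative assumption, not the authors' lithography simulator: the printed edge position is taken to drift quadratically with defocus, and a single scalar "mask bias" is trained so the expected (not nominal) printed edge hits the target.

```python
import numpy as np

rng = np.random.default_rng(3)

def printed_edge(mask_bias, focus):
    # Toy stand-in for the lithography forward model: the printed edge
    # position drifts quadratically with defocus (illustrative only).
    return mask_bias - 0.5 * focus**2

target = 1.0
m = 0.0      # "mask" parameter being trained
lr = 0.1
for _ in range(2000):
    f = rng.normal(0.0, 0.3)          # focus sampled as a stochastic variable
    err = printed_edge(m, f) - target
    m -= lr * 2.0 * err               # SGD step on the squared placement error
# m settles near the robust optimum target + 0.5 * E[f^2] = 1.045: SGD biases
# the mask so the *expected* printed edge, averaged over defocus, is on target.
```

This is the machine-learning framing of the abstract in miniature: each SGD step sees one random process condition, and the trained parameter minimizes the expected loss over the focus distribution rather than the loss at the nominal condition.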
An experimental trip to the Calculus of Variations
NASA Astrophysics Data System (ADS)
Arroyo, Josu
2008-04-01
This paper presents a collection of experiments in the Calculus of Variations. The implementation of the Gradient Descent algorithm built on cubic splines, acting as "numerically friendly" elementary functions, gives us ways to solve variational problems by constructing the solution. It adopts a pragmatic point of view: one gets solutions sometimes as fast as possible, sometimes as close as possible to the true solutions. The balance between speed and precision is not always easy to achieve. Starting from the most well-known, classic or historical formulation of a variational problem, section 2 briefly describes the bridge between the theoretical and computational formulations. The next sections show the results of several kinds of experiments, from the most basic, such as those about geodesics, to the most complex, such as those about vesicles.
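To make the gradient-descent construction concrete, here is a minimal variant of the geodesic experiment. We substitute a polyline with a discrete Dirichlet energy for the paper's cubic splines; this is a simplifying assumption of ours, justified because between fixed endpoints in the plane both functionals are minimized by the same curve, the straight line.

```python
import numpy as np

# Discretize a curve between fixed endpoints and minimize the discrete
# Dirichlet energy sum |p[k+1] - p[k]|^2 by gradient descent on the interior
# nodes; with fixed endpoints its minimizer is the straight line (the geodesic).
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
n = 19                                          # number of interior nodes
rng = np.random.default_rng(4)
pts = np.linspace(p0, p1, n + 2)
pts[1:-1] += 0.1 * rng.standard_normal((n, 2))  # distorted initial guess

lr = 0.1
for _ in range(5000):
    lap = 2 * pts[1:-1] - pts[:-2] - pts[2:]    # gradient of the energy / 2
    pts[1:-1] -= lr * 2 * lap                   # endpoints remain fixed
straightness = np.abs(pts[:, 0] - pts[:, 1]).max()  # 0 on the line y = x
```

The iteration is a damped Jacobi sweep of the discrete Laplace equation, so it is unconditionally convergent for this step size; the converged polyline lies on y = x with equally spaced nodes, and its length is √2.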
Divergent conservation laws in hyperbolic thermoelasticity
NASA Astrophysics Data System (ADS)
Murashkin, E. V.; Radayev, Y. N.
2018-05-01
The present study is devoted to the problem of formulation of conservation laws in divergent form for hyperbolic thermoelastic continua. The field formalism is applied to study the problem. A natural density of thermoelastic action and the corresponding variational least action principle are formulated. A special form of the first variation of the action is employed to obtain 4-covariant divergent conservation laws. Differential field equations and constitutive laws are derived from a special form of the first variation of the action integral. The objectivity of constitutive equations is provided by the rotationally invariant forms of the Lagrangian employed.
The Prisoner Problem--A Generalization.
ERIC Educational Resources Information Center
Gannon, Gerald E.; Martelli, Mario U.
2000-01-01
Presents a generalization to the classic prisoner problem, which is inherently interesting and has a solution within the reach of most high school mathematics students. Suggests the problem as a way to emphasize to students the final step in a problem-solver's tool kit, considering possible generalizations when a particular problem has been…
Mughal, Arsalan Manzoor; Shaikh, Sirajul Haque
2018-01-01
Objective: Collaborative Problem Solving Empirical Progressions from the Assessment and Teaching of 21st Century Skills (ATC21S) framework were used to determine the level of collaborative problem solving (CPS) skills in first-, second- and third-year MBBS students at Ziauddin College of Medicine during Problem-Based Learning (PBL) sessions. Variations based on gender and roles were studied. Methods: This is an analytical comparative cross-sectional study in which seven PBL groups were selected per year by non-probability convenience sampling. Data were collected using the Collaborative Problem Solving Five Strands Empirical Progressions by the primary investigator through observation of the students during PBL sessions. The duration of the study was six months. Results: We found that in our students, the development of social dimension skills is facilitated to a greater extent than the development of cognitive dimension skills through the process of PBL. These skills are generally better developed in the leader compared to the scribe and members of a group. They are also more developed in females compared to males. Modification in them is also observed as the years progress. Conclusion: Although PBL facilitates the progression of CPS skills, in our curriculum PBL sessions mainly focus on social skills development and place less emphasis on cognitive skill development. Thus, hybrid instructional strategies with components from TBL and mentorship are recommended for better development of CPS skills. PMID:29643904
NASA Astrophysics Data System (ADS)
Scholle, M.; Gaskell, P. H.; Marner, F.
2018-04-01
An exact first integral of the full, unsteady, incompressible Navier-Stokes equations is achieved in its most general form via the introduction of a tensor potential, and parallels are drawn with Maxwell's theory. Subsequent to this, gauge freedoms are explored, showing that, when used astutely, they lead to a favourable reduction in the complexity of the associated equation set and the number of unknowns, following which the inviscid limit case is discussed. Finally, it is shown how a change in gauge criteria enables a variational principle for steady viscous flow to be constructed having a self-adjoint form. Use of the new formulation is demonstrated, for different gauge variants of the first integral as the starting point, through the solution of a hierarchy of classical three-dimensional flow problems, two of which are tractable analytically, the third being solved numerically. In all cases the results obtained are found to be in excellent accord with corresponding solutions available in the open literature. Concurrently, the prescription of appropriate commonly occurring physical and necessary auxiliary boundary conditions, incorporating for completeness the derivation of a first integral of the dynamic boundary condition at a free surface, is established, together with how the general approach can be advantageously reformulated for application in solving unsteady flow problems with periodic boundaries.
Love, Alan C
2009-03-01
A central reason that undergirds the significance of evo-devo is the claim that development was left out of the Modern synthesis. This claim turns out to be quite complicated, both in terms of whether development was genuinely excluded and how to understand the different kinds of embryological research that might have contributed. The present paper reevaluates this central claim by focusing on the practice of model organism choice. Through a survey of examples utilized in the literature of the Modern synthesis, I identify a previously overlooked feature: exclusion of research on marine invertebrates. Understanding the import of this pattern requires interpreting it in terms of two epistemic values operating in biological research: theoretical generality and explanatory completeness. In tandem, these values clarify and enhance the significance of this exclusion. The absence of marine invertebrates implied both a lack of generality in the resulting theory and a lack of completeness with respect to particular evolutionary problems, such as evolvability and the origin of novelty. These problems were salient to embryological researchers aware of the variation and diversity of larval forms in marine invertebrates. In closing, I apply this analysis to model organism choice in evo-devo and discuss its relevance for an extended evolutionary synthesis.
Hierarchical spatial models of abundance and occurrence from imperfect survey data
Royle, J. Andrew; Kery, M.; Gautier, R.; Schmid, Hans
2007-01-01
Many estimation and inference problems arising from large-scale animal surveys are focused on developing an understanding of patterns in abundance or occurrence of a species based on spatially referenced count data. One fundamental challenge, then, is that it is generally not feasible to completely enumerate ('census') all individuals present in each sample unit. This observation bias may consist of several components, including spatial coverage bias (not all individuals in the population are exposed to sampling) and detection bias (exposed individuals may go undetected). Thus, observations are biased for the state variable (abundance, occupancy) that is the object of inference. Moreover, data are often sparse for most observation locations, requiring consideration of methods for spatially aggregating or otherwise combining sparse data among sample units. The development of methods that unify spatial statistical models with models accommodating non-detection is necessary to resolve important spatial inference problems based on animal survey data. In this paper, we develop a novel hierarchical spatial model for estimation of abundance and occurrence from survey data wherein detection is imperfect. Our application is focused on spatial inference problems in the Swiss Survey of Common Breeding Birds. The observation model for the survey data is specified conditional on the unknown quadrat population size, N(s). We augment the observation model with a spatial process model for N(s), describing the spatial variation in abundance of the species. The model includes explicit sources of variation in habitat structure (forest, elevation) and latent variation in the form of a correlated spatial process. This provides a model-based framework for combining the spatially referenced samples while at the same time yielding a unified treatment of estimation problems involving both abundance and occurrence.
We provide a Bayesian framework for analysis and prediction based on the integrated likelihood, and we use the model to obtain estimates of abundance and occurrence maps for the European Jay (Garrulus glandarius), a widespread, elusive, forest bird. The naive national abundance estimate ignoring imperfect detection and incomplete quadrat coverage was 77 766 territories. Accounting for imperfect detection added approximately 18 000 territories, and adjusting for coverage bias added another 131 000 territories to yield a fully corrected estimate of the national total of about 227 000 territories. This is approximately three times as high as previous estimates that assume every territory is detected in each quadrat.
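The effect of imperfect detection on a naive total can be illustrated with a toy simulation. This is a moment-based correction with a known detection probability, far simpler than the paper's hierarchical Bayesian spatial model; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a survey: true territory counts per quadrat, thinned by imperfect detection.
n_quadrats = 1000
lam = 8.0            # mean abundance per quadrat
p_detect = 0.6       # per-territory detection probability (assumed known here)

N_true = rng.poisson(lam, n_quadrats)
counts = rng.binomial(N_true, p_detect)      # what the survey actually records

naive = counts.sum()                          # ignores non-detection
corrected = counts.sum() / p_detect           # moment-based correction

print(naive < N_true.sum())                   # the naive total is biased low
print(abs(corrected - N_true.sum()) / N_true.sum() < 0.05)
```

In the real survey the detection probability is itself unknown and spatially varying, which is why it must be estimated jointly with abundance inside the hierarchical model rather than plugged in as above.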
NASA Astrophysics Data System (ADS)
Rolsma, Caleb
As a class of carbon-based nanomaterials, single-walled carbon nanotubes (SWNT) have many structural variations, called chiralities, each with different properties. Many potential applications of SWNT require the properties of a single chirality, but current synthesis methods can only produce single chiralities at prohibitive cost, or mixtures of chiralities at more affordable prices. Post-synthesis chirality separations provide a solution to this problem, and hydrogel separations are one such method. Despite much work in this field, the underlying interactions between SWNT and hydrogel are not fully understood. During separation, large quantities of SWNT are irretrievably lost due to irreversible adsorption to the hydrogel, posing a major problem for separation efficiency, while also offering an interesting scientific problem concerning the interaction of SWNT with hydrogels and surfactants. This thesis explores the problem of irreversible adsorption, offering a mechanistic explanation for the process and opening new ways to improve separation. In brief, this work concludes that adsorption follows three pathways, two of which lead to irreversible adsorption, both mediated by the presence of surfactants and limited by characteristics of the hydrogel surface. These findings stand to increase the general understanding of hydrogel SWNT separations, leading to improvements in separation and bringing the research field closer to the many potential applications of single-chirality SWNT.
Kline, Kimberly N
2006-01-01
There is a burgeoning interest in the health and illness content of popular media in the domains of advertising, journalism, and entertainment. This article reviews the past 10 years of this research, describing the relationship between the health topics addressed in the research, the shifting focus of concerns about the media, and, ultimately, the variation in problems for health promotion. I suggest that research attending to topics related to bodily health challenges focused on whether popular media accurately or appropriately represented health challenges. The implication was that there is some consensus about more right or wrong, complete or incomplete ways of representing an issue; the problem was that the media are generally wrong. Alternatively, research addressing topics related to sociocultural context issues focused on how certain interests are privileged in the media. The implication was that competing groups are making claims on the system, but the problem was that popular media marginalizes certain interests. In short, popular media is not likely to facilitate understandings helpful to individuals coping with health challenges and is likely to perpetuate social and political power differentials with regard to health-related issues. I conclude by offering some possibilities for future health media content research.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...
2017-09-28
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
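The role of the graph Laplacian in filling missing samples can be illustrated with a drastically simplified 1-D analogue: minimize the graph Dirichlet energy uᵀLu subject to the observed values (harmonic interpolation). This sketches the discretization idea only, not the patch-manifold regularizer itself; the signal, graph, and unit weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth signal with 70% of samples missing; endpoints kept observed for simplicity.
n = 200
x = np.linspace(0.0, 1.0, n)
signal = np.sin(2.0 * np.pi * x)
observed = rng.random(n) < 0.3
observed[0] = observed[-1] = True

# Unweighted path-graph Laplacian (neighbouring samples connected).
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

# Minimizing u^T L u with u fixed on the observed set gives linear equations
# for the missing entries:  L_mm u_m = -L_mo u_o.
miss = ~observed
u = signal * observed
u[miss] = np.linalg.solve(L[np.ix_(miss, miss)],
                          -L[np.ix_(miss, observed)] @ signal[observed])

print(np.max(np.abs(u - signal)) < 0.2)   # gaps filled reasonably close to the truth
```

The paper replaces this fixed path graph with a weighted graph built from patch similarities, so the smoothness prior adapts to the data's own low-dimensional structure.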
NASA Technical Reports Server (NTRS)
Meirovitch, L.; Bankovskis, J.
1969-01-01
The dynamic characteristics of a two-stage slender elastic body were investigated. The first stage, containing a solid-fuel rocket, possesses variable mass, while the second stage, envisioned as a flexible case, contains packaged instruments of constant mass. The mathematical formulation was in terms of vector equations of motion transformed by a variational principle into sets of scalar differential equations in terms of generalized coordinates. Solutions to the complete equations were obtained numerically by means of finite difference techniques. The problem was programmed in the FORTRAN 4 language and solved on an IBM 360/50 computer. Results for limited cases are presented, showing the nature of the solutions.
Demographic history, selection and functional diversity of the canine genome.
Ostrander, Elaine A; Wayne, Robert K; Freedman, Adam H; Davis, Brian W
2017-12-01
The domestic dog represents one of the most dramatic long-term evolutionary experiments undertaken by humans. From a large wolf-like progenitor, unparalleled diversity in phenotype and behaviour has developed in dogs, providing a model for understanding the developmental and genomic mechanisms of diversification. We discuss pattern and process in domestication, beginning with general findings about early domestication and problems in documenting selection at the genomic level. Furthermore, we summarize genotype-phenotype studies based first on single nucleotide polymorphism (SNP) genotyping and then with whole-genome data and show how an understanding of evolution informs topics as different as human history, adaptive and deleterious variation, morphological development, ageing, cancer and behaviour.
Perfect fluids in the Einstein-Cartan theory
NASA Technical Reports Server (NTRS)
Ray, J. R.; Smalley, L. J.
1982-01-01
It is pointed out that whereas most of the discussion of the Einstein-Cartan (EC) theory involves the relationship between gravitation and elementary particles, it is possible that the theory, if correct, may be important in certain extreme astrophysical and cosmological problems. The latter would include something like the collapse of a spinning star or an early universe with spin. A set of equations that describe a macroscopic perfect fluid in the EC theory is derived and examined. The equations are derived starting from the fundamental variational principle for a perfect fluid in general relativity. A brief review of the study by Ray (1972) is included, and the results for the EC theory are presented.
Automation of learning-set testing - The video-task paradigm
NASA Technical Reports Server (NTRS)
Washburn, David A.; Hopkins, William D.; Rumbaugh, Duane M.
1989-01-01
Researchers interested in studying discrimination learning in primates have typically utilized variations in the Wisconsin General Test Apparatus (WGTA). In the present experiment, a new testing apparatus for the study of primate learning is proposed. In the video-task paradigm, rhesus monkeys (Macaca mulatta) respond to computer-generated stimuli by manipulating a joystick. Using this apparatus, discrimination learning-set data for 2 monkeys were obtained. Performance on Trial 2 exceeded 80 percent within 200 discrimination learning problems. These data illustrate the utility of the video-task paradigm in comparative research. Additionally, the efficient learning and rich data that were characteristic of this study suggest several advantages of the present testing paradigm over traditional WGTA testing.
Problems of the theory of superconductivity which involve spatial inhomogeneity
NASA Astrophysics Data System (ADS)
Svidzinskii, A. V.
This book is concerned with questions which are related to equilibrium phenomena in superconductors, giving particular attention to effects determined by a spatial variation of the order parameter. The microscopic theory of superconductivity is developed on the basis of a model which takes into account the direct interaction between electrons. The theory of current relations in superconductors is discussed, taking into consideration the magnetic properties of superconductors in weak fields and the Meissner effect. Aspects regarding the general theory of tunneling are also explored, including the Josephson effect. An investigation is conducted of the theory of current conditions in areas in which the superconductor is in contact with normally conducting metal.
Bevaart, Floor; Mieloo, Cathelijne L; Jansen, Wilma; Raat, Hein; Donker, Marianne C H; Verhulst, Frank C; van Oort, Floor V A
2012-10-01
Problem perception and perceived need for professional care are important determinants that can contribute to ethnic differences in the use of mental health care. Therefore, we studied ethnic differences in problem perception and perceived need for professional care in the parents and teachers of 5- to 6-year-old children from the general population who were selected for having emotional and behavioural problems. A cross-sectional study with data on 10,951 children from grade two of elementary schools in the Rotterdam-Rijnmond area, The Netherlands. Parents and teachers completed the Strengths and Difficulties Questionnaire (SDQ) as well as questions on problem perception and perceived need for care. The SDQ was used to identify children with emotional and behavioural problems. We included Dutch, Surinamese, Antillean, Moroccan and Turkish children with high (>P90) SDQ scores (N = 1,215) who were not currently receiving professional care for their problems. Amongst children with high SDQ scores, problem perception was lower in non-Dutch parents than in Dutch parents (49% vs. 81%, p < 0.01). These lower rates of problem perception could not be explained by differences in socioeconomic position or severity of the problems. No ethnic differences were found in parental perceived need, nor in problem perception and perceived need reported by teachers. Higher levels of problem perception and perceived need were reported by teachers than by parents in all ethnic groups (problem perception: 87% vs. 63%; perceived need: 48% vs. 23%). Child health professionals should be aware of ethnic variations in problem perception, as low problem perception in parents of non-Dutch children may lead to miscommunication and unmet need for professional care for the child. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.
Variational data assimilation problem for the Baltic Sea thermodynamics
NASA Astrophysics Data System (ADS)
Zakharova, Natalia; Agoshkov, Valery; Parmuzin, Eugene
2015-04-01
The most versatile and promising technology for solving problems of monitoring and analysis of the natural environment is four-dimensional variational assimilation of observation data. In such problems, not only the development and justification of algorithms for the numerical solution of variational data assimilation problems but also the properties of the optimal solution play an important role. In this work, variational data assimilation problems in the Baltic Sea water area were formulated and studied. Numerical experiments on restoring the ocean heat flux and obtaining the solution of the system (temperature, salinity, velocity, and sea surface height) in a Baltic Sea primitive-equation hydrodynamics model with an assimilation procedure were carried out. In the calculations we used daily sea surface temperature observations from the Danish Meteorological Institute, prepared on the basis of measurements by radiometers (AVHRR, AATSR and AMSRE) and spectroradiometers (SEVIRI and MODIS). The spatial resolution of the model grid with respect to the horizontal variables was 0.0625 x 0.03125 degrees. The results of the numerical experiments are presented. This study was supported by the Russian Foundation for Basic Research (projects 13-01-00753 and 14-01-31195) and by project 14-11-00609 of the Russian Science Foundation. References: [1] E.I. Parmuzin, V.I. Agoshkov. Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling 27(1), 69-94 (2012). [2] N.B. Zakharova, V.I. Agoshkov, E.I. Parmuzin. The new method of ARGO buoys system observation data interpolation. Russ. J. Numer. Anal. Math. Modelling 28(1) (2013). [3] V.B. Zalesny, A.V. Gusev, S.Yu. Chernobay, R. Aps, R. Tamsalu, P. Kujala, J. Rytkönen. The Baltic Sea circulation modelling and assessment of marine pollution. Russ. J. Numer. Anal. Math. Modelling 29(2), 129-138 (2014).
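The core of a variational assimilation step can be sketched with a toy 3D-Var analogue: blend a model background field with sparse observations by minimizing a quadratic cost J(x) = (x − xb)ᵀB⁻¹(x − xb) + (Hx − y)ᵀR⁻¹(Hx − y). This uses diagonal error covariances and a pointwise observation operator, far simpler than the primitive-equation 4D-Var described above; all fields and variances are illustrative.

```python
import numpy as np

# Minimal 3D-Var-style sketch: blend a biased model background SST profile with
# sparse, accurate "satellite" observations.
n = 50
truth = 10.0 + np.sin(np.linspace(0.0, 2.0 * np.pi, n))   # "true" SST profile
xb = truth + 0.8                                           # biased model background

rng = np.random.default_rng(3)
obs_idx = np.arange(0, n, 5)                               # sparse observation points
H = np.eye(n)[obs_idx]                                     # observation operator
y = truth[obs_idx] + rng.normal(0.0, 0.1, obs_idx.size)    # noisy observations

B_inv = np.eye(n) / 0.8 ** 2                  # background-error covariance (diagonal)
R_inv = np.eye(obs_idx.size) / 0.1 ** 2       # observation-error covariance

# J is quadratic, so the minimizer solves the normal equations directly.
A = B_inv + H.T @ R_inv @ H
b = B_inv @ xb + H.T @ R_inv @ y
analysis = np.linalg.solve(A, b)

print(np.abs(analysis - truth).mean() < np.abs(xb - truth).mean())  # analysis beats background
```

With a diagonal B the correction stays local to the observed points; the off-diagonal structure of a realistic background covariance (or the model dynamics in 4D-Var) is what spreads the observational information across the whole field.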
Resolving Rapid Variation in Energy for Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, Terry Scot; Ahrens, Cory Douglas; Jonko, Alexandra
2016-08-23
Resolving the rapid variation in energy in neutron and thermal radiation transport is needed for the predictive simulation capability in high-energy density physics applications. Energy variation is difficult to resolve due to rapid variations in cross sections and opacities caused by quantized energy levels in the nuclei and electron clouds. In recent work, we have developed a new technique to simultaneously capture slow and rapid variations in the opacities and the solution using homogenization theory, which is similar to multiband (MB) and to the finite-element with discontiguous support (FEDS) method, but does not require closure information. We demonstrated the accuracy and efficiency of the method for a variety of problems. We are researching how to extend the method to problems with multiple materials and the same material but with different temperatures and densities. In this highlight, we briefly describe homogenization theory and some results.
Existence of evolutionary variational solutions via the calculus of variations
NASA Astrophysics Data System (ADS)
Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo
In this paper we introduce a purely variational approach to time-dependent problems, yielding the existence of global parabolic minimizers, that is, ∫_0^T ∫_Ω [u·∂_t φ + f(x,Du)] dx dt ≤ ∫_0^T ∫_Ω f(x,Du+Dφ) dx dt, whenever T > 0 and φ ∈ C_0^∞(Ω×(0,T), R^N). For the integrand f: Ω×R^{N×n} → [0,∞] we merely assume convexity with respect to the gradient variable and coercivity. These evolutionary variational solutions are obtained as limits of maps, depending on space and time, that minimize certain convex variational functionals. In the simplest situation, with some growth conditions on f, the method provides the existence of global weak solutions to Cauchy-Dirichlet problems for parabolic systems of the type ∂_t u − div D_ξ f(x,Du) = 0 in Ω×(0,∞).
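The link between the variational inequality and the weak form of the parabolic system can be sketched by a first-variation argument (stated formally here, under smoothness assumptions the paper itself does not need):

```latex
% Test the inequality with \varphi = \varepsilon\psi,
% \psi \in C_0^\infty(\Omega\times(0,T),\mathbb{R}^N), \varepsilon > 0:
\int_0^T\!\!\int_\Omega \bigl[\varepsilon\, u\cdot\partial_t\psi + f(x,Du)\bigr]\,dx\,dt
  \;\le\; \int_0^T\!\!\int_\Omega f(x,Du+\varepsilon D\psi)\,dx\,dt .
% Expand the right-hand side to first order in \varepsilon,
% divide by \varepsilon and let \varepsilon \downarrow 0:
\int_0^T\!\!\int_\Omega \bigl[D_\xi f(x,Du)\cdot D\psi - u\cdot\partial_t\psi\bigr]\,dx\,dt
  \;\ge\; 0 .
% Replacing \psi by -\psi forces equality, which is precisely the weak form of
\partial_t u - \operatorname{div} D_\xi f(x,Du) = 0 .
```

This is why parabolic minimizers deserve the name "solutions": whenever f is smooth enough to differentiate, the minimality inequality collapses to the usual weak formulation.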
Associations among types of impulsivity, substance use problems and neurexin-3 polymorphisms.
Stoltenberg, Scott F; Lehmann, Melissa K; Christ, Christa C; Hersrud, Samantha L; Davies, Gareth E
2011-12-15
Some of the genetic vulnerability for addiction may be mediated by impulsivity. This study investigated relationships among impulsivity, substance use problems and six neurexin-3 (NRXN3) polymorphisms. Neurexins (NRXNs) are presynaptic transmembrane proteins that play a role in the development and function of synapses. Impulsivity was assessed with the Barratt Impulsiveness Scale Version 11 (BIS-11), the Boredom Proneness Scale (BPS) and the TIME paradigm; alcohol problems with the Michigan Alcoholism Screening Test (MAST); drug problems with the Drug Abuse Screening Test (DAST-20); and regular tobacco use with a single question. Participants (n=439 Caucasians, 64.7% female) donated buccal cells for genotyping. Six NRXN3 polymorphisms were genotyped: rs983795, rs11624704, rs917906, rs1004212, rs10146997 and rs8019381. A dual luciferase assay was conducted to determine whether allelic variation at rs917906 regulated gene expression. In general, impulsivity was significantly higher in those who regularly used tobacco and/or had alcohol or drug problems. In men, there were modest associations between rs11624704 and attentional impulsivity (p=0.005) and between rs1004212 and alcohol problems (p=0.009). In women, there were weak associations between rs10146997 and TIME estimation (p=0.03); and between rs1004212 and drug problems (p=0.03). The dual luciferase assay indicated that C and T alleles of rs917906 did not differentially regulate gene expression in vitro. Associations between impulsivity, substance use problems and polymorphisms in NRXN3 may be gender specific. Impulsivity is associated with substance use problems and may provide a useful intermediate phenotype for addiction. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Dynamic contact problem with adhesion and damage between thermo-electro-elasto-viscoplastic bodies
NASA Astrophysics Data System (ADS)
Hadj ammar, Tedjani; Saïdi, Abdelkader; Azeb Ahmed, Abdelaziz
2017-05-01
We study a dynamic contact problem between two thermo-electro-elasto-viscoplastic bodies with damage and adhesion. The contact is frictionless and is modeled with a normal compliance condition. We derive a variational formulation for the model and prove an existence and uniqueness result for the weak solution. The proof is based on arguments of evolutionary variational inequalities, parabolic inequalities, differential equations, and a fixed point theorem.
ERIC Educational Resources Information Center
Orta Amaro, José Antonio; Sánchez Sánchez, Ernesto A.; Ramírez-Esperón, María Eugenia
2017-01-01
The aim of this investigation is to explore the preservice teachers' reasoning about variation (variability or spread) when they analyze data in situations that involve risk. In particular, in this communication the responses to two problems of a questionnaire administered to 96 preservice teachers are reported. The problems are of comparing…
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting masks (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; however, these are designed only for nominal imaging parameters, without sufficient attention to process variations due to aberrations, defocus and dose variation. The effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continues to shrink. On the other hand, lithography systems with larger NA (NA > 0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. To tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, with the effects of process variations explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. To improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
Polygenic Risk, Personality Dimensions, and Adolescent Alcohol Use Problems: A Longitudinal Study
Li, James J.; Savage, Jeanne E.; Kendler, Kenneth S.; Hickman, Matthew; Mahedy, Liam; Macleod, John; Kaprio, Jaakko; Rose, Richard J.; Dick, Danielle M.
2017-01-01
Objective: Alcohol use problems are common during adolescence and can predict serious negative outcomes in adulthood, including substance dependence and psychopathology. The current study examines the notion that alcohol use problems are driven by polygenic influences and that genetic influences may indirectly affect alcohol use problems through multiple pathways of risk, including variations in personality. Method: We used a genome-wide approach to examine associations between genetic risk for alcohol use problems, personality dimensions, and adolescent alcohol use problems in two separate longitudinal population-based samples, the Finnish Twin Cohort (FinnTwin12) and the Avon Longitudinal Study of Parents and Children (ALSPAC). Participants were 1,035 young adults from FinnTwin12 and 3,160 adolescents from ALSPAC. Polygenic risk scores (PRS) were calculated for ALSPAC using genome-wide association results (on alcohol dependence symptoms as defined by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) from FinnTwin12. A parallel multiple mediator model was tested to examine whether the association between PRS and alcohol use problems assessed at age 16 could be explained by variations in personality dimensions assessed at age 13, including sensation seeking and negative emotionality. Results: PRS were marginally predictive of age 16 alcohol use problems; this association was partially mediated by sensation seeking. Polygenic variation underlying risk for alcohol use problems may directly influence the effects of sensation seeking, which in turn influence the development of alcohol use problems in later adolescence. Conclusions: These findings contribute to the increasing evidence regarding the salience of sensation seeking during early adolescence as a potential constituent in the risk pathway underlying the development of alcohol use problems. PMID:28499112
Singular Optimal Controls of Rocket Motion (Survey)
NASA Astrophysics Data System (ADS)
Kiforenko, B. N.
2017-05-01
A survey of the current state of the art and a discussion of open problems in perfecting methods for investigating variational problems, with a focus on the mechanics of space flight, are presented. The main attention is paid to enhancing methods for solving variational problems of rocket motion in gravitational fields, including rocket motion in the atmosphere. These problems are directly connected with the perennial problem of practical astronautics: increasing the payload that carrier rockets can place into circumplanetary orbits. An analysis is given of modern approaches to controlling rocket and spacecraft motion on trajectories with singular arcs, which are optimal for the motion of a variable-mass body in a resisting medium. The presented results for some maneuvers can serve as an information source for decision making in the design of promising rocket and space technology.
NASA Technical Reports Server (NTRS)
Garcia, F., Jr.
1975-01-01
This paper presents a solution to a complex lifting-reentry three-degree-of-freedom problem by using the calculus of variations to minimize the integral of the sum of the aerodynamic loads and the heat rate input to the vehicle. The entry problem considered does not have state and/or control constraints along the trajectory. The calculus of variations method applied to this problem gives rise to a set of necessary conditions which are used to formulate a two-point boundary value (TPBV) problem. This TPBV problem is then numerically solved by an improved method of perturbation functions (IMPF) using several starting co-state vectors. These vectors were chosen with successively larger norms to show how the envelope of convergence is significantly increased using this method, and cases are presented to illustrate this.
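TPBV problems of this kind are typically attacked by shooting-type methods: guess the unknown initial co-state, integrate forward, and correct the guess from the terminal error. A minimal single-shooting sketch on a toy linear boundary value problem (u'' = u, u(0) = 0, u(1) = 1, standing in for the entry dynamics) illustrates the idea:

```python
# Generic single-shooting sketch for a two-point boundary value problem.
# The toy ODE u'' = u replaces the entry dynamics; the unknown initial
# slope plays the role of the unknown starting co-state.
import math

def integrate(s, n=200):
    """RK4 for u'' = u on [0, 1] with u(0) = 0, u'(0) = s; returns u(1)."""
    h = 1.0 / n
    u, v = 0.0, s
    f = lambda u, v: (v, u)                  # (u', v') with v = u'
    for _ in range(n):
        k1 = f(u, v)
        k2 = f(u + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(u + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(u + h * k3[0], v + h * k3[1])
        u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return u

def shoot(target=1.0):
    """Secant iteration on the terminal error u(1) - target."""
    s0, s1 = 0.0, 1.0
    for _ in range(20):
        e0, e1 = integrate(s0) - target, integrate(s1) - target
        if abs(e1) < 1e-12:
            break
        s0, s1 = s1, s1 - e1 * (s1 - s0) / (e1 - e0)
    return s1

s = shoot()   # exact slope for u'' = u is 1/sinh(1), about 0.8509
```

Because the toy problem is linear, the secant correction converges essentially in one step; for nonlinear entry dynamics one would iterate on a vector of co-states instead.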
NASA Astrophysics Data System (ADS)
Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.
2018-04-01
In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals, with a fixed-type Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.
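The operational-matrix machinery rests on the Riemann-Liouville rule I^alpha t^k = Gamma(k+1)/Gamma(k+1+alpha) t^(k+alpha), applied term by term to each basis polynomial. A hedged sketch (not the paper's algorithm) checks this monomial rule against a direct product-midpoint quadrature of the fractional integral:

```python
# Riemann-Liouville fractional integral by product-midpoint quadrature:
# the singular weight (t - tau)^(alpha - 1) is integrated exactly on
# each subinterval, with f sampled at the midpoint.
import math

def rl_integral(f, t, alpha, n=2000):
    """(I^alpha f)(t) = 1/Gamma(alpha) * integral of (t-tau)^(alpha-1) f(tau)."""
    h = t / n
    s = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        w = ((t - a) ** alpha - (t - b) ** alpha) / alpha  # exact weight
        s += w * f((a + b) / 2)
    return s / math.gamma(alpha)

alpha, k, t = 0.5, 2, 1.0
numeric = rl_integral(lambda x: x ** k, t, alpha)
analytic = math.gamma(k + 1) / math.gamma(k + 1 + alpha) * t ** (k + alpha)
print(numeric, analytic)
```

The FIOM of the paper assembles exactly these Gamma-ratio coefficients for each shifted Chebyshev basis polynomial, so the fractional integral becomes a matrix product.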
Pre-vector variational inequality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lai-Jiu
1994-12-31
Let X be a Hausdorff topological vector space and (Y, D) an ordered Hausdorff topological vector space ordered by a convex cone D. Let L(X, Y) be the space of all bounded linear operators, E ⊂ X a nonempty set, and T : E → L(X, Y), η : E × E → E functions. For x, y ∈ Y, we write x ≮ y if y − x ∉ int D, where int D is the interior of D. We consider the following two problems: find x ∈ E such that ⟨T(x), η(y, x)⟩ ≮ 0 for all y ∈ E; and find x ∈ E such that ⟨T(x), η(y, x)⟩ ≯ 0 for all y ∈ E and ⟨T(x), η(y, x)⟩ ∈ C_p^{w+} = {l ∈ L(X, Y) | ⟨l, η(x, 0)⟩ ≮ 0 for all x ∈ E}, where ⟨T(x), y⟩ denotes the linear operator T(x) evaluated at y, that is, T(x)(y). We call the first problem the pre-vector variational inequality problem (Pre-VVIP) and the second the pre-vector complementarity problem (Pre-VCP). If X = R^n, Y = R, D = R_+, and η(y, x) = y − x, our problem is the well-known variational inequality first studied by Hartman and Stampacchia. If Y = R, D = R_+, and η(y, x) = y − x, our problem is the variational problem in infinite-dimensional spaces. In this research, we impose different conditions on T(x), η, X, and ⟨T(x), η(y, x)⟩ and investigate existence theorems for these problems. As an application of one of our results, we establish an existence theorem for a weak minimum of the problem (P): V-min f(x) subject to x ∈ E, where f : X → Y is a Fréchet differentiable invex function.
Adsorption of asymmetric rigid rods or heteronuclear diatomic molecules on homogeneous surfaces
NASA Astrophysics Data System (ADS)
Engl, W.; Courbin, L.; Panizza, P.
2004-10-01
We treat the adsorption on homogeneous surfaces of asymmetric rigid rods (for instance, heteronuclear diatomic molecules). We show that the n→0 vector spin formalism is well suited to describe such a problem. We establish an isomorphism between the coupling constants of the magnetic Hamiltonian and the adsorption parameters of the rigid rods. By solving this Hamiltonian within a mean-field approximation, we obtain analytical expressions for the densities of the different rod configurations and for both isotherm and isobar adsorption curves. The most probable configurations of the molecules (normal or parallel to the surface), which depend on temperature and energy parameters, are summarized in a diagram. We show that the variation with temperature of Qv, the heat of adsorption at constant volume, is a direct signature of a configuration change of the adsorbed molecules. We also show that this formalism can be generalized to more complicated problems, such as the adsorption of mixtures of symmetric and asymmetric rigid rods, with or without interactions.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The nonlinear stability of compact schemes for shock calculations is investigated. In recent years, compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problems of spurious numerical oscillations and nonlinear instability. A framework for applying nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation stable (1D) or maximum norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to the SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, in preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
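As a hedged illustration of the stability notions involved (not the compact scheme itself): a monotone first-order upwind step is provably total variation diminishing, and the minmod limiter is the standard building block for nonlinear limiting of local means:

```python
# Total variation of a periodic discrete profile, a TVD upwind step,
# and the minmod limiter. Illustrative textbook pieces, not the
# compact scheme of the report.
import numpy as np

def total_variation(u):
    """Periodic total variation: sum of |jumps|, including the wrap-around."""
    return np.abs(u - np.roll(u, 1)).sum()

def minmod(a, b):
    """Minmod limiter: zero on sign disagreement, otherwise the smaller slope."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def upwind_step(u, cfl=0.5):
    """First-order upwind step for u_t + u_x = 0 on a periodic grid (monotone)."""
    return u - cfl * (u - np.roll(u, 1))

u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step profile (model shock)
tv0 = total_variation(u)
for _ in range(20):
    u = upwind_step(u)
print(total_variation(u) <= tv0 + 1e-12)      # → True: variation never grows
```

Monotone schemes like this are TVD but only first-order accurate; the report's contribution is obtaining a TVD-type bound while keeping the high accuracy of compact schemes away from shocks.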
Why "improved" water sources are not always safe.
Shaheed, Ameer; Orgill, Jennifer; Montgomery, Maggie A; Jeuland, Marc A; Brown, Joe
2014-04-01
Existing and proposed metrics for household drinking-water services are intended to measure the availability, safety and accessibility of water sources. However, these attributes can be highly variable over time and space and this variation complicates the task of creating and implementing simple and scalable metrics. In this paper, we highlight those factors - especially those that relate to so-called improved water sources - that contribute to variability in water safety but may not be generally recognized as important by non-experts. Problems in the provision of water in adequate quantities and of adequate quality - interrelated problems that are often influenced by human behaviour - may contribute to an increased risk of poor health. Such risk may be masked by global water metrics that indicate that we are on the way to meeting the world's drinking-water needs. Given the complexity of the topic and current knowledge gaps, international metrics for access to drinking water should be interpreted with great caution. We need further targeted research on the health impacts associated with improvements in drinking-water supplies.
McKay, J R; Weiss, R V
2001-04-01
This article is an initial report from a review of alcohol and drug treatment studies with follow-ups of 2 years or more. The goals of the review are to examine the stability of substance use outcomes and the factors that moderate or mediate these outcomes. Results from 12 studies that generated multiple research reports are presented, and methodological problems encountered in the review are discussed. Substance use outcomes at the group level were generally stable, although moderate within-subject variation in substance use status over time was observed. Of factors assessed at baseline, psychiatric severity was a significant predictor of outcome in the highest percentage of reports, although the nature of the relationship varied. Stronger motivation and coping at baseline also consistently predicted better drinking outcomes. Better progress while in treatment, and the performance of pro-recovery behaviors and low problem severity in associated areas following treatment, consistently predicted better substance use outcomes.
Non-Boussinesq effects on vorticity and kinetic energy production
NASA Astrophysics Data System (ADS)
Ravichandran, S.; Dixit, Harish; Govindarajan, Rama
2015-11-01
The Boussinesq approximation, commonly employed in weakly compressible or incompressible flows, neglects changes in inertia due to changes in density. However, the non-Boussinesq terms can lead to a kind of centrifugal instability for small but sharp density variations, and therefore cannot be neglected under such circumstances (see, e.g.,
Crack problem in superconducting cylinder with exponential distribution of critical-current density
NASA Astrophysics Data System (ADS)
Zhao, Yufeng; Xu, Chi; Shi, Liang
2018-04-01
The general problem of a center crack in a long cylindrical superconductor with an inhomogeneous critical-current distribution is studied based on the extended Bean model for the zero-field-cooling (ZFC) and field-cooling (FC) magnetization processes, in which the inhomogeneity parameter η is introduced to characterize the critical-current density distribution in the inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variation of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, it is worth pointing out that the nonlinear field distribution is unique to the Bean model, as seen by comparing the curve shapes of the magnetization loops with homogeneous and inhomogeneous critical-current distributions.
Lewer, Dan; O'Reilly, Claire; Mojtabai, Ramin; Evans-Lacko, Sara
2015-09-01
Prescribing of antidepressants varies widely between European countries despite no evidence of difference in the prevalence of affective disorders. To investigate associations between the use of antidepressants, country-level spending on healthcare and country-level attitudes towards mental health problems. We used Eurobarometer 2010, a large general population survey from 27 European countries, to measure antidepressant use and regularity of use. We then analysed the associations with country-level spending on healthcare and country-level attitudes towards mental health problems. Higher country spending on healthcare was strongly associated with regular use of antidepressants. Beliefs that mentally ill people are 'dangerous' were associated with higher use, and beliefs that they 'never recover' or 'have themselves to blame' were associated with lower and less regular use of antidepressants. Contextual factors, such as healthcare spending and public attitudes towards mental illness, may partly explain variations in antidepressant use and regular use of these medications. © The Royal College of Psychiatrists 2015.
The Value of Information for Populations in Varying Environments
NASA Astrophysics Data System (ADS)
Rivoire, Olivier; Leibler, Stanislas
2011-04-01
The notion of information pervades informal descriptions of biological systems, but formal treatments face the problem of defining a quantitative measure of information rooted in a concept of fitness, which is itself an elusive notion. Here, we present a model of population dynamics where this problem is amenable to a mathematical analysis. In the limit where any information about future environmental variations is common to the members of the population, our model is equivalent to known models of financial investment. In this case, the population can be interpreted as a portfolio of financial assets and previous analyses have shown that a key quantity of Shannon's communication theory, the mutual information, sets a fundamental limit on the value of information. We show that this bound can be violated when accounting for features that are irrelevant in finance but inherent to biological systems, such as the stochasticity present at the individual level. This leads us to generalize the measures of uncertainty and information usually encountered in information theory.
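The financial analogy can be made concrete with Kelly's classic "horse race": under proportional betting, the growth rate gained from a perfectly informative environmental cue equals the entropy of the environment, which is exactly the mutual information in that case. The numbers below are illustrative:

```python
# Toy "horse race" portfolio: a population allocates fractions b over
# environments with probabilities p and payoffs o. Long-term growth is
# sum_s p_s * log(b_s * o_s); Kelly's optimum without a cue is b = p.
import numpy as np

p = np.array([0.7, 0.3])            # environment probabilities
o = np.array([2.0, 2.0])            # even-odds payoff per unit bet

def growth(b):
    return np.sum(p * np.log(b * o))

g_blind = growth(p)                  # Kelly-optimal proportional betting
g_informed = np.sum(p * np.log(o))   # perfect cue: bet all on the outcome
gain = g_informed - g_blind
entropy = -np.sum(p * np.log(p))     # = I(env; cue) for a perfect cue
print(gain, entropy)                 # the two values coincide
```

The paper's point is that this equality, gain = mutual information, is a ceiling only when the cue is shared by the whole population; individual-level stochasticity can break the bound.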
Evaluating the care of general medicine inpatients: how good is implicit review?
Hayward, R A; McMahon, L F; Bernard, A M
1993-04-01
Peer review often consists of implicit evaluations by physician reviewers of the quality and appropriateness of care. This study evaluated the ability of implicit review to measure reliably various aspects of care on a general medicine inpatient service. Retrospective review of patients' charts, using structured implicit review, of a stratified random sample of consecutive admissions to a general medicine ward. A university teaching hospital. Twelve internists were trained in structured implicit review and reviewed 675 patient admissions (with 20% duplicate reviews for a total of 846 reviews). Although inter-rater reliabilities for assessments of overall quality of care and preventable deaths (kappa = 0.5) were adequate for aggregate comparisons (for example, comparing mean ratings on two hospital wards), they were inadequate for reliable evaluations of single patients using one or two reviewers. Reviewers' agreement about most focused quality problems (for example, timeliness of diagnostic evaluation and clinical readiness at time of discharge) and about the appropriateness of hospital ancillary resource use was poor (kappa < or = 0.2). For most focused implicit measures, bias due to specific reviewers who were systematically more harsh or lenient (particularly for evaluation of resource-use appropriateness) accounted for much of the variation in reviewers' assessments, but this was not a substantial problem for the measure of overall quality. Reviewers rarely reported being unable to evaluate the quality of care because of deficiencies in documentation in the patient's chart. For assessment of overall quality and preventable deaths of general medicine inpatients, implicit review by peers had moderate degrees of reliability, but for most other specific aspects of care, physician reviewers could not agree. 
Implicit review was particularly unreliable at evaluating the appropriateness of hospital resource use and the patient's readiness for discharge, two areas where this type of review is often used.
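The kappa values quoted above are Cohen's chance-corrected agreement statistic, computable from two reviewers' ratings as follows (the ratings here are made up for illustration):

```python
# Cohen's kappa: observed agreement corrected for the agreement
# expected by chance from each reviewer's marginal rating frequencies.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    labels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum(c1[k] * c2[k] for k in labels) / n ** 2      # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

r1 = ["good", "good", "poor", "good", "poor", "good"]
r2 = ["good", "poor", "poor", "good", "good", "good"]
print(round(cohens_kappa(r1, r2), 3))   # → 0.25
```

On the study's scale, kappa near 0.5 (overall quality) is moderate, while kappa at or below 0.2 (focused problems, resource-use appropriateness) is little better than chance, which is why single-patient judgments from one or two reviewers were unreliable.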
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Detecting spatio-temporal modes in multivariate data by entropy field decomposition
NASA Astrophysics Data System (ADS)
Frank, Lawrence R.; Galinsky, Vitaly L.
2016-09-01
A new data analysis method that addresses the general problem of detecting spatio-temporal variations in multivariate data is presented. The method utilizes two recent and complementary general approaches to data analysis, information field theory (IFT) and entropy spectrum pathways (ESP). Both methods reformulate and incorporate Bayesian theory, and thus use prior information to uncover the underlying structure of the unknown signal. The unification of ESP and IFT creates an approach that is non-Gaussian and nonlinear by construction and is found to produce unique spatio-temporal modes of signal behavior that can be ranked according to their significance, from which space-time trajectories of parameter variations can be constructed and quantified. Two brief examples of real-world applications of the theory to the analysis of data of completely different, unrelated natures, lacking any underlying similarity, are also presented. The first example provides an analysis of resting-state functional magnetic resonance imaging data that allowed us to create an efficient and accurate computational method for assessing and categorizing brain activity. The second example demonstrates the potential of the method in application to the analysis of a strong atmospheric storm circulation system during the complicated stage of tornado development and formation, using data recorded by a mobile Doppler radar. A reference implementation of the method will be made available as part of the QUEST toolkit that is currently under development at the Center for Scientific Computation in Imaging.
Anatomy and Aesthetics of the Labia Minora: The Ideal Vulva?
Clerico, C; Lari, A; Mojallal, A; Boucher, F
2017-06-01
Female genital cosmetic surgery is becoming more and more widespread both in the field of plastic and gynaecological surgery. The increased demand for vulvar surgery is spurred by the belief that the vulva is abnormal in appearance. What is normal in terms of labial anatomy? Labia minora enlargement or hypertrophy remains a clinical diagnosis which is poorly defined as it could be considered a variation of the normal anatomy. Enlarged labia minora can cause functional, aesthetic and psychosocial problems. In reality, given the wide variety of vulvar morphology among people, it is a very subjective issue to define the "normal" vulva. The spread of nudity in the general media plays a major role in creating an artificial image and standards with regard to the ideal form. Physicians should be aware that the patient's self-perception of the normal or ideal vulva is highly influenced by the arguably distorted image related to our socio-psychological environment, as presented to us by the general media and internet. As physicians, we have to educate our patients on the variation of vulvar anatomy and the potential risks of these surgeries. Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these evidence-based medicine ratings, please refer to Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Martin-Storey, Alexa
2015-01-01
Dating violence during adolescence negatively influences concurrent psychosocial functioning, and has been linked with an increased likelihood of later intimate partner violence. Identifying who is most vulnerable for this negative outcome can inform the development of intervention practices addressing this problem. The two goals of this study were to assess variations in the prevalence of dating violence across different measures of sexual minority status (e.g., sexual minority identity or same-sex sexual behavior), and to assess whether this association was mediated by bullying, the number of sexual partners, binge drinking or aggressive behaviors. These goals were assessed by employing the Massachusetts Youth Risk Behavior Survey (N = 12,984), a regionally representative sample of youth ages 14-18. In this sample, a total of 540 girls and 323 boys reported a non-heterosexual identity, and 429 girls and 230 boys reported having had one or more same-sex sexual partners. The results generally supported a higher prevalence of dating violence among sexual minority youth. This vulnerability varied considerably across gender, sexual minority identity and the gender of sexual partners, but generally persisted when accounting for the mediating variables. The findings support investigating dating violence as a mechanism in the disparities between sexual minority and heterosexual youth, and the importance of addressing sexual minority youth specifically in interventions targeting dating violence.
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacked section shows significant improvements with static corrections from the MTV traveltime tomography.
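The total variation subproblem mentioned above is the part handled by split-Bregman iteration. A minimal 1D sketch (a generic textbook TV denoiser, not the authors' 3D tomography code; parameter values are illustrative) shows the alternation between a quadratic solve, a soft-threshold step, and a Bregman update:

```python
# Split-Bregman for min_u  mu/2 ||u - f||^2 + ||D u||_1  in 1D.
# The L1 term preserves sharp jumps ("velocity contrasts") that a
# Tikhonov (L2) penalty would blur.
import numpy as np

def shrink(x, t):
    """Soft-thresholding: closed-form solution of the L1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_1d(f, mu=10.0, lam=1.0, iters=200):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)             # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D         # normal equations for u-step
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic solve
        d = shrink(D @ u + b, 1.0 / lam)       # TV (L1) step
        b = b + D @ u - d                      # Bregman update
    return u

rng = np.random.default_rng(1)
clean = np.where(np.arange(80) < 40, 0.0, 1.0)   # sharp step
noisy = clean + 0.1 * rng.normal(size=80)
denoised = tv_denoise_1d(noisy)
```

In the paper's setting the quadratic step is instead the full Tikhonov-regularized tomography solve (handled by conjugate gradients), but the shrink-and-Bregman alternation on the TV term is the same mechanism.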
Application of perturbation theory to lattice calculations based on method of cyclic characteristics
NASA Astrophysics Data System (ADS)
Assawaroongruengchot, Monchai
Perturbation theory is a technique used for the estimation of changes in performance functionals, such as the linear reaction rate ratio and the eigenvalue, affected by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnecting neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into the integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function by the CP method is mathematically equivalent to the adjoint function by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. 
A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that they can be used for the multigroup rebalance technique. Next we consider the application of the perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have decided to select this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustments of CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using the perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC). To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. 
Sensitivity analyses of CVR and eigenvalue are included in the study. In addition the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR-BOC, CVR-EOC and keff-EOC adjustment of a CANDU lattice of which the burnup period is extended from 300 to 450 FPDs. The cases with the central pin containing either Dysprosium or Gadolinium in the natural Uranium are considered in our study. (Abstract shortened by UMI.)
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
Honnavar, Gajanan V; Ramesh, K P; Bhat, S V
2014-01-23
The mixed alkali metal effect is a long-standing problem in glasses. Electron paramagnetic resonance (EPR) is used by several researchers to study the mixed alkali metal effect, but a detailed analysis of the nearest neighbor environment of the glass former using spin-Hamiltonian parameters was elusive. In this study we have prepared a series of vanadate glasses having general formula (mol %) 40 V2O5-30BaF2-(30 - x)LiF-xRbF with x = 5, 10, 15, 20, 25, and 30. Spin-Hamiltonian parameters of V(4+) ions were extracted by simulating and fitting to the experimental spectra using EasySpin. From the analysis of these parameters it is observed that the replacement of lithium ions by rubidium ions follows a "preferential substitution model". Using this proposed model, we were able to account for the observed variation in the ratio of the g parameter, which goes through a maximum. This reflects an asymmetric to symmetric changeover of the alkali metal ion environment around the vanadium site. Further, this model also accounts for the variation in oxidation state of vanadium ion, which was confirmed from the variation in signal intensity of EPR spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Coq, Johanne; Ghosh, Partho
2012-06-19
Anticipatory ligand binding through massive protein sequence variation is rare in biological systems, having been observed only in the vertebrate adaptive immune response and in a phage diversity-generating retroelement (DGR). Earlier work has demonstrated that the prototypical DGR variable protein, major tropism determinant (Mtd), meets the demands of anticipatory ligand binding by novel means through the C-type lectin (CLec) fold. However, because of the low sequence identity among DGR variable proteins, it has remained unclear whether the CLec fold is a general solution for DGRs. We have addressed this problem by determining the structure of a second DGR variable protein, TvpA, from the pathogenic oral spirochete Treponema denticola. Despite its weak sequence identity to Mtd (∼16%), TvpA was found to also have a CLec fold, with predicted variable residues exposed in a ligand-binding site. However, this site in TvpA was markedly more variable than the one in Mtd, reflecting the unprecedented approximate 10^20 potential variability of TvpA. In addition, similarity between TvpA and Mtd with formylglycine-generating enzymes was detected. These results provide strong evidence for the conservation of the formylglycine-generating enzyme-type CLec fold among DGRs as a means of accommodating massive sequence variation.
Effect of Teosinte Cytoplasmic Genomes on Maize Phenotype
Allen, James O.
2005-01-01
Determining the contribution of organelle genes to plant phenotype is hampered by several factors, including the paucity of variation in the plastid and mitochondrial genomes. To circumvent this problem, evolutionary divergence between maize (Zea mays ssp. mays) and the teosintes, its closest relatives, was utilized as a source of cytoplasmic genetic variation. Maize lines in which the maize organelle genomes were replaced through serial backcrossing by those representing the entire genus, yielding alloplasmic sublines, or cytolines were created. To avoid the confounding effects of segregating nuclear alleles, an inbred maize line was utilized. Cytolines with Z. mays teosinte cytoplasms were generally indistinguishable from maize. However, cytolines with cytoplasm from the more distantly related Z. luxurians, Z. diploperennis, or Z. perennis exhibited a plethora of differences in growth, development, morphology, and function. Significant differences were observed for 56 of the 58 characters studied. Each cytoline was significantly different from the inbred line for most characters. For a given character, variation was often greater among cytolines having cytoplasms from the same species than among those from different species. The characters differed largely independently of each other. These results suggest that the cytoplasm contributes significantly to a large proportion of plant traits and that many of the organelle genes are phenotypically important. PMID:15731518
Reduction technique for tire contact problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1995-01-01
A reduction technique and a computational procedure are presented for predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of the reduction technique, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface.
Using Problem Fields as a Method of Change.
ERIC Educational Resources Information Center
Pehkonen, Erkki
1992-01-01
Discusses the rationale and use of problem fields which are sets of related and/or connected open-ended problem-solving tasks within mathematics instruction. Polygons with matchsticks and the number triangle are two examples of problem fields presented along with variations in conditions that promote other matchstick puzzles. (11 references) (JJK)
Analytical Solutions for Rumor Spreading Dynamical Model in a Social Network
NASA Astrophysics Data System (ADS)
Fallahpour, R.; Chakouvari, S.; Askari, H.
2015-03-01
In this paper, the Laplace Adomian decomposition method (LADM) is utilized to evaluate a rumor-spreading model. First, a succinct review is given of the use of analytical methods such as the Adomian decomposition method, the variational iteration method, and the homotopy analysis method for epidemic models and biomathematics. A rumor-spreading model incorporating a forgetting mechanism is then considered, and LADM is applied to solve it. By means of this method, a general solution is obtained for the problem that can be readily employed to assess the rumor model without writing any computer program. In addition, the results obtained are discussed for different cases and parameters. Furthermore, it is shown that the method is straightforward and fruitful for analyzing equations with complicated terms, such as the rumor model. Comparison with numerical methods reveals that LADM is powerful and accurate for eliciting solutions of this model. It is concluded that the method is well suited to this problem and can provide researchers a very powerful vehicle for scrutinizing rumor models in diverse kinds of social networks such as Facebook, YouTube, Flickr, LinkedIn, and Twitter.
Modeling aspen and red pine shoot growth to daily weather variations.
Donald A. Perala
1983-01-01
Quantifies daily shoot growth of quaking aspen and red pine in response to daily variation in air temperature, soil moisture, solar radiation, evapotranspiration, and inherent seasonal plant growth rhythm. Discusses potential application of shoot growth equations to silvicultural problems related to microclimatic variation. Identifies limitations and areas for...
USDA-ARS?s Scientific Manuscript database
Different individuals of the same species are generally thought to have very similar genomes. However, there is growing evidence that structural variation in the form of copy number variation (CNV) and presence-absence variation (PAV) can lead to variation in the genome content of individuals within...
Ball, Gregory F; Balthazart, Jacques
2008-05-12
Investigations of the cellular and molecular mechanisms of physiology and behaviour have generally avoided attempts to explain individual differences. The goal has rather been to discover general processes. However, understanding the causes of individual variation in many phenomena of interest to avian eco-physiologists will require a consideration of such mechanisms. For example, in birds, changes in plasma concentrations of steroid hormones are important in the activation of social behaviours related to reproduction and aggression. Attempts to explain individual variation in these behaviours as a function of variation in plasma hormone concentrations have generally failed. Cellular variables related to the effectiveness of steroid hormone have been useful in some cases. Steroid hormone target sensitivity can be affected by variables such as metabolizing enzyme activity, hormone receptor expression as well as receptor cofactor expression. At present, no general theory has emerged that might provide a clear guidance when trying to explain individual variability in birds or in any other group of vertebrates. One strategy is to learn from studies of large units of intraspecific variation such as population or sex differences to provide ideas about variables that might be important in explaining individual variation. This approach along with the use of newly developed molecular genetic tools represents a promising avenue for avian eco-physiologists to pursue.
A variational technique for smoothing flight-test and accident data
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1980-01-01
The problem of determining aircraft motions along a trajectory is solved using a variational algorithm that generates unmeasured states and forcing functions, and estimates instrument bias and scale-factor errors. The problem is formulated as a nonlinear fixed-interval smoothing problem, and is solved as a sequence of linear two-point boundary value problems, using a sweep method. The algorithm has been implemented for use in flight-test and accident analysis. Aircraft motions are assumed to be governed by a six-degree-of-freedom kinematic model; forcing functions consist of body accelerations and winds, and the measurement model includes aerodynamic and radar data. Examples of the determination of aircraft motions from typical flight-test and accident data are presented.
Pansharpening via coupled triple factorization dictionary learning
Skau, Erik; Wohlberg, Brendt; Krim, Hamid; ...
2016-03-01
Data fusion is the operation of integrating data from different modalities to construct a single consistent representation. This paper proposes variations of coupled dictionary learning through an additional factorization. One variation of this model is applicable to the pansharpening data fusion problem. Real-world pansharpening data were used to train and test our proposed formulation. The results demonstrate that the data fusion model can successfully be applied to the pansharpening problem.
Variations in Student Mental Health and Treatment Utilization Across US Colleges and Universities.
Ketchen Lipson, Sarah; Gaddis, S Michael; Heinze, Justin; Beck, Kathryn; Eisenberg, Daniel
2015-01-01
On US college campuses, mental health problems are highly prevalent, appear to be increasing, and are often untreated. Concerns about student mental health are well documented, but little is known about potential variations across the diversity of institutions of higher education. Participants were 43,210 undergraduates at 72 campuses that participated in the Healthy Minds Study from 2007 to 2013. Multivariable logistic regressions focus on associations between institutional characteristics and student mental health and treatment utilization. The following institutional characteristics are associated with worse mental health: doctoral-granting, public, large enrollment, nonresidential, less competitive, and lower graduation rates. Among students with apparent mental health problems, treatment utilization is higher at doctoral-granting institutions, baccalaureate colleges, institutions with small enrollments, and schools with strong residential systems. Although high rates of mental health problems and low treatment utilization are major concerns at all types of institutions of higher education, substantial variation occurs across campuses.
A variationally coupled FE-BE method for elasticity and fracture mechanics
NASA Technical Reports Server (NTRS)
Lu, Y. Y.; Belytschko, T.; Liu, W. K.
1991-01-01
A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC), along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code with only 128 bits.
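The covariance-then-binarize pipeline described in this abstract can be sketched as follows; note that the random projections below stand in for the paper's max-margin-learned bits, and the 8-dimensional frame features are invented purely for illustration:

```python
import numpy as np

def track_covariance(frames):
    """Sample covariance of a face track (frames: n_frames x dim),
    as in the paper's first modeling step."""
    X = np.asarray(frames, dtype=float)
    return np.cov(X, rowvar=False)  # dim x dim

def binary_code(cov, projections):
    """Toy stand-in for CVC's learned bits: sign of linear functionals
    of the (vectorized) covariance. The paper learns each bit in a
    max-margin framework; random projections are used here only to
    illustrate the binarization idea."""
    v = cov.ravel()
    return (projections @ v > 0).astype(np.uint8)

rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 8))        # 20 frames of 8-dim features
cov = track_covariance(frames)
P = rng.normal(size=(128, cov.size))     # a 128-bit code, as in the paper
code = binary_code(cov, P)
```

Retrieval then amounts to comparing such binary codes by Hamming distance, which is what makes the representation cheap in time and space.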
Information processing of motion in facial expression and the geometry of dynamical systems
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.
2005-01-01
An interesting problem in analysis of video data concerns design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a sample of population could provide effective algorithms for extraction of such features. In cases where frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.
Adaptive regularization of the NL-means: application to image and video denoising.
Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François
2014-08-01
Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces significantly the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin-Osher-Fatemi model), which leads to restoring regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
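The plain NL-means averaging step that this paper regularizes can be sketched in one dimension (the adaptive total-variation term itself is omitted; the patch size, search window, and bandwidth below are arbitrary illustrative choices):

```python
import numpy as np

def nl_means_1d(signal, patch=3, search=10, h=0.5):
    """Minimal 1-D NL-means sketch: each sample is replaced by a weighted
    average of nearby samples whose surrounding patches look similar."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = patch // 2
    pad = np.pad(x, r, mode='reflect')
    # patch centered at each position
    patches = np.array([pad[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        # patch distances -> exponential weights
        d2 = ((patches[lo:hi] - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)
        out[i] = (w * x[lo:hi]).sum() / w.sum()
    return out
```

On flat regions all weights are near one and the filter averages aggressively; near edges the patch distances shrink the weights, which is the redundancy-exploiting behavior the abstract describes.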
Cointry, G R; Ferretti, J L; Reina, P S; Nocciolino, L M; Rittweger, J; Capozza, R F
2014-03-01
The pQCT-assessed bone strength indices (BSIs, SSI) depend on the product of a 'quality' indicator, the cortical vBMD (vCtD), and a 'design' indicator, one of the cross-sectional moments of inertia or related variables (MIs) in long bones. As the MIs vary naturally much more than the vCtD and represent different properties, it could be that the variation of the indices might not reflect the relative mechanical impact of the variation of their determinant factors in different individuals or circumstances. To understand this problem, we determined the vCtD and MIs in tibia scans of 232 healthy men and pre- and post-menopausal (MP) women, expressed in SD of the means calculated for each group, and analyzed the independent influence of 1 SD unit of variation of each factor on that of the indices by multiple correlations. Results showed: 1. that the independent influence of the MIs on the indices was generally larger than that of the vCtD, and 2. that in post-MP women the influence of the vCtD was larger than it was in the other groups. This confirms the view that inter-individual variation of vCtD is comparatively small, and that the mechanical competence of human bone is mostly determined by 'design' factors.
Data-Driven Online and Real-Time Combinatorial Optimization
2013-10-30
Problem, the online Traveling Salesman Problem, and variations of the online Quota Hamiltonian Path Problem and the online Traveling ... has the lowest competitive ratio among all algorithms of this kind. Second, we consider the Online Traveling Salesman Problem, and consider randomized ... matroid secretary problem on a partition matroid. 6. Jaillet, P. and X. Lu. "Online Traveling Salesman Problems with Rejection Options", submitted
Stranges, Saverio; Tigbe, William; Gómez-Olivé, Francesc Xavier; Thorogood, Margaret; Kandala, Ngianga-Bakwin
2012-08-01
Study objectives: To estimate the prevalence of sleep problems and the effect of potential correlates in low-income settings from Africa and Asia, where the evidence is lacking. Design: Cross-sectional. Setting: Community-wide samples from 8 countries across Africa and Asia participating in the INDEPTH WHO-SAGE multicenter collaboration during 2006-2007. The participating sites included rural populations in Ghana, Tanzania, South Africa, India, Bangladesh, Vietnam, and Indonesia, and an urban area in Kenya. Participants: There were 24,434 women and 19,501 men age 50 yr and older. Interventions: N/A. Measurements and results: Two measures of sleep quality, over the past 30 days, were assessed alongside a number of sociodemographic variables, measures of quality of life, and comorbidities. Overall, 16.6% of participants reported severe/extreme nocturnal sleep problems, with a striking variation across the 8 populations, ranging from 3.9% (Purworejo, Indonesia and Nairobi, Kenya) to more than 40.0% (Matlab, Bangladesh). There was a consistent pattern of higher prevalence of sleep problems in women and older age groups. In bivariate analyses, lower education, not living in partnership, and poorer self-rated quality of life were consistently associated with higher prevalence of sleep problems (P < 0.001). In multivariate logistic regression analyses, limited physical functionality or greater disability and feelings of depression and anxiety were consistently strong, independent correlates of sleep problems, in both women and men, across the 8 sites (P < 0.001). Conclusions: A large number of older adults in low-income settings are currently experiencing sleep problems, which emphasizes the global dimension of this emerging public health issue. This study corroborates the multifaceted nature of sleep problems, which are strongly linked to poorer general well-being and quality of life, and psychiatric comorbidities.
Comer, Jonathan S.; Chow, Candice; Chan, Priscilla T.; Cooper-Vince, Christine; Wilson, Lianna A.S.
2012-01-01
Objective: Service use trends showing increased off-label prescribing in very young children and reduced psychotherapy use raise concerns about quality of care for early disruptive behavior problems. Meta-analysis can empirically clarify best practices and guide clinical decision making by providing a quantitative synthesis of a body of literature, identifying the magnitude of overall effects across studies, and determining systematic factors associated with effect variations. Method: We used random-effects meta-analytic procedures to empirically evaluate the overall effect of psychosocial treatments on early disruptive behavior problems, as well as potential moderators of treatment response. Thirty-six controlled trials, evaluating 3,042 children, met selection criteria (mean sample age, 4.7 years; 72.0% male; 33.1% minority youth). Results: Psychosocial treatments collectively demonstrated a large and sustained effect on early disruptive behavior problems (Hedges’ g = 0.82), with the largest effects associated with behavioral treatments (Hedges’ g = 0.88), samples with higher proportions of older and male youth, and comparisons against treatment as usual (Hedges’ g = 1.17). Across trials, effects were largest for general externalizing problems (Hedges’ g = 0.90) and problems of oppositionality and noncompliance (Hedges’ g = 0.76), and were weakest, relatively speaking, for problems of impulsivity and hyperactivity (Hedges’ g = 0.61). Conclusions: In the absence of controlled trials evaluating psychotropic interventions, findings provide robust quantitative support that psychosocial treatments should constitute first-line treatment for early disruptive behavior problems. Against a backdrop of concerning trends in the availability and use of supported interventions, findings underscore the urgency of improving dissemination efforts for supported psychosocial treatment options, and removing systematic barriers to psychosocial care for affected youth. PMID:23265631
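The effect-size machinery behind these numbers follows standard formulas (shown here generically, not tied to this paper's data): Hedges' g is Cohen's d with a small-sample correction, and per-study effects are pooled by inverse-variance weighting.

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                     # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)        # correction factor J
    return j * d

def pooled_effect(gs, vs):
    """Inverse-variance (fixed-effect) pooling of per-study effects;
    random-effects pooling adds a between-study variance to each vs."""
    ws = [1 / v for v in vs]
    return sum(w * g for w, g in zip(ws, gs)) / sum(ws)

g = hedges_g(1.0, 0.0, 1.0, 1.0, 20, 20)   # one hypothetical trial
pooled = pooled_effect([0.8, 1.2], [0.1, 0.1])
```

The paper's random-effects procedure differs from the fixed-effect pooling above only in inflating each study's variance by an estimated between-study component.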
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
A novel neural network for variational inequalities with linear and nonlinear constraints.
Gao, Xing-Bao; Liao, Li-Zhi; Qi, Liqun
2005-11-01
Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the sufficient and necessary conditions of the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have a finite-time convergence under some mild condition by using a new energy function. Compared with the existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity. Thus, the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
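A classical projection-type network for a variational inequality VI(F, K), here specialized to box constraints as a simpler instance than the networks studied in this paper, can be sketched by integrating the projection dynamics with forward Euler:

```python
import numpy as np

def solve_vi(F, x0, lo, hi, step=0.05, iters=2000):
    """Forward-Euler integration of the projection dynamics
    dx/dt = P_K(x - F(x)) - x for VI(F, K) with box K = [lo, hi]^n.
    This is the classical projection network, given as a generic
    illustration; the paper's model covers more general constraints."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + step * (np.clip(x - F(x), lo, hi) - x)
    return x

# Example: F(x) = x - b is the gradient of a strongly convex quadratic,
# so the VI solution is the projection of b onto the box.
b = np.array([2.0, -1.0])
x_star = solve_vi(lambda x: x - b, np.zeros(2), lo=0.0, hi=1.0)
# converges to clip(b, 0, 1) = [1, 0]
```

Equilibria of these dynamics are exactly fixed points of the projection map, i.e. solutions of the variational inequality, which is the structural fact such neural network models exploit.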
A Fiducial Approach to Extremes and Multiple Comparisons
ERIC Educational Resources Information Center
Wandler, Damian V.
2010-01-01
Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem deals with the generalized Pareto distribution. The generalized Pareto…
2017-01-01
Objectives: This study aimed to explore dimensions in addition to the 5 dimensions of the 5-level EQ-5D version (EQ-5D-5L) that could satisfactorily explain variation in health-related quality of life (HRQoL) in the general population of South Korea. Methods: Domains related to HRQoL were searched through a review of existing HRQoL instruments. Among the 28 potential dimensions, the 5 dimensions of the EQ-5D-5L and 7 additional dimensions (vision, hearing, communication, cognitive function, social relationships, vitality, and sleep) were included. A representative sample of 600 subjects was selected for the survey, which was administered through face-to-face interviews. Subjects were asked to report problems in 12 health dimensions at 5 levels, as well as their self-rated health status using the EuroQol visual analogue scale (EQ-VAS) and a 5-point Likert scale. Among subjects who reported no problems for any of the parameters in the EQ-5D-5L, we analyzed the frequencies of problems in the additional dimensions. A linear regression model with the EQ-VAS as the dependent variable was performed to identify additional significant dimensions. Results: Among respondents who reported full health on the EQ-5D-5L (n=365), 32% reported a problem for at least 1 additional dimension, and 14% reported worse than moderate self-rated health. Regression analysis revealed an R^2 of 0.228 for the original EQ-5D-5L dimensions, 0.200 for the new dimensions, and 0.263 for the 12 dimensions together. Among the added dimensions, vitality and sleep were significantly associated with EQ-VAS scores. Conclusions: This study identified significant dimensions for assessing self-rated health among members of the general public, in addition to the 5 dimensions of the EQ-5D-5L. These dimensions could be considered for inclusion in a new preference-based instrument or for developing a country-specific HRQoL instrument. PMID:29207449
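The R^2 comparison in these results can be reproduced in miniature on synthetic data; the coefficients, dimension counts, and noise level below are assumptions for illustration, not the study's data:

```python
import numpy as np

def r_squared(X, y):
    """In-sample OLS R^2 with an intercept column."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

# Synthetic stand-in for the survey: EQ-VAS driven partly by the 5 core
# dimensions and partly by two added dimensions (e.g. vitality, sleep).
rng = np.random.default_rng(1)
n = 600
core = rng.integers(1, 6, size=(n, 5)).astype(float)    # 5-level responses
extra = rng.integers(1, 6, size=(n, 2)).astype(float)
vas = 100 - 4 * core.sum(axis=1) - 3 * extra.sum(axis=1) + rng.normal(0, 8, n)

r2_core = r_squared(core, vas)
r2_all = r_squared(np.hstack([core, extra]), vas)
```

Because in-sample R^2 cannot decrease when regressors are added, the relevant evidence in the study is the size of the gain from the added dimensions and the significance of their coefficients, not the increase per se.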
Runoff response to climate change and human activities in a typical karst watershed, SW China.
Xu, Yan; Wang, Shijie; Bai, Xiaoyong; Shu, Dongcai; Tian, Yichao
2018-01-01
This study aims to reveal the runoff variation characteristics of long time series in a karst region, analyse comprehensively its different driving factors, and estimate quantitatively the contribution rates of climate change and human activities to net runoff variation. Liudong river basin, a typical karst watershed in southwest China, is the study site. Statistical methods, such as linear fitting, the Morlet wavelet analysis, normalized curve and double mass curve, are applied to analyse the runoff of the watershed. Results show that the runoff in the karst watershed during the research period exhibits a three-stage change and the abrupt change points are the years 1981 and 2007: (1) 1968-1980, the runoff initially exhibited a trend of sustained decreasing and then an abrupt fluctuation. The runoff was obviously destroyed through precipitation-producing processes. Improper land utilisation and serious forest and grass destruction intensified the fluctuation variation amplitude of the runoff. (2) 1981-2006, the changing processes of runoff and precipitation exhibited good synchronism. Precipitation significantly affected runoff variation and human activities had a slight interference degree. (3) 2007-2013, the fluctuation range of runoff was considerably smaller than that of precipitation. The significant growth of forest and grassland areas and the increase in water consumption mitigated runoff fluctuation and greatly diminished runoff variation amplitude. According to calculation, the relative contribution rates of precipitation and human activities to net runoff variation with 1981-2007 as the reference period were -81% and 181% in average, respectively, during 1968-1980, and -117% and 217% in average, respectively, during 2007-2013. 
In general, the analysis of the runoff variation trend and of the contribution rates of its main influencing factors in this typical karst watershed over nearly half a century may be significant for solving the drought problem in the karst region and for the sustainable development of the drainage basin. PMID:29494602
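Contribution rates like the -81%/181% pair above are typically obtained by splitting the runoff change in a test period, relative to a reference period, into a climate-driven part (predicted from precipitation) and a residual attributed to human activities. The sketch below is a generic climate-elasticity separation under invented figures, not necessarily the authors' exact procedure; the slope and all hydrological values are hypothetical.

```python
# Hypothetical sketch of contribution-rate separation between climate
# (precipitation) and human activities, against a reference period.
def contribution_rates(q_ref, q_test, p_ref, p_test, slope):
    """Relative contributions (%) of precipitation and human activity.

    slope: sensitivity of mean annual runoff to mean annual precipitation,
    e.g. fitted by regression over the reference period (invented here).
    """
    dq_total = q_test - q_ref              # net runoff change
    dq_climate = slope * (p_test - p_ref)  # change expected from rainfall
    dq_human = dq_total - dq_climate       # residual attributed to humans
    return 100 * dq_climate / dq_total, 100 * dq_human / dq_total

# Invented example: runoff rose while precipitation fell, so the climate
# contribution is negative and the human contribution exceeds 100%,
# the same sign pattern as the -81%/181% result reported above.
c_climate, c_human = contribution_rates(q_ref=450, q_test=500,
                                        p_ref=1200, p_test=1100, slope=0.4)
```

By construction the two rates always sum to 100%, which explains why a negative climate contribution forces the human contribution above 100%.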
Understanding Adult Age Differences in the Frequency of Problems With Friends.
Schlosnagle, Leo; Strough, JoNell
2017-01-01
We investigated characteristics of younger and older adults' friendships. Younger (N = 39) and older (N = 39) adults completed measures pertaining to a specific friend they had (i.e., contact frequency, positive friendship quality, and negative friendship quality) and their frequency of problems with friends in general. Older adults reported fewer problems with friends in general, and fewer negative friendship qualities, less frequent contact, and more positive friendship qualities with a specific friend than younger adults. Contact frequency, positive friendship quality, and negative friendship quality with a specific friend were related to frequency of problems with friends in general, but only contact frequency was a significant mediator of the relation between age and frequency of problems with friends in general. Results show that characteristics of a specific friendship relate to problems with friends in general, and that contact frequency with a specific friend mediates the relation between age and problems with friends in general. Implications are discussed. © The Author(s) 2016.
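The mediation finding above (contact frequency mediating the age to problem-frequency link) follows the standard product-of-coefficients logic. The sketch below uses synthetic data whose structure mirrors the reported direction (older age, less contact, fewer problems) but none of the study's estimates; the coefficients and noise levels are invented.

```python
import numpy as np

# Synthetic mediation sketch: age -> contact frequency -> problem frequency.
rng = np.random.default_rng(1)
n = 78                                  # 39 younger + 39 older adults
age = rng.uniform(18, 80, n)
contact = 10 - 0.08 * age + rng.normal(0, 1, n)     # older -> less contact
problems = 1 + 0.5 * contact + rng.normal(0, 1, n)  # contact -> more problems

def slope(x, y):
    """OLS slope of y on x (simple regression with intercept)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

c_total = slope(age, problems)   # total effect of age on problems
a = slope(age, contact)          # path a: age -> mediator
# Path b: mediator -> outcome, controlling for age (multiple regression).
X = np.column_stack([np.ones(n), age, contact])
b = np.linalg.lstsq(X, problems, rcond=None)[0][2]
indirect = a * b                 # mediated (indirect) effect
direct = c_total - indirect      # remaining direct effect of age
```

A negative path a (less contact with age) combined with a positive path b (contact relates to more problems) yields a negative indirect effect, i.e. fewer problems with friends at older ages, consistent with the abstract's mediation result.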
Strange Beta: Chaotic Variations for Indoor Rock Climbing Route Setting
NASA Astrophysics Data System (ADS)
Phillips, Caleb; Bradley, Elizabeth
2011-04-01
In this paper we apply chaotic systems to the task of sequence variation for the purpose of aiding humans in setting indoor rock climbing routes. This work expands on prior work where similar variations were used to assist in dance choreography and music composition. We present a formalization for transcription of rock climbing problems and a variation generator that is tuned for this domain and addresses some confounding problems, including a new approach to automatic selection of initial conditions. We analyze our system with a large blinded study in a commercial climbing gym in cooperation with experienced climbers and expert route setters. Our results show that our system is capable of assisting a human setter in producing routes that are at least as good as, and in some cases better than, those produced traditionally.
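The core idea of chaos-driven sequence variation can be illustrated with a toy sketch: a chaotic trajectory indexes substitutions in a symbolic route. Everything below is invented for illustration (the hold names, the logistic map as the chaotic system, and the substitution scheme); the paper's transcription formalism and variation generator are more elaborate.

```python
# Toy sketch of chaotic sequence variation: a logistic-map orbit (a stand-in
# for whatever chaotic system a route-setting tool might use) selects the
# hold type at each position of a symbolic climbing route.
MOVES = ["crimp", "sloper", "pinch", "jug", "pocket", "gaston"]

def logistic_orbit(x0, n, r=3.99):
    """n iterates of the chaotic logistic map x -> r*x*(1-x), x in (0,1)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def vary_route(route, x0=0.3141):
    """Produce a same-length variation of a route; the initial condition x0
    plays the role the paper assigns to automatic initial-condition choice."""
    orbit = logistic_orbit(x0, len(route))
    return [MOVES[int(x * len(MOVES))] for x in orbit]

original = ["jug", "crimp", "crimp", "sloper", "pocket"]
variation = vary_route(original)
```

The appeal of a chaotic generator over a random one is that the output is deterministic given the initial condition (so variations are reproducible) yet sensitive to it, so nearby initial conditions yield structurally different routes.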
Application of Variational Methods to the Thermal Entrance Region of Ducts
NASA Technical Reports Server (NTRS)
Sparrow, E. M.; Siegel, R.
1960-01-01
A variational method is presented for solving eigenvalue problems which arise in connection with the analysis of convective heat transfer in the thermal entrance region of ducts. Consideration is given both to situations where the temperature profile depends upon one cross-sectional coordinate (e.g., circular tube) and to those where it depends upon two cross-sectional coordinates (e.g., rectangular duct). The variational method is illustrated and verified by application to laminar heat transfer in a circular tube and a parallel-plate channel, and good agreement with existing numerical solutions is attained. Then, application is made to laminar heat transfer in a square duct. As a check, an alternate computation for the square duct is made using a method indicated by Millsaps and Pohlhausen. The variational method can, in principle, also be applied to problems in turbulent heat transfer.
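The variational principle behind such eigenvalue methods can be shown on a textbook case: for -u'' = λu on (0,1) with u(0) = u(1) = 0, the Rayleigh quotient of any admissible trial function bounds the lowest eigenvalue from above. The trial function u = x(1-x) below is a standard classroom choice, not one from the paper; it gives exactly 10 against the true value π² ≈ 9.87.

```python
# Rayleigh-Ritz sketch: upper bound on the lowest eigenvalue of
# -u'' = lam*u on (0,1), u(0)=u(1)=0, using the trial function u = x(1-x).
def rayleigh_quotient(n=100000):
    """integral(u'^2) / integral(u^2) for u = x(1-x), via the midpoint rule.
    Exact value for this trial function is (1/3)/(1/30) = 10."""
    h = 1.0 / n
    num = den = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        u = x * (1 - x)
        du = 1 - 2 * x
        num += du * du * h
        den += u * u * h
    return num / den

estimate = rayleigh_quotient()
exact = 3.141592653589793 ** 2   # true lowest eigenvalue, pi^2 ~ 9.8696
```

A one-term trial function already comes within about 1.3% of the exact eigenvalue, which is why variational estimates of thermal-entrance eigenvalues agree so well with numerical solutions.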
Wesseldijk, Laura W; Bartels, Meike; Vink, Jacqueline M; van Beijsterveldt, Catharina E M; Ligthart, Lannie; Boomsma, Dorret I; Middeldorp, Christel M
2017-06-21
Conduct problems in children and adolescents can predict antisocial personality disorder and related problems, such as crime and conviction. We sought an explanation for such predictions by performing a genetic longitudinal analysis. We estimated the effects of genetic, shared environmental, and unique environmental factors on variation in conduct problems measured in childhood and adolescence and antisocial personality problems measured in adulthood, and on the covariation across ages. We also tested whether these estimates differed by sex. Longitudinal data were collected in the Netherlands Twin Register over a period of 27 years. Age-appropriate and comparable measures of conduct and antisocial personality problems, assessed with the Achenbach System of Empirically Based Assessment, were available for 9783 9-10-year-old, 6839 13-18-year-old, and 7909 19-65-year-old twin pairs, respectively; 5114 twins had two or more assessments. At all ages, men scored higher than women. There were no sex differences in the estimates of the genetic and environmental influences. During childhood, genetic and environmental factors shared by children in families explained 43% and 44% of the variance of conduct problems, with the remaining variance due to unique environment. During adolescence and adulthood, genetic and unique environmental factors equally explained the variation. Longitudinal correlations across age varied between 0.20 and 0.38 and were mainly due to stable genetic factors. We conclude that shared environment is mainly of importance during childhood, while genetic factors contribute to variation in conduct and antisocial personality problems at all ages, and also underlie their stability over age.
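The variance decomposition reported above follows the classical twin-design logic. The sketch below shows the simplest Falconer-style version: given MZ and DZ twin correlations, it splits variance into additive genetic (A), shared environmental (C) and unique environmental (E) parts. The correlations here are invented, chosen only so the output echoes the childhood pattern reported (A ≈ 43%, C ≈ 44%); the study itself used full structural equation modeling, not these formulas.

```python
# Falconer-style ACE decomposition from twin correlations (illustrative).
def ace_from_twin_correlations(r_mz, r_dz):
    """A = 2(rMZ - rDZ), C = 2rDZ - rMZ, E = 1 - rMZ.

    Assumes additive genetics (MZ twins share all genes, DZ twins half)
    and equal shared environments for both zygosities.
    """
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance share
    c2 = 2 * r_dz - r_mz     # shared environmental share
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return a2, c2, e2

# Invented correlations producing roughly the childhood estimates above.
a2, c2, e2 = ace_from_twin_correlations(r_mz=0.87, r_dz=0.655)
```

The three shares sum to 1 by construction; in the adolescent and adult data, where C drops out, the same formulas would imply rMZ ≈ 2·rDZ.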