Regularity of p(·)-superharmonic functions, the Kellogg property and semiregular boundary points
NASA Astrophysics Data System (ADS)
Adamowicz, Tomasz; Björn, Anders; Björn, Jana
2014-11-01
We study various boundary and inner regularity questions for $p(\cdot)$-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for $p(\cdot)$-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded $p(\cdot)$-harmonic functions and give some new characterizations of $W^{1, p(\cdot)}_0$ spaces. We also show that $p(\cdot)$-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.
A regularity result for fixed points, with applications to linear response
NASA Astrophysics Data System (ADS)
Sedro, Julien
2018-04-01
In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor expansion that takes into account the loss-of-regularity phenomenon typically occurring for composition operators acting on spaces of functions with finite regularity. We generalize this approach to higher-order differentiability through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilizing the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is pointed out, in particular, that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.
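As a rough illustration of the additive-versus-multiplicative distinction drawn above, the following numpy sketch contrasts classical Tikhonov (additive) regularization with a simple multiplicative fixed-point scheme. The function names, the plain ℓ2 regularizer, and the fixed-point update are illustrative assumptions, not the space-frequency functional of the paper.

```python
import numpy as np

def tikhonov_additive(A, b, lam):
    """Additive (Tikhonov) regularization: min ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def multiplicative_fixed_point(A, b, n_iter=50):
    """Multiplicative regularization: minimize ||Ax - b||^2 * ||x||^2.
    Stationarity couples the two terms, so the effective parameter
    lam = ||Ax - b||^2 / ||x||^2 is updated iteratively instead of
    being chosen a priori -- the main selling point of the approach."""
    n = A.shape[1]
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from least squares
    for _ in range(n_iter):
        lam = np.sum((A @ x - b) ** 2) / np.sum(x ** 2)
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return x
```

Note how no regularization parameter is supplied to `multiplicative_fixed_point`: the data themselves set the balance at each iteration.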
The topology of the regularized integral surfaces of the 3-body problem
NASA Technical Reports Server (NTRS)
Easton, R.
1971-01-01
Momentum, angular momentum, and energy of integral surfaces in the planar three-body problem are considered. The end points of orbits which cross an isolating block are identified. It is shown that this identification has a unique extension to an identification which pairs the end points of orbits entering the block and which end in a binary collision with the end points of orbits leaving the block and which come from a binary collision. The problem of regularization is that of showing that the identification of the end points of crossing orbits has a continuous, unique extension. The regularized phase space for the three-body problem was obtained, as were regularized integral surfaces for the problem on which the three-body equations of motion induce flows. Finally the topology of these surfaces is described.
Regularity and Tresse's theorem for geometric structures
NASA Astrophysics Data System (ADS)
Sarkisyan, R. A.; Shandra, I. G.
2008-04-01
For any non-special bundle P\to X of geometric structures we prove that the k-jet space J^k of this bundle with an appropriate k contains an open dense domain U_k on which Tresse's theorem holds. For every s\geq k we prove that the pre-image \pi^{-1}_{(k,s)}(U_k) of U_k under the natural projection \pi_{(k,s)}\colon J^s\to J^k consists of regular points. (A point of J^s is said to be regular if the orbits of the group of diffeomorphisms induced from X have locally constant dimension in a neighbourhood of this point.)
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
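To make the total variation regularization concept above concrete, here is a minimal 1D, unweighted, scalar TV denoising sketch using projected gradient on the dual problem. This is a textbook Chambolle-type iteration under our own naming, not the structural/weighted functional or the saddle-point algorithm of the paper.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=500):
    """Solve min_x 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|
    by projected gradient on the dual variable p, |p_i| <= lam."""
    p = np.zeros(len(y) - 1)   # one dual variable per finite difference
    tau = 0.25                 # step size: ||D D^T|| <= 4 for the difference operator
    for _ in range(n_iter):
        x = y + np.diff(p, prepend=0, append=0)          # x = y - D^T p
        p = np.clip(p + tau * np.diff(x), -lam, lam)     # gradient step + projection
    return y + np.diff(p, prepend=0, append=0)
```

For a two-level step signal the known closed form pulls each plateau toward the mean by lam divided by the plateau length, which makes the scheme easy to sanity-check.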
Moduli of quantum Riemannian geometries on <=4 points
NASA Astrophysics Data System (ADS)
Majid, S.; Raineri, E.
2004-12-01
We classify parallelizable noncommutative manifold structures on finite sets of small size in the general formalism of framed quantum manifolds and vielbeins introduced previously [S. Majid, Commun. Math. Phys. 225, 131 (2002)]. The full moduli space is found for ⩽3 points, and a restricted moduli space for 4 points. Generalized Levi-Cività connections and their curvatures are found for a variety of models including models of a discrete torus. The topological part of the moduli space is found for ⩽9 points based on the known atlas of regular graphs. We also remark on aspects of quantum gravity in this approach.
Sugiyama, Takemi; Giles-Corti, Billie; Summers, Jacqui; du Toit, Lorinne; Leslie, Eva; Owen, Neville
2013-09-01
This study examined prospective relationships of green space attributes with adults initiating or maintaining recreational walking. Postal surveys were completed by 1036 adults living in Adelaide, Australia, at baseline (two time points in 2003-04) and follow-up (2007-08). Initiating or maintaining recreational walking was determined using self-reported walking frequency. Green space attributes examined were perceived presence, quality, proximity, and the objectively measured area (total and largest) and number of green spaces within a 1.6 km buffer drawn from the center of each study neighborhood. Multilevel regression analyses examined the odds of initiating or maintaining walking separately for each green space attribute. At baseline, participants were categorized into non-regular (n = 395), regular (n = 286), and irregular walkers (n = 313). Among non-regular walkers, 30% had initiated walking, while 70% of regular walkers had maintained walking at follow-up. No green space attributes were associated with initiating walking. However, positive perceptions of the presence of and proximity to green spaces and the total and largest areas of green space were significantly associated with a higher likelihood of walking maintenance over four years. Neighborhood green spaces may not assist adults to initiate walking, but their presence and proximity may help them maintain recreational walking over time.
Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization
NASA Astrophysics Data System (ADS)
Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna
2014-12-01
We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields the resulting renormalization method is always applicable, and we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.
Above Saddle-Point Regions of Order in a Sea of Chaos in the Vibrational Dynamics of KCN.
Párraga, H; Arranz, F J; Benito, R M; Borondo, F
2018-04-05
The dynamical characteristics of a region of regular vibrational motion in the sea of chaos above the saddle point corresponding to the linear C-N-K configuration are examined in detail. To explain the origin of this regularity, the associated phase space structures are characterized using suitably defined Poincaré surfaces of section, identifying the different resonances between the stretching and bending modes as a function of excitation energy. The corresponding topology is elucidated by means of periodic orbit analysis.
Dynamic positioning configuration and its first-order optimization
NASA Astrophysics Data System (ADS)
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu
2014-02-01
Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ the functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (Semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. 
In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. It is shown that the given total observation time determines the optimal frequency (repeatability) of the moving known points and vice versa, and that one way to improve the repeatability is to increase the rotational speed. Under Newton's law of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations, the second is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. It is shown that an effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of the simplex configuration (such as the D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. Alternatively, how to optimally generate and assess the discrete configurations sampled from the continuous one is discussed. The proposed configuration optimization framework has taken the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedrons (regular tetrahedron, cube, regular octahedron, regular icosahedron, and regular dodecahedron) into account. It shows that the conclusions made by the proposed technique are more general and no longer limited by different sampling schemes.
Using the conditional equation of the D-optimal nested helical configuration, the relevant issues of GNSS constellation optimization are solved, and some examples based on the GPS constellation are given to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.
Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan
2018-06-05
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths.
NASA Astrophysics Data System (ADS)
Chen, Li
1999-09-01
Following a general definition of discrete curves, surfaces, and manifolds (Li Chen, 'Generalized discrete object tracking algorithms and implementations,' in Melter, Wu, and Latecki, eds., Vision Geometry VI, SPIE Vol. 3168, pp. 184-195, 1997), this paper focuses on the Jordan curve theorem in 2D discrete spaces. The Jordan curve theorem says that a (simply) closed curve separates a simply connected surface into two components. Based on the definition of discrete surfaces, we give three reasonable definitions of simply connected spaces. Theoretically, these three definitions should be equivalent. We have proved the Jordan curve theorem under the third definition of simply connected spaces. The Jordan curve theorem shows the relationship among an object, its boundary, and its outside area. In continuous space, the boundary of an mD manifold is an (m - 1)D manifold. A similar result does apply to regular discrete manifolds. The concept of a new regular nD-cell is developed based on the regular surface point in 2D, and on well-composed objects in 2D and 3D given by Latecki (L. Latecki, '3D well-composed pictures,' in Melter, Wu, and Latecki, eds., Vision Geometry IV, SPIE Vol. 2573, pp. 196-203, 1995).
ERIC Educational Resources Information Center
Reiter, Harold; Holshouser, Arthur; Vennebush, Patrick
2012-01-01
Getting students to think about the relationships between area and perimeter beyond the formulas for these measurements is never easy. An interesting, nonroutine, and accessible problem that will stimulate such thoughts is the Lattice Octagon problem. A "lattice polygon" is a polygon whose vertices are points of a regularly spaced array.…
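The area-perimeter interplay for lattice polygons that the abstract gestures at is neatly captured by Pick's theorem, A = I + B/2 - 1, relating area to interior and boundary lattice points. A small sketch (the function names and the sample polygon are ours):

```python
from math import gcd

def shoelace_area(verts):
    """Area of a simple polygon via the shoelace formula."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1] - verts[(i + 1) % n][0] * verts[i][1]
            for i in range(n))
    return abs(s) / 2

def boundary_lattice_points(verts):
    """Lattice points on the boundary: sum of gcd(|dx|, |dy|) over the edges."""
    n = len(verts)
    return sum(gcd(abs(verts[(i + 1) % n][0] - verts[i][0]),
                   abs(verts[(i + 1) % n][1] - verts[i][1])) for i in range(n))

def interior_lattice_points(verts):
    """Pick's theorem A = I + B/2 - 1, solved for the interior count I."""
    return int(shoelace_area(verts) - boundary_lattice_points(verts) / 2 + 1)
```

For the 2-by-2 square with vertices (0,0), (2,0), (2,2), (0,2): area 4, eight boundary lattice points, and a single interior point (1,1).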
A color gamut description algorithm for liquid crystal displays in CIELAB space.
Sun, Bangyong; Liu, Han; Li, Wenli; Zhou, Shisheng
2014-01-01
Because the accuracy of gamut boundary description is significant for the gamut mapping process, a gamut boundary calculation method for LCD monitors is proposed in this paper. In most previous gamut boundary calculation algorithms, the gamut boundary is calculated directly in CIELAB space, and some inside-gamut points are mistaken for boundary points. In the newly proposed algorithm, the points on the surface of the RGB cube are selected as the boundary points and are then converted to and described in CIELAB color space. Thus, in our algorithm, the true gamut boundary points are found and a more accurate gamut boundary is described. In the experiment, the 3D CIELAB gamut of a Toshiba LCD monitor, which has a regular-shaped outer surface, is first described, and then two 2D gamut boundaries (the CIE-a*b* boundary and the CIE-C*L* boundary), which are often used in the gamut mapping process, are calculated. When our algorithm is compared with several well-known gamut calculation algorithms, the gamut volumes are very close, which indicates that our algorithm's accuracy is precise and acceptable.
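A minimal sketch of the boundary-sampling idea described above: take points only on the six faces of the RGB cube and map them to CIELAB with the standard sRGB/D65 formulas. The sampling density and helper names are illustrative, not the paper's implementation.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse sRGB transfer function (gamma expansion)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],      # linear RGB -> XYZ matrix
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T / np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def cube_surface_points(n=16):
    """Sample only the 6 faces of the RGB cube -- the true gamut boundary."""
    t = np.linspace(0, 1, n)
    u, v = np.meshgrid(t, t)
    faces = []
    for axis in range(3):
        for val in (0.0, 1.0):            # the two faces normal to this axis
            f = np.empty(u.shape + (3,))
            f[..., axis] = val
            f[..., (axis + 1) % 3] = u
            f[..., (axis + 2) % 3] = v
            faces.append(f.reshape(-1, 3))
    return np.vstack(faces)
```

Every sampled point has at least one channel pinned to 0 or 1, so no inside-gamut point can be mistaken for a boundary point; `srgb_to_lab(cube_surface_points())` then yields the boundary shell in CIELAB.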
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement for data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. In doing this, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of original data points with feature vectors which are built by geodesic distances, and a new semisupervised dimension reduction method for feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
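The geodesic-distance ingredient described above can be sketched as a k-nearest-neighbour graph followed by all-pairs shortest paths. This toy version (dense Floyd-Warshall, no class information) deliberately omits the paper's semisupervised graph construction.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric k-nearest-neighbour graph weighted by Euclidean distance."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n = len(X)
    W = np.full((n, n), np.inf)
    np.fill_diagonal(W, 0.0)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]   # skip self at position 0
    for i in range(n):
        for j in idx[i]:
            W[i, j] = W[j, i] = D[i, j]       # symmetrize the edge
    return W

def geodesic_distances(W):
    """All-pairs shortest paths (Floyd-Warshall) = graph geodesics."""
    G = W.copy()
    for k in range(len(G)):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    return G
```

On data sampled along a curve, the resulting matrix approximates distances along the manifold rather than straight-line distances, which is what makes it usable as a feature map.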
NASA Technical Reports Server (NTRS)
Willmott, C. J.; Field, R. T.
1984-01-01
Algorithms for point interpolation and contouring on the surface of the sphere and in Cartesian two-space are developed from Shepard's (1968) well-known local search method. These mapping procedures are then used to investigate the errors which appear on small-scale climate maps as a result of the all-too-common practice of interpolating from irregularly spaced data points to the nodes of a regular lattice and contouring in Cartesian two-space. Using these procedures, the mean annual air temperature field over the western half of the Northern Hemisphere is estimated both on the sphere, assumed to be correct, and in Cartesian two-space. When the spherically- and Cartesian-approximated air temperature fields are mapped and compared, the magnitudes (as large as 5 C to 10 C) and distribution of the errors associated with the latter approach become apparent.
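A minimal sketch of Shepard-style inverse-distance weighting with great-circle distances, i.e. interpolation done on the sphere rather than in Cartesian two-space. The weighting exponent, station layout, and the global (non-local-search) form are illustrative simplifications.

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    """Central angle between two points on the unit sphere (haversine form)."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = phi2 - phi1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2) ** 2
    return 2 * np.arcsin(np.sqrt(np.clip(a, 0, 1)))

def shepard(lat, lon, stations, values, p=2.0):
    """Inverse-distance-weighted (Shepard) estimate at one grid node."""
    d = np.array([great_circle(lat, lon, s[0], s[1]) for s in stations])
    if np.any(d == 0):                    # grid node coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** p
    return float(np.sum(w * values) / np.sum(w))
```

Replacing `great_circle` with planar Euclidean distance reproduces the Cartesian variant whose map errors the study quantifies.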
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.
1994-05-01
Our current research into the response of natural ecosystems to a hypothesized climatic change requires that we have estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations are located at airports, which tend to be concentrated on the coastlines of the world or near populated areas. We can also see that the spatial density of the station locations is extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, the Arabian, the Gobi, and the Australian deserts; likewise the density is quite sparse in cold regions such as Antarctica, northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to the population centers and the degree of industrial development of the country. We address the following problem here. Given values of meteorological variables, such as maximum monthly temperature, measured at the more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
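One classical answer to the scattered data problem, and the one Kansa's name is usually attached to, is multiquadric radial basis function interpolation: fit weights at the scattered stations, then evaluate the fitted surface at the regular grid nodes. A bare-bones planar sketch, with the shape parameter c chosen arbitrarily:

```python
import numpy as np

def rbf_fit(xy, f, c=1.0):
    """Solve for multiquadric RBF weights interpolating scattered samples."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    Phi = np.sqrt(d2 + c ** 2)            # multiquadric basis matrix
    return np.linalg.solve(Phi, f)

def rbf_eval(xy, w, grid, c=1.0):
    """Evaluate the fitted RBF surface at (regular) grid nodes."""
    d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c ** 2) @ w
```

The multiquadric interpolation matrix is nonsingular for distinct points, so the fitted surface reproduces the station values exactly; for thousands of stations one would of course use localized or preconditioned variants rather than a dense solve.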
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
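The idea of treating the ℓ1 regularization path as a model space can be sketched in the special case of an orthonormal design, where the lasso solution is simply a soft-thresholded OLS estimate and each penalty level yields a candidate support. This toy version omits the Markov chain Monte Carlo model composition step entirely; the function names are ours.

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_path_models(X, y, lams):
    """For an orthonormal design (X.T @ X = I), the lasso solution at each
    lambda is soft_threshold(X.T @ y, lam); each distinct support set along
    the path is a candidate model for averaging."""
    beta_ols = X.T @ y
    models = []
    for lam in lams:
        support = tuple(np.flatnonzero(soft_threshold(beta_ols, lam)))
        if support not in models:
            models.append(support)
    return models
```

Sweeping lambda from 0 upward shrinks the support monotonically here, producing the nested sequence of models over which BMA weights would then be computed.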
Cosmological space-times with resolved Big Bang in Yang-Mills matrix models
NASA Astrophysics Data System (ADS)
Steinacker, Harold C.
2018-02-01
We present simple solutions of IKKT-type matrix models that can be viewed as quantized homogeneous and isotropic cosmological space-times, with finite density of microstates and a regular Big Bang (BB). The BB arises from a signature change of the effective metric on a fuzzy brane embedded in Lorentzian target space, in the presence of a quantized 4-volume form. The Hubble parameter is singular at the BB, and becomes small at late times. There is no singularity from the target space point of view, and the brane is Euclidean "before" the BB. Both recollapsing and expanding universe solutions are obtained, depending on the mass parameters.
Chemical potential driven phase transition of black holes in anti-de Sitter space
NASA Astrophysics Data System (ADS)
Galante, Mario; Giribet, Gaston; Goya, Andrés; Oliva, Julio
2015-11-01
Einstein-Maxwell theory conformally coupled to a scalar field in D dimensions may exhibit a phase transition at low temperature whose end point is an asymptotically anti-de Sitter black hole with a scalar field profile that is regular everywhere outside and on the horizon. This provides a tractable model to study the phase transition of hairy black holes in anti-de Sitter space in which the backreaction on the geometry can be solved analytically.
Quasi-hamiltonian quotients as disjoint unions of symplectic manifolds
NASA Astrophysics Data System (ADS)
Schaffhauser, Florent
2007-08-01
The main result of this paper is Theorem 2.12, which says that the quotient μ^{-1}({1})/U associated to a quasi-hamiltonian space (M, ω, μ: M → U) has a symplectic structure even when 1 is not a regular value of the momentum map μ. Namely, it is a disjoint union of symplectic manifolds of possibly different dimensions, which generalizes the result of Alekseev, Malkin and Meinrenken in [AMM98]. We illustrate this theorem with the example of representation spaces of surface groups. As an intermediary step, we give a new class of examples of quasi-hamiltonian spaces: the isotropy submanifold M_K whose points are the points of M with isotropy group K ⊂ U. The notion of quasi-hamiltonian space was introduced by Alekseev, Malkin and Meinrenken in their paper [AMM98]. The main motivation for it was the existence, under some regularity assumptions, of a symplectic structure on the associated quasi-hamiltonian quotient. Throughout their paper, the analogy with usual hamiltonian spaces is often used as a guiding principle, replacing Lie-algebra-valued momentum maps with Lie-group-valued momentum maps. In the hamiltonian setting, when the usual regularity assumptions on the group action or the momentum map are dropped, Lerman and Sjamaar showed in [LS91] that the quotient associated to a hamiltonian space carries a stratified symplectic structure. In particular, this quotient space is a disjoint union of symplectic manifolds. In this paper, we prove an analogous result for quasi-hamiltonian quotients. More precisely, we show that for any quasi-hamiltonian space (M, ω, μ: M → U), the associated quotient M//U := μ^{-1}({1})/U is a disjoint union of symplectic manifolds (Theorem 2.12): μ^{-1}({1})/U = ⊔_{j ∈ J} (μ^{-1}({1}) ∩ M_{K_j})/L_{K_j}. Here K_j denotes a closed subgroup of U and M
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
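The maximum likelihood expectation maximization algorithm used in the evaluation above has a compact multiplicative form; a grid-agnostic numpy sketch, where the system matrix A could equally come from voxel or tetrahedral basis functions (the small safeguard against division by zero is our addition):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-likelihood EM for Poisson data y ~ Poisson(A @ x).
    Multiplicative update: x <- x / sens * A.T @ (y / (A @ x)),
    where sens = A.T @ 1 is the sensitivity image."""
    x = np.ones(A.shape[1])               # positive initial estimate
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x
```

The update preserves positivity and monotonically increases the Poisson likelihood, which is why it is the common choice for emission tomography regardless of the underlying image grid.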
Symmetries for Light-Front Quantization of Yukawa Model with Renormalization
NASA Astrophysics Data System (ADS)
Żochowski, Jan; Przeszowski, Jerzy A.
2017-12-01
In this work we discuss the Yukawa model with an extra self-interaction term of the scalar field in D=1+3 dimensions. We present a method of deriving the light-front commutators and anti-commutators from the Heisenberg equations induced by the kinematical generating operator of the translation P+. These Heisenberg equations are the starting point for obtaining the algebra of the (anti-)commutators. Some discrepancies between the existing and the proposed methods of quantization are revealed. The Lorentz and the CPT symmetry, together with some features of the quantum theory, were applied to obtain the two-point Wightman function for the free fermions. Moreover, these Wightman functions were computed without referring to the Fock expansion. The Gaussian effective potential for the Yukawa model was found in terms of the Wightman functions. It was regularized by the space-like point-splitting method. The coupling constants within the model were redefined. The optimum mass parameters remained regularization independent. Finally, the Gaussian effective potential was renormalized.
Comment on "Construction of regular black holes in general relativity"
NASA Astrophysics Data System (ADS)
Bronnikov, Kirill A.
2017-12-01
We claim that the paper by Zhong-Ying Fan and Xiaobao Wang on nonlinear electrodynamics coupled to general relativity [Phys. Rev. D 94, 124027 (2016)], although correct in general, in some respects repeats previously obtained results without giving proper references. There is also an important point missing in this paper, which is necessary for understanding the physics of the system: in solutions with an electric charge, a regular center requires a non-Maxwell behavior of the Lagrangian function L(f), f = FμνFμν, at small f. Therefore, in all electric regular black hole solutions with a Reissner-Nordström asymptotic, the Lagrangian L(f) is different in different parts of space, and the electromagnetic field behaves in a singular way at surfaces where L(f) suffers branching.
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The numbers of descents in the object- and in the PSF-space play the role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, apparently due to inappropriate regularization properties.
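The alternating-approximation structure described above can be sketched in 1D. This is a minimal illustration, not the authors' +SOR implementation: each update restarts from a flat guess (discarding the previous approximation) and runs a truncated, non-negativity-projected steepest descent, with the descent counts acting as the two regularization parameters.

```python
import numpy as np

def descend(z_len, other, data, n_steps):
    """Truncated non-negativity-projected steepest descent on
    ||conv(z, other) - data||^2, restarted from a flat guess
    (the previous approximation for z is discarded)."""
    z = np.ones(z_len)
    lr = 0.5 / max(np.sum(np.abs(other)) ** 2, 1e-12)  # crude 1/Lipschitz bound
    for _ in range(n_steps):
        resid = np.convolve(z, other, mode="full") - data
        grad = np.correlate(resid, other, mode="valid")  # adjoint of convolution
        z = np.maximum(z - lr * grad, 0.0)               # positivity projection
    return z

def alternating_fixed_point(data, psf_len, n_obj, n_psf, n_outer=30):
    """n_obj and n_psf, the truncated descent counts, play the role of
    the two regularization parameters."""
    obj_len = len(data) - psf_len + 1
    psf = np.ones(psf_len) / psf_len
    obj = np.ones(obj_len)
    for _ in range(n_outer):
        obj = descend(obj_len, psf, data, n_obj)
        psf = descend(psf_len, obj, data, n_psf)
    return obj, psf

# Hypothetical 1D scene and blur (noise-free data).
obj_true = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 3.0, 0.0])
psf_true = np.array([0.25, 0.5, 0.25])
data = np.convolve(obj_true, psf_true, mode="full")
obj, psf = alternating_fixed_point(data, len(psf_true), n_obj=50, n_psf=50)
```

Blind deconvolution has an inherent scale ambiguity between object and PSF, so only the product (the refit of the data) is meaningfully constrained in this sketch.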
Weighted regularized statistical shape space projection for breast 3D model reconstruction.
Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González
2018-07-01
The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. For the purpose of obtaining the specific 3D model of the breast of a patient, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists in a Weighted Regularized (WR) projection into the shape space. The contribution of each point in the 3DMM shape is weighted, allowing us to assign more relevance to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. Firstly, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points or regions of points can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings, scans and 2D pictures, assessing both reconstruction frameworks with very positive results. Copyright © 2018 Elsevier B.V. All rights reserved.
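A weighted, regularized projection into a linear shape space has a closed form; the sketch below illustrates the general idea with a tiny hypothetical basis (the paper's 3DMM basis, weights and regularizer may differ in detail).

```python
import numpy as np

def weighted_regularized_projection(B, mean, y, w, lam=1e-3):
    """Project target coordinates y into a linear shape space (basis B,
    mean shape) with per-coordinate weights w and Tikhonov weight lam.

    Solves  min_c || diag(w)^(1/2) (B c + mean - y) ||^2 + lam ||c||^2,
    i.e.  c = (B^T W B + lam I)^{-1} B^T W (y - mean).
    """
    BW = B.T * w  # B^T diag(w)
    c = np.linalg.solve(BW @ B + lam * np.eye(B.shape[1]), BW @ (y - mean))
    return mean + B @ c, c

# Hypothetical toy shape space: 6 coordinates, 2 modes of variation.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 2))
mean = rng.standard_normal(6)
c_true = np.array([0.7, -1.2])
y = mean + B @ c_true
shape, c = weighted_regularized_projection(B, mean, y, np.ones(6), lam=1e-9)
```

Setting a coordinate's weight to zero lets the projection ignore an unreliable point, which is the mechanism the abstract describes for emphasizing known landmarks.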
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Secondly, considering the robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined L1-MSAL. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object data. Copyright © 2015 Elsevier Ltd. All rights reserved.
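The graph Laplacian smoothness penalty that the L1-LLR construction feeds into can be sketched as follows. This is the generic penalty (the L2-weighted baseline the paper builds on), not the paper's L1 reconstruction weights, which require solving an L1 minimization per point.

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W of a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(f, W):
    """Manifold-regularization term f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2:
    small when the labeling f varies slowly along graph edges."""
    return float(f @ graph_laplacian(W) @ f)

# Chain graph 0-1-2: a constant labeling costs nothing, a jagged one is penalized.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
smooth = smoothness(np.array([1.0, 1.0, 1.0]), W)   # constant labeling
jagged = smoothness(np.array([1.0, -1.0, 1.0]), W)  # sign-flipping labeling
```

Adding this term to a supervised loss yields the graph-based semi-supervised objectives the abstract describes; L1-MSAL swaps in robust L1-based edge weights for W.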
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularizer. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
Switching Dynamics Between Two Movement Patterns Varies According to Time Interval
NASA Astrophysics Data System (ADS)
Hirakawa, Takehito; Suzuki, Hiroo; Okumura, Motoki; Gohara, Kazutoshi; Yamamoto, Yuji
This study investigated the regularity that characterizes the behavior of dissipative dynamical systems excited by external temporal inputs for pointing movements. Right-handed healthy male participants were asked to continuously point their right index finger at two light-emitting diodes (LEDs) located in the oblique left and right directions in front of them. These movements were performed under two conditions: one in which the direction was repeated and one in which the directions were switched on a stochastic basis. These conditions consisted of 12 tempos (30, 36, 42, 48, 51, 54, 57, 60, 63, 66, 69, and 72 beats per minute). Data from the conditions under which the input pattern was repeated revealed two different trajectories in hyper-cylindrical state space ℳ, whereas the conditions under which the inputs were switched induced transitions between the two trajectories, which were considered to be excited attractors. The transitions between the two excited attractors were characterized by a self-similar structure. Moreover, the correlation dimensions increased as the tempos increased. These results suggest a relationship of D ∝ 1/T (T is the switching-time length; i.e. the condition) between temporal input and pointing behavior and that continuous pointing movements are regular rather than random noise.
NASA Astrophysics Data System (ADS)
Ryzhov, Eugene
2015-11-01
Vortex motion in shear flows is of great interest from the point of view of nonlinear science, and also as an applied problem to predict the evolution of vortices in nature. Considering applications to the ocean and atmosphere, it is well-known that these media are significantly stratified. The simplest way to take stratification into account is to deal with a two-layer flow. In this case, vortices perturb the interface, and consequently, the perturbed interface transmits the vortex influence from one layer to another. Our aim is to investigate the dynamics of two point vortices in an unbounded domain where a shear and rotation are imposed as the leading-order influence from some generalized perturbation. The two vortices are arranged within the bottom layer, but the emphasis is on the upper-layer fluid particle motion. Point vortices induce singular velocity fields in the layer they belong to; in the other layers of a multi-layer flow, however, they induce regular velocity fields. The main feature is that singular velocity fields prohibit irregular dynamics in the vicinity of the singular points, whereas regular velocity fields, under suitable conditions, permit irregular dynamics to extend to almost every point of the corresponding phase space.
Simple picture for neutrino flavor transformation in supernovae
NASA Astrophysics Data System (ADS)
Duan, Huaiyu; Fuller, George M.; Qian, Yong-Zhong
2007-10-01
We can understand many recently discovered features of flavor evolution in dense, self-coupled supernova neutrino and antineutrino systems with a simple, physical scheme consisting of two quasistatic solutions. One solution closely resembles the conventional, adiabatic single-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mechanism, in that neutrinos and antineutrinos remain in mass eigenstates as they evolve in flavor space. The other solution is analogous to the regular precession of a gyroscopic pendulum in flavor space, and has been discussed extensively in recent works. Results of recent numerical studies are best explained with combinations of these solutions in the following general scenario: (1) Near the neutrino sphere, the MSW-like many-body solution obtains. (2) Depending on neutrino vacuum mixing parameters, luminosities, energy spectra, and the matter density profile, collective flavor transformation in the nutation mode develops and drives neutrinos away from the MSW-like evolution and toward regular precession. (3) Neutrino and antineutrino flavors roughly evolve according to the regular precession solution until neutrino densities are low. In the late stage of the precession solution, a stepwise swapping develops in the energy spectra of νe and νμ/ντ. We also discuss some subtle points regarding adiabaticity in flavor transformation in dense-neutrino systems.
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information related to the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings which lack proper 2D/3D geometrical models or semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantic spaces.
Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra
2014-01-01
Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268
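The variable-splitting strategy described above isolates the non-smooth penalties into simple closed-form updates. The sketch below shows the two generic ingredients, elementwise soft-thresholding (the prox of the L1 term) and the combined sparsity-plus-smoothness penalty; it is an illustration of the regularizers, not the authors' full ADMM solver, and `pairs` stands in for the 6-D connectome neighborhood structure.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: the closed-form inner update that variable
    splitting isolates in fused-Lasso / GraphNet ADMM."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fused_lasso_penalty(w, pairs, lam1, lam2):
    """lam1*||w||_1 + lam2*sum over neighboring pairs |w_i - w_j|
    (spatial neighbors in connectome space define `pairs`)."""
    diffs = np.array([abs(w[i] - w[j]) for i, j in pairs])
    return lam1 * np.abs(w).sum() + lam2 * diffs.sum()
```

The fused penalty drives neighboring weights toward equal values, which is what produces the spatially contiguous predictive regions reported in the experiments.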
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason.
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1.
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results.
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
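As a concrete baseline for the iterative methods surveyed in this section, the classical Hilbert-space Landweber iteration can be sketched as follows; the Banach-space variants discussed in the contributions replace the adjoint step with duality mappings. The small matrix is a hypothetical example.

```python
import numpy as np

def landweber(A, y, omega=None, n_iters=500):
    """Classical Hilbert-space Landweber iteration for A x = y:

        x_{k+1} = x_k + omega * A^T (y - A x_k),

    convergent for 0 < omega < 2 / ||A||^2. For noisy data, the iteration
    count acts as the regularization parameter (early stopping).
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x += omega * A.T @ (y - A @ x)
    return x

# Hypothetical well-posed toy system with exact data.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_rec = landweber(A, A @ np.array([1.0, -2.0]))
```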
LP-stability for the strong solutions of the Navier-Stokes equations in the whole space
NASA Astrophysics Data System (ADS)
Beirão da Veiga, H.; Secchi, P.
1985-10-01
We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution to be only slightly perturbed, too. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to + infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms are introduced (LP-norms). For fluids filling a bounded vessel, exponential decay of the above distance is expected. Such a strong result is not reasonable for fluids filling the entire space.
Processing and statistical analysis of soil-root images
NASA Astrophysics Data System (ADS)
Razavi, Bahar S.; Hoang, Duyen; Kuzyakov, Yakov
2016-04-01
Importance of hotspots such as the rhizosphere, the small soil volume that surrounds and is influenced by plant roots, calls for spatially explicit methods to visualize the distribution of microbial activities in this active site (Kuzyakov and Blagodatskaya, 2015). The zymography technique has previously been adapted to visualize the spatial dynamics of enzyme activities in the rhizosphere (Spohn and Kuzyakov, 2014). Following further development of soil zymography - to obtain a higher resolution of enzyme activities - we aimed to 1) quantify the images, 2) determine whether the pattern (e.g. distribution of hotspots in space) is clumped (aggregated) or regular (dispersed). To this end, we incubated soil-filled rhizoboxes with maize Zea mays L. and without maize (control box) for two weeks. In situ soil zymography was applied to visualize the enzymatic activity of β-glucosidase and phosphatase at the soil-root interface. Spatial resolution of the fluorescent images was improved by direct application of a substrate-saturated membrane to the soil-root system. Furthermore, we applied spatial point pattern analysis to determine whether the distribution of hotspots in space is clumped (aggregated) or regular (dispersed). Our results demonstrated that the distribution of hotspots in the rhizosphere is clumped (aggregated), compared to the control box without plants, which showed a regular (dispersed) pattern. These patterns were similar in all three replicates and for both enzymes. We conclude that improved zymography is a promising in situ technique to identify, analyze, visualize and quantify the spatial distribution of enzyme activities in the rhizosphere. Moreover, such different patterns should be considered in assessments and modeling of rhizosphere extension and the corresponding effects on soil properties and functions. Key words: rhizosphere, spatial point pattern, enzyme activity, zymography, maize.
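One standard way to test clumped versus dispersed point patterns of this kind is the Clark-Evans nearest-neighbor index; the abstract does not say which statistic the authors used, so this is a generic illustration.

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans aggregation index: R = mean nearest-neighbor distance
    divided by its expectation under complete spatial randomness,
    0.5 / sqrt(density).  R < 1: clumped (aggregated); R > 1: regular
    (dispersed).  No edge correction is applied in this sketch."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    mean_nn = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(len(pts) / area)
    return mean_nn / expected

# A perfect 4x4 grid of "hotspots" on a unit square is strongly regular.
grid = [(x, y) for x in np.linspace(0.125, 0.875, 4)
               for y in np.linspace(0.125, 0.875, 4)]
r_grid = clark_evans(grid, area=1.0)
# The same 16 points squeezed into a tight cluster are clumped.
cluster = np.array(grid) * 0.05
r_cluster = clark_evans(cluster, area=1.0)
```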
Regular Decompositions for H(div) Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Vassilevski, Panayot
We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.
A planetary telescope at the ISS
NASA Astrophysics Data System (ADS)
Korablev, O.; Moroz, V.; Avanesov, G.; Rodin, V.; Bellucci, G.; Vid'Machenko, A.; Tejfel, V.
We present the development of a 40-cm telescope to be deployed at the Russian segment of the International Space Station (ISS), dedicated to observations of planets of the Solar system, whose primary goal will be tracking climate-related changes and other variable phenomena on planets. The most effective will be observations of Venus, Mars, Jupiter, Saturn, and comets, while other interesting targets will certainly be considered. This space-based observatory will perform monitoring of Solar System objects on a regular basis. The observatory includes the 40-cm narrow-field (f:20) telescope at a pointing platform with a guidance system assuring pointing accuracy of ~10", and an internal tracking system accurate to better than 1" over tens of minutes. Four focal plane instruments, a camera, two spectrometers and a spectropolarimeter, will perform imaging and spectral observations in the range from ~200 nm to ~3 μm.
C library for topological study of the electronic charge density.
Vega, David; Aray, Yosslen; Rodríguez, Jesús
2012-12-05
The topological study of the electronic charge density is useful for obtaining information about the kinds of bonds (ionic or covalent) and the atomic charges in a molecule or crystal. For this study, it is necessary to calculate, at every point in space, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas are obtained for every space point. More complex functions such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths) were also programmed. As the unit cell in some crystals has angles different from 90°, the library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
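The core task, locating critical points of a scalar field sampled on a regular grid, can be sketched with central differences; this is a crude stand-in for the library's Lagrange-interpolation polynomial and Newton-Raphson search, shown here only to make the idea concrete.

```python
import numpy as np

def grid_critical_point(rho, spacing=1.0):
    """Return the grid index where the gradient magnitude of a scalar
    field on a regular 3D grid is smallest (central differences via
    np.gradient).  A real implementation would refine this with
    Newton-Raphson on an interpolated polynomial, as the library does."""
    g = np.gradient(rho, spacing)
    mag = np.sqrt(sum(gi ** 2 for gi in g))
    return np.unravel_index(np.argmin(mag), rho.shape)

# Hypothetical "charge density": a Gaussian peaked at the grid center,
# whose only critical point is the maximum at the origin.
ax = np.linspace(-2.0, 2.0, 21)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2))
cp = grid_critical_point(rho, spacing=ax[1] - ax[0])
```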
Scalar field cosmologies with inverted potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boisseau, B.; Giacomini, H.; Polarski, D., E-mail: bruno.boisseau@lmpt.univ-tours.fr, E-mail: hector.giacomini@lmpt.univ-tours.fr, E-mail: david.polarski@umontpellier.fr
Regular bouncing solutions in the framework of a scalar-tensor gravity model were found in a recent work. We reconsider the problem in the Einstein frame (EF) in the present work. Singularities arising at the limit of physical viability of the model in the Jordan frame (JF) are either of the Big Bang or of the Big Crunch type in the EF. As a result we obtain integrable scalar field cosmological models in general relativity (GR) with inverted double-well potentials unbounded from below which possess solutions regular in the future, tending to a de Sitter space, and starting with a Big Bang. The existence of the two fixed points for the field dynamics at late times found earlier in the JF becomes transparent in the EF.
Stability and chaos in Kustaanheimo-Stiefel space induced by the Hopf fibration
NASA Astrophysics Data System (ADS)
Roa, Javier; Urrutxua, Hodei; Peláez, Jesús
2016-07-01
The need for the extra dimension in Kustaanheimo-Stiefel (KS) regularization is explained by the topology of the Hopf fibration, which defines the geometry and structure of KS space. A trajectory in Cartesian space is represented by a four-dimensional manifold called the fundamental manifold. Based on geometric and topological aspects, classical concepts of stability are translated into KS language. The separation between manifolds of solutions generalizes the concept of Lyapunov stability. The dimension-raising nature of the fibration transforms fixed points, limit cycles, attractive sets, and Poincaré sections into higher-dimensional subspaces. From these concepts, chaotic systems are studied. In strongly perturbed problems, numerical error can break the topological structure of KS space: points in a fibre are no longer transformed to the same point in Cartesian space. An observer in three dimensions will see orbits departing from the same initial conditions but diverging in time. This apparent randomness of the integration can only be understood in four dimensions. The concept of topological stability yields a simple method for estimating the time scale over which numerical simulations can be trusted. Ideally, all trajectories departing from the same fibre should be KS transformed to a unique trajectory in three-dimensional space, because the fundamental manifold that they constitute is unique. By monitoring how trajectories departing from one fibre separate from the fundamental manifold, a critical time, equivalent to the Lyapunov time, is estimated. These concepts are tested on N-body examples: the Pythagorean problem, and an example of field stars interacting with a binary.
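The fibre structure referred to above can be made concrete. The sketch below uses one common sign convention for the KS/Hopf map (conventions differ between references): rotating a 4D point along its fibre leaves the corresponding 3D Cartesian point unchanged.

```python
import math

# Sketch of the Hopf-map picture behind KS regularization (one common sign
# convention; conventions vary between references). A whole circle (fibre)
# of 4D points u maps to the same 3D point x, so trajectories that stay on
# the same fibre represent the same Cartesian orbit.

def ks_to_cartesian(u):
    u1, u2, u3, u4 = u
    return (u1*u1 - u2*u2 - u3*u3 + u4*u4,
            2.0 * (u1*u2 - u3*u4),
            2.0 * (u1*u3 + u2*u4))

def fibre_rotation(u, phi):
    """Move along the Hopf fibre through u; ks_to_cartesian is unchanged."""
    u1, u2, u3, u4 = u
    c, s = math.cos(phi), math.sin(phi)
    return (u1*c - u4*s, u2*c + u3*s, u3*c - u2*s, u4*c + u1*s)

u = (0.3, -0.7, 0.5, 0.2)
x = ks_to_cartesian(u)
x_rot = ks_to_cartesian(fibre_rotation(u, 1.234))  # same Cartesian point
```

The map also squares the radius: |x| = |u|², which is the dimension-raising feature that makes the regularization work.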
Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.
Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing
2012-11-01
In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally degraded by atmospheric turbulence, and hence the acquired images are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate compared to the traditional Tikhonov regularization model.
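For intuition, the l(1) term in such models is typically handled through its proximal operator, elementwise soft thresholding, inside splitting solvers of the Douglas-Rachford/ADMM family. A minimal sketch of that one step (not the authors' full solver):

```python
# Soft thresholding: the proximal operator of lam*||.||_1, the elementwise
# shrinkage step used inside ADMM/Douglas-Rachford-type splitting solvers.

def soft_threshold(v, lam):
    """Shrink each entry of v toward zero by lam; zero out small entries."""
    return [max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

# Small values (noise) are zeroed while large values (edges) survive,
# which is why the l1 term yields sharper, denoised gradients:
g = [0.05, -0.02, 1.3, -0.9, 0.01]
g_denoised = soft_threshold(g, 0.1)
```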
Hierarchical collapse of regular islands via dissipation
NASA Astrophysics Data System (ADS)
Jousseph, C. A. C.; Abdulack, S. A.; Manchein, C.; Beims, M. W.
2018-03-01
In this work we investigate how regular islands localized in the mixed phase space of generic area-preserving Hamiltonian systems are affected by a small amount of dissipation. Our main goal is to search for a universality (hierarchy) in the convergence of higher-order resonances and their periods as dissipation increases. One very simple scenario is already known: when subjected to small dissipation, stable periodic points become sinks attracting almost all the surrounding orbits, destroying all invariant curves which divide the phase space into chaotic and regular domains. However, performing numerical experiments with the paradigmatic Chirikov-Taylor standard map, we show that this presumably simple scenario can be rather complicated. The first nontrivial question is what happens to chaotic trajectories, since they can be attracted by the sinks or by chaotic attractors, when the latter exist. We show that this depends very much on how basins of attraction are formed as dissipation increases. In addition, we demonstrate that higher-order resonances are usually affected by small dissipation before the lower-order resonances of the conservative case. Nevertheless, this is not generic behaviour. We show that a local hierarchical collapse of resonances, as dissipation increases, is related to the area of the islands of the conservative case surrounding the periodic orbits. All observed resonance destructions occur via bifurcation phenomena and are quantified here by determining the largest finite-time Lyapunov exponent.
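The basic dissipative standard-map experiment can be sketched in a few lines. This is a minimal illustration with parameter values chosen for the example, not taken from the paper: a small dissipation gamma turns the elliptic fixed point at (theta, p) = (pi, 0) into a sink.

```python
import math

# Dissipative Chirikov-Taylor standard map: gamma is the dissipation
# strength, and gamma = 0 recovers the area-preserving (conservative) map.

def step(theta, p, K, gamma):
    p_new = (1.0 - gamma) * p + K * math.sin(theta)
    theta_new = (theta + p_new) % (2.0 * math.pi)
    return theta_new, p_new

# With small dissipation the orbit spirals into the sink at (pi, 0),
# illustrating the "stable periodic points become sinks" scenario.
theta, p = 3.0, 0.1
for _ in range(5000):
    theta, p = step(theta, p, K=0.5, gamma=0.1)
```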
On the chiral magnetic effect in Weyl superfluid 3He-A
NASA Astrophysics Data System (ADS)
Volovik, G. E.
2017-01-01
In the theory of the chiral anomaly in relativistic quantum field theories (RQFTs), some results depend on the regularization scheme in the ultraviolet. In the chiral superfluid 3He-A, which contains two Weyl points and also experiences the effects of the chiral anomaly, the "trans-Planckian" physics is known and the results can be obtained without regularization. We discuss this using the example of the chiral magnetic effect (CME), which was observed in 3He-A in the 1990s [1]. There are two forms of the contribution of the CME to the Chern-Simons term in the free energy, perturbative and non-perturbative. The perturbative term comes from the fermions living in the vicinity of the Weyl points, where the fermions are "relativistic" and obey the Weyl equation. The non-perturbative term originates from the deep vacuum, being determined by the separation of the two Weyl points in momentum space. Both terms are obtained using the Adler-Bell-Jackiw equation for the chiral anomaly, and both agree with the results of microscopic calculations in the "trans-Planckian" region. The existence of two nonequivalent forms of the Chern-Simons term demonstrates that results obtained within RQFT depend on the specific properties of the underlying quantum vacuum and may reflect different physical phenomena in the same vacuum.
49 CFR 176.708 - Segregation distances.
Code of Federal Regulations, 2013 CFR
2013-10-01
... distances between radioactive materials and spaces regularly occupied by crew members or passengers, or... or YELLOW-III packages or overpacks must not be transported in spaces occupied by passengers, except... regularly occupied spaces or living quarters; or (2) For one or more consignments of Class 7 (radioactive...
49 CFR 176.708 - Segregation distances.
Code of Federal Regulations, 2014 CFR
2014-10-01
... distances between radioactive materials and spaces regularly occupied by crew members or passengers, or... or YELLOW-III packages or overpacks must not be transported in spaces occupied by passengers, except... regularly occupied spaces or living quarters; or (2) For one or more consignments of Class 7 (radioactive...
Degeneration of Bethe subalgebras in the Yangian of gl_n
NASA Astrophysics Data System (ADS)
Ilin, Aleksei; Rybnikov, Leonid
2018-04-01
We study degenerations of Bethe subalgebras B(C) in the Yangian Y(gl_n), where C is a regular diagonal matrix. We show that the closure of the parameter space of the family of Bethe subalgebras, which parameterizes all possible degenerations, is the Deligne-Mumford moduli space of stable rational curves \overline{M_{0,n+2}}. All subalgebras corresponding to points of \overline{M_{0,n+2}} are free and maximal commutative. We describe the "simplest" degenerations explicitly and show that every degeneration is a composition of the simplest ones. The Deligne-Mumford space \overline{M_{0,n+2}} generalizes to other root systems as a De Concini-Procesi resolution of a certain toric variety. We state a conjecture generalizing our results to Bethe subalgebras in the Yangian of an arbitrary simple Lie algebra in terms of this De Concini-Procesi resolution.
A hierarchical Bayesian method for vibration-based time domain force reconstruction problems
NASA Astrophysics Data System (ADS)
Li, Qiaofeng; Lu, Qiuhai
2018-05-01
Traditional force reconstruction techniques require prior knowledge of the nature of the force to determine the regularization term. When such information is unavailable, an inappropriate term is easily chosen and the reconstruction result becomes unsatisfactory. In this paper, we propose a novel method to automatically determine the appropriate q in ℓq regularization and reconstruct the force history. The method incorporates all to-be-determined variables, such as the force history, the precision parameters and q, into a hierarchical Bayesian formulation. The posterior distributions of the variables are evaluated by a Metropolis-within-Gibbs sampler. Point estimates of the variables and their uncertainties are given. Simulations of a cantilever beam and a space truss under various loading conditions validate the proposed method, showing adaptive determination of q and better reconstruction performance than existing Bayesian methods.
Optimal state points of the subadditive ergodic theorem
NASA Astrophysics Data System (ADS)
Dai, Xiongping
2011-05-01
Let T\\colon (X,\\mathscr{B},\\mu)\\rightarrow(X,\\mathscr{B},\\mu) be an ergodic measure-preserving Borel measurable transformation of a separable metric space X that is not necessarily compact, and suppose that \\{\\varphi_n\\}_{n\\ge1}\\colon X\\rightarrow{R}\\cup\\{-\\infty\\} is a T-subadditive sequence of \\mathscr{B} -measurable upper-bounded functions. In this paper, we prove that, if the sets D_{\\varphi_n} of phivn-discontinuities are of μ-measure 0 for all n >= 1 and if the growth rates \\[ \\begin{equation*} {\\bvarphi}^*(x):=\\limsup_{n\\to+\\infty}\\frac{1}{n}\\varphi_n(x)<0\\tqs for \\mu-a.e. x\\in X, \\end{equation*} \\] then bold varphi*(x) < 0 for all points x in the basin BT(μ) of (T, μ). We apply this to considering the Oseledets regular points.
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
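The nondissipative character of such schemes is easiest to see for the classical second-order (2-2) Crank-Nicolson scheme that the 4-4 family generalizes. Below is a sketch for linear advection on a periodic grid (an illustration, not the 4-4 scheme itself): with a centered, skew-symmetric difference operator the update is a Cayley transform, so the l2 norm of the solution is preserved exactly for any time step.

```python
import numpy as np

# Crank-Nicolson for u_t + a u_x = 0 on a periodic grid. The centered
# difference matrix D is skew-symmetric, so the one-step update
#   (I + dt/2 aD) u^{n+1} = (I - dt/2 aD) u^n
# is a Cayley transform (an orthogonal matrix): nondissipative and
# unconditionally stable, as claimed for such schemes.

n, a, h, dt = 64, 1.0, 1.0 / 64, 0.05
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0 / (2 * h)
    D[i, (i - 1) % n] = -1.0 / (2 * h)
I = np.eye(n)
x = np.arange(n) * h
u = np.sin(2 * np.pi * x)
norm0 = np.linalg.norm(u)
for _ in range(100):
    u = np.linalg.solve(I + 0.5 * dt * a * D, (I - 0.5 * dt * a * D) @ u)
norm1 = np.linalg.norm(u)   # equal to norm0 up to roundoff
```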
Orbital Maneuvers for Spacecrafts Travelling to/from the Lagrangian Points
NASA Astrophysics Data System (ADS)
Bertachini, A.
The well-known Lagrangian points that appear in the planar restricted three-body problem (Szebehely, 1967) are very important for astronautical applications. They are five equilibrium points of the equations of motion, which means that a particle located at one of those points with zero velocity will remain there indefinitely. The collinear points (L1, L2 and L3) are always unstable and the triangular points (L4 and L5) are stable in the case studied here (the Sun-Earth system). They are all very good locations for a space station, since they require only a small amount of ΔV (and fuel), the control to be used for station-keeping. The triangular points are especially good for this purpose, since they are stable equilibrium points. In this paper, the planar restricted three-body problem is regularized (using Lemaître regularization) and combined with numerical integration and gradient methods to solve the two-point boundary value problem (Lambert's three-body problem). This combination is applied to the search for families of transfer orbits between the Lagrangian points and the Earth, in the Sun-Earth system, with the minimum possible cost of the control used. So, the final goal of this paper is to find the magnitude and direction of the two impulses to be applied to the spacecraft to complete the transfer: the first when leaving/arriving at the Lagrangian point and the second when arriving at/leaving the Earth. This paper is a continuation of two previous papers that studied transfers in the Earth-Moon system: Broucke (1979), who studied transfer orbits between the Lagrangian points and the Moon, and Prado (1996), who studied transfer orbits between the Lagrangian points and the Earth.
So, the equations of motion are ẍ - 2ẏ = ∂Ω/∂x, ÿ + 2ẋ = ∂Ω/∂y, where Ω is the pseudo-potential given by Ω = (x² + y²)/2 + (1 - μ)/r₁ + μ/r₂. To solve the TPBVP in the regularized variables the following steps are used: i) Guess an initial velocity Vi, so that together with the prescribed initial position ri the complete initial state is known; ii) Guess a final regularized time τf and integrate the regularized equations of motion from τ = 0 until τf; iii) Compare the final position rf obtained from the numerical integration with the prescribed final position, and the final real time with the specified time of flight. If they agree (difference less than a specified allowed error), the solution is found and the process stops. If not, an increment in the guessed initial velocity Vi and in the guessed final regularized time is made and the process returns to step i). The method used to find the increment in the guessed variables is the standard gradient method, as described in Press et al., 1989. The routines available in this reference are also used in this research with minor modifications. After this algorithm is implemented, Lambert's three-body problem between the Earth and the Lagrangian points is solved for several values of the time of flight. Since the regularized system is used to solve this problem, there is no need to specify the final position of M3 as lying in a parking orbit around a primary (to avoid the singularity). Then, to make a comparison with previous papers (Broucke, 1979 and Prado, 1996), the centre of the primary is used as the final position for M3. The results are organized in plots of the energy and the initial flight path angle (the control to be used) in the rotating frame against the time of flight. The angle is defined such that zero is along the "x" axis (pointing in the positive direction) and it increases in the counter-clockwise sense.
This problem, like Lambert's original version, has two solutions for a given transfer time: one in the counter-clockwise direction and one in the clockwise direction in the inertial frame. In this paper, emphasis is given to finding the families with the smallest possible energy (and velocity), although many other families do exist. Broucke, R. (1979), Travelling Between the Lagrange Points and the Moon, Journal of Guidance and Control, Vol. 2. Prado, A.F.B.A. (1996), Travelling Between the Lagrangian Points and the Earth, Acta Astronautica, Vol. 39, No. 7. Press, W. H.; B. P. Flannery; S. A. Teukolsky and W. T. Vetterling (1989), Numerical Recipes, Cambridge University Press. Szebehely, V. (1967), Theory of Orbits, Academic Press, New York.
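The shooting loop of steps i)-iii) can be sketched on a toy problem with a known answer (a hedged illustration: a linear oscillator and a secant correction stand in for the regularized three-body equations and the gradient method):

```python
import math

# Shooting method for a two-point boundary value problem: find v0 such
# that the oscillator x'' = -x, x(0) = 1, reaches x(T) = 0 at T = 1.
# Steps: guess v0, integrate, compare the final position, correct the
# guess (a secant update here, instead of the gradient method of the text).

def integrate(v0, T=1.0, steps=200):
    x, v, dt = 1.0, v0, T / steps
    for _ in range(steps):  # RK4 for the system x' = v, v' = -x
        k1 = (v, -x)
        k2 = (v + dt/2*k1[1], -(x + dt/2*k1[0]))
        k3 = (v + dt/2*k2[1], -(x + dt/2*k2[0]))
        k4 = (v + dt*k3[1], -(x + dt*k3[0]))
        x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x  # final position x(T)

v_a, v_b = -2.0, 0.0                       # two initial guesses
f_a, f_b = integrate(v_a), integrate(v_b)
for _ in range(50):                        # secant iteration on x(T) = 0
    if abs(f_b) < 1e-12:
        break
    v_new = v_b - f_b * (v_b - v_a) / (f_b - f_a)
    v_a, f_a, v_b, f_b = v_b, f_b, v_new, integrate(v_new)
v0 = v_b
v0_exact = -math.cos(1.0) / math.sin(1.0)  # analytic answer, for comparison
```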
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate the regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
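As a minimal sketch of the ill-posed step yt = Axt (illustrative values; the paper's multilevel model replaces this fixed penalty with calibrated, sparsity-aware priors), a Tikhonov-regularized solve looks like:

```python
import numpy as np

# One time slice of y = A x with many more unknowns than measurements,
# stabilized by a simple ridge (Tikhonov) penalty lam*||x||^2.

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 20))          # wide: few aggregates, many series
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]         # bursty, sparse latent signal
y = A @ x_true + 0.01 * rng.normal(size=5)

lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)
residual = np.linalg.norm(A @ x_hat - y)
```

Without the penalty, A.T @ A is singular here (rank at most 5), so the plain least-squares system cannot be solved uniquely; that is the ill-posedness the regularization addresses.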
Percolation in random-Sierpiński carpets: A real space renormalization group approach
NASA Astrophysics Data System (ADS)
Perreau, Michel; Peiro, Joaquina; Berthier, Serge
1996-11-01
The site percolation transition in random Sierpiński carpets is investigated by real-space renormalization. The fixed point is not unique, as it is in regular, translationally invariant lattices, but depends on the number k of segmentation steps in the generation process of the fractal. It is shown that, for each scale-invariance ratio n, the sequence of fixed points pn,k increases with k and converges as k → ∞ toward a limit pn strictly less than 1. Moreover, in such scale-invariant structures, the percolation threshold depends not only on the scale-invariance ratio n, but also on the scale. The sequences pn,k and pn are calculated for n=4, 8, 16, 32, and 64, and for k=1 to k=11, and k=∞. The corresponding thermal exponent sequence νn,k is calculated for n=8 and 16, and for k=1 to k=5, and k=∞. Suggestions are made for an experimental test in physical self-similar structures.
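The flavor of a real-space renormalization fixed-point computation can be shown on a classical textbook cell, bond percolation on the b = 2 Wheatstone bridge (a standard illustration, not the random-Sierpiński-carpet construction of the paper):

```python
import math

# Real-space RG for bond percolation on the Wheatstone-bridge cell (b = 2).
# The cell-spanning probability is R(p) = 2p^2 + 2p^3 - 5p^4 + 2p^5; the
# nontrivial fixed point R(p*) = p* plays the role of the percolation
# threshold, and the thermal exponent follows from the slope at p*.

def R(p):
    return 2*p**2 + 2*p**3 - 5*p**4 + 2*p**5

lo, hi = 0.01, 0.99                 # bisection on R(p) - p = 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if R(mid) - mid < 0.0:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)            # fixed point (exactly 1/2 for this cell)

dp = 1e-6                           # thermal eigenvalue and exponent nu
slope = (R(p_star + dp) - R(p_star - dp)) / (2 * dp)
nu = math.log(2.0) / math.log(slope)   # nu = ln b / ln R'(p*)
```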
Space-Group Symmetries Generate Chaotic Fluid Advection in Crystalline Granular Media
NASA Astrophysics Data System (ADS)
Turuban, R.; Lester, D. R.; Le Borgne, T.; Méheust, Y.
2018-01-01
The classical connection between symmetry breaking and the onset of chaos in dynamical systems harks back to the seminal theory of Noether [Transp. Theory Statist. Phys. 1, 186 (1918), 10.1080/00411457108231446]. We study the Lagrangian kinematics of steady 3D Stokes flow through simple cubic and body-centered cubic (bcc) crystalline lattices of close-packed spheres, and uncover an important exception. While breaking of point-group symmetries is a necessary condition for chaotic mixing in both lattices, a further space-group (glide) symmetry of the bcc lattice generates a transition from globally regular to globally chaotic dynamics. This finding provides new insights into chaotic mixing in porous media and has significant implications for understanding the impact of symmetries upon generic dynamical systems.
Psychoacoustic Testing of Modulated Blade Spacing for Main Rotors
NASA Technical Reports Server (NTRS)
Edwards, Bryan; Booth, Earl R., Jr. (Technical Monitor)
2002-01-01
Psychoacoustic testing of simulated helicopter main rotor noise is described, and the subjective results are presented. The objective of these tests was to evaluate the potential acoustic benefits of main rotors with modulated (uneven) blade spacing. Sound simulations were prepared for six main rotor configurations. A baseline 4-blade main rotor with regular blade spacing was based on the Bell Model 427 helicopter. A 5-blade main rotor with regular spacing was designed to approximate the performance of the 427, but at reduced tip speed. Four modulated rotors - one with "optimum" spacing and three alternate configurations - were derived from the 5-blade regular-spacing rotor. The sounds were played to two subjects at a time, with care taken in speaker selection and placement to ensure that the sounds were identical for each subject. A total of 40 subjects participated. For each rotor configuration, the listeners were asked to evaluate the sounds in terms of noisiness. The test results indicate little to no "annoyance" benefit for modulated blade spacing. In general, the subjects preferred the sound of the 5-blade regular-spaced rotor over any of the modulated ones. A conclusion is that modulated blade spacing is not a promising design feature for reducing the annoyance of helicopter main rotors.
Optimizing the Distribution of Tie Points for the Bundle Adjustment of HRSC Image Mosaics
NASA Astrophysics Data System (ADS)
Bostelmann, J.; Breitkopf, U.; Heipke, C.
2017-07-01
For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Due to the nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, combining about 90 strips each. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of the tie points is preferable. Moreover, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment.
The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. In this work, we present experiments with MC-30 half-tile blocks which confirm our approach for reaching a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data.
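The grid-cell filtering idea can be sketched as follows (a simplification: the record format and the one-point-per-strip-pair-per-cell rule are illustrative, not the exact redundancy criterion of the paper):

```python
# Bin tie points into a regular grid in object space; within each cell,
# keep only one point per pair of strips it connects, so inter-strip
# connectivity (and hence block stability) is preserved while redundant
# points are discarded.

def filter_tie_points(points, cell_size):
    """points: list of (x, y, strip_a, strip_b) tie-point records."""
    kept, seen = [], set()
    for x, y, strip_a, strip_b in points:
        cell = (int(x // cell_size), int(y // cell_size))
        key = (cell, frozenset((strip_a, strip_b)))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, strip_a, strip_b))
    return kept

points = [(0.1, 0.2, 1, 2), (0.3, 0.4, 1, 2),   # redundant: same cell/pair
          (0.2, 0.1, 2, 3),                     # kept: different strip pair
          (1.5, 0.2, 1, 2)]                     # kept: different cell
kept = filter_tie_points(points, cell_size=1.0)
```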
NASA Astrophysics Data System (ADS)
Liu, Yingyi; Zhou, Lijuan; Liu, Yuanqing; Yuan, Haiwen; Ji, Liang
2017-11-01
Audible noise is closely related to corona current on a high voltage direct current (HVDC) transmission line. In this paper, we simultaneously measured a large number of audible noise and corona current waveforms on the largest outdoor HVDC corona cage in the world. By analyzing the experimental data, statistical regularities relating the corona current spectrum to the audible noise spectrum were obtained. Furthermore, the generation mechanism of audible noise was analyzed theoretically, and a mathematical expression relating the audible noise spectrum to the corona current spectrum, valid for all measuring points in space, was established based on electro-acoustic conversion theory. Finally, combined with the obtained mathematical relation, the underlying reasons for the statistical regularities observed in the measured corona current and audible noise data were explained. The results of this paper not only present the statistical association between the corona current spectrum and the audible noise spectrum on an HVDC transmission line, but also reveal the underlying reasons for these associated rules.
An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations
NASA Astrophysics Data System (ADS)
Drivas, Theodore D.; Eyink, Gregory L.
2017-12-01
We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L^3-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.
Bifurcation and Fractal of the Coupled Logistic Map
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Luo, Chao
The nature of the fixed points of the coupled Logistic map is investigated, and the boundary equation of the first bifurcation of the coupled Logistic map in the parameter space is derived. Using quantitative criteria and rules for system chaos, i.e., phase graphs, bifurcation graphs, power spectra, the computation of the fractal dimension, and the Lyapunov exponent, the paper reveals the general characteristics of the coupled Logistic map as it transforms from regularity to chaos. The following conclusions are shown: (1) chaotic patterns of the coupled Logistic map may emerge out of double-periodic bifurcation and Hopf bifurcation, respectively; (2) during the process of double-period bifurcation, the system exhibits self-similarity and scale-transform invariability in both the parameter space and the phase space. From the research on the attraction basin and Mandelbrot-Julia set of the coupled Logistic map, the following conclusions are indicated: (1) the boundary between periodic and quasiperiodic regions is fractal, which indicates the impossibility of predicting the motion of points in the phase plane; (2) the structures of the Mandelbrot-Julia sets are determined by the control parameters, and their boundaries have the fractal characteristic.
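A coupled logistic map can be iterated in a few lines. The coupling form below is one common symmetric convention and may differ from the paper's exact coupling:

```python
# Symmetric, linearly coupled logistic maps (one common convention):
#   x' = (1 - eps) f(x) + eps f(y),  y' = (1 - eps) f(y) + eps f(x),
# with f(u) = r u (1 - u). For small r the orbit synchronizes and settles
# on the fixed point 1 - 1/r; raising r gives the period-doubling route
# to chaos described above.

def f(u, r):
    return r * u * (1.0 - u)

def step(x, y, r, eps):
    return ((1 - eps) * f(x, r) + eps * f(y, r),
            (1 - eps) * f(y, r) + eps * f(x, r))

x, y, r, eps = 0.3, 0.6, 2.8, 0.1
for _ in range(2000):
    x, y = step(x, y, r, eps)
# at r = 2.8 both components converge to the synchronized fixed point
```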
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are topologically defect-laden, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industry standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process which encourages segments to share the same label, where a common label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into the underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
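The fair-odds bookkeeping can be sketched directly (variable names are illustrative): with reference probability p0 for the predicted event, a stake of r points pays r(1 - p0)/p0 on success, so the expected gain under the reference model is exactly zero.

```python
# Fair-odds reputation bookkeeping: the reference model (the "house")
# assigns probability p0 to the predicted event; a stake of r points pays
# r * (1 - p0) / p0 on success and is lost on failure, making the bet
# fair (zero expected gain) under the reference model.

def gambling_gain(stake, p0, success):
    return stake * (1.0 - p0) / p0 if success else -stake

# A forecaster betting on a rare event (p0 = 0.1) gains far more from a
# success than one betting on a near-certain event (p0 = 0.9):
gain_rare = gambling_gain(1.0, 0.1, True)
gain_easy = gambling_gain(1.0, 0.9, True)
expected = (0.1 * gambling_gain(1.0, 0.1, True)
            + 0.9 * gambling_gain(1.0, 0.1, False))  # zero by fairness
```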
One-dimensional QCD in thimble regularization
NASA Astrophysics Data System (ADS)
Di Renzo, F.; Eruzzi, G.
2018-01-01
QCD in 0 +1 dimensions is numerically solved via thimble regularization. In the context of this toy model, a general formalism is presented for S U (N ) theories. The sign problem that the theory displays is a genuine one, stemming from a (quark) chemical potential. Three stationary points are present in the original (real) domain of integration, so that contributions from all the thimbles associated with them are to be taken into account: we show how semiclassical computations can provide hints on the regions of parameter space where this is absolutely crucial. Known analytical results for the chiral condensate and the Polyakov loop are correctly reproduced: this is in particular trivial at high values of the number of flavors Nf. In this regime we notice that the single thimble dominance scenario takes place (the dominant thimble is the one associated with the identity). At low values of Nf computations can be more difficult. It is important to stress that this is not at all a consequence of the original sign problem (not even via the residual phase). The latter is always under control, while accidental, delicate cancellations of contributions coming from different thimbles can occur in (restricted) regions of the parameter space.
Singha, Kamini; Gorelick, Steven M.
2006-01-01
Two important mechanisms affect our ability to estimate solute concentrations quantitatively from the inversion of field-scale electrical resistivity tomography (ERT) data: (1) the spatially variable physical processes that govern the flow of current as well as the variation of physical properties in space and (2) the overparameterization of inverse models, which requires the imposition of a smoothing constraint (regularization) to facilitate convergence of the inverse solution. Based on analyses of field and synthetic data, we find that the ability of ERT to recover the 3D shape and magnitudes of a migrating conductive target is spatially variable. Additionally, the application of Archie's law to tomograms from field ERT data produced solute concentrations that are consistently less than 10% of point measurements collected in the field and estimated from transport modeling. Estimates of concentration from ERT using Archie's law only fit measured solute concentrations if the apparent formation factor is varied with space and time and allowed to take on unreasonably high values. Our analysis suggests that the inability to find a single petrophysical relation in space and time between concentration and electrical resistivity is largely an effect of two properties of ERT surveys: (1) decreased sensitivity of ERT to detect the target plume with increasing distance from the electrodes and (2) the smoothing imprint of regularization used in inversion.
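The Archie's-law conversion discussed above can be sketched in a few lines: bulk conductivity equals fluid conductivity divided by the formation factor, and fluid conductivity is taken as roughly linear in solute concentration. All constants (F, k, sigma_bg) below are illustrative placeholders, not values from the study:

```python
# Hedged sketch of inverting Archie's law for concentration:
#   sigma_bulk = sigma_fluid / F,   sigma_fluid ~ sigma_bg + k * C
# F is the (dimensionless) formation factor, sigma_bg the background fluid
# conductivity (S/m), k the conductivity-per-concentration slope. All
# numbers are illustrative.

def concentration_from_bulk(sigma_bulk, F=5.0, sigma_bg=0.05, k=1.5e-4):
    """Invert sigma_bulk = (sigma_bg + k*C) / F for concentration C (mg/L)."""
    sigma_fluid = sigma_bulk * F
    return (sigma_fluid - sigma_bg) / k

# A smoothed (regularized) tomogram underestimates the peak conductivity
# anomaly, so the recovered concentration is biased low, consistent with
# the study's observations:
print(concentration_from_bulk(0.013))   # sharp (true) anomaly
print(concentration_from_bulk(0.011))   # smeared estimate of the same plume
```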
NASA Astrophysics Data System (ADS)
Kohri, Kazunori; Matsui, Hiroki
2017-08-01
In this work, we investigated the electroweak vacuum instability during or after inflation. In the inflationary Universe, i.e., de Sitter space, the vacuum field fluctuations ⟨δφ²⟩ grow in proportion to the Hubble scale H². Therefore, the large inflationary vacuum fluctuations of the Higgs field ⟨δφ²⟩ are potentially catastrophic, triggering the vacuum transition to the negative-energy Planck-scale vacuum state and causing an immediate collapse of the Universe. However, the vacuum field fluctuations ⟨δφ²⟩, i.e., the vacuum expectation values, have an ultraviolet divergence, and therefore a renormalization is necessary to estimate the physical effects of the vacuum transition. Thus, in this paper, we revisit the electroweak vacuum instability from the perspective of quantum field theory (QFT) in curved space-time, and discuss the dynamical behavior of the homogeneous Higgs field φ determined by the effective potential V_eff(φ) in curved space-time and the renormalized vacuum fluctuations ⟨δφ²⟩_ren obtained via adiabatic regularization and point-splitting regularization. We simply suppose that the Higgs field couples to gravity only via the non-minimal Higgs-gravity coupling ξ(μ). In this scenario, the electroweak vacuum stability is inevitably threatened by the dynamical behavior of the homogeneous Higgs field φ, or by the formation of AdS domains or bubbles, unless the Hubble scale is small enough, H < Λ_I.
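The growth of ⟨δφ²⟩ quoted above follows the standard de Sitter estimate for a light scalar: the field picks up (H/2π)² of variance per e-fold, and for a small mass the variance saturates at the well-known equilibrium value. This is textbook background, not the renormalized quantity computed in the paper:

```latex
\langle \delta\phi^2 \rangle \simeq N \left(\frac{H}{2\pi}\right)^2
\quad\longrightarrow\quad
\langle \delta\phi^2 \rangle \to \frac{3H^4}{8\pi^2 m^2}
\qquad (m^2 \ll H^2,\; N \gg H^2/m^2)
```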
Path description of coordinate-space amplitudes
NASA Astrophysics Data System (ADS)
Erdoǧan, Ozan; Sterman, George
2017-06-01
We develop a coordinate version of light-cone-ordered perturbation theory, for general time-ordered products of fields, by carrying out integrals over one light-cone coordinate for each interaction vertex. The resulting expressions depend on the lengths of paths, measured in the same light-cone coordinate. Each path is associated with a denominator equal to a "light-cone deficit," analogous to the "energy deficits" of momentum-space time- or light-cone-ordered perturbation theory. In effect, the role played by intermediate states in momentum space is played by paths between external fields in coordinate space. We derive a class of identities satisfied by coordinate diagrams, from which their imaginary parts can be derived. Using scalar QED as an example, we show how the eikonal approximation arises naturally when the external points in a Green function approach the light cone, and we give applications to products of Wilson lines. Although much of our discussion is directed at massless fields in four dimensions, we extend the formalism to massive fields and dimensional regularization.
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
Cluster: Mission Overview and End-of-Life Analysis
NASA Technical Reports Server (NTRS)
Pallaschke, S.; Munoz, I.; Rodriquez-Canabal, J.; Sieg, D.; Yde, J. J.
2007-01-01
The Cluster mission is part of the scientific programme of the European Space Agency (ESA) and its purpose is the analysis of the Earth's magnetosphere. The Cluster project consists of four satellites. The selected polar orbit has perigee and apogee radii of 4.0 and 19.2 Earth radii (Re), which is required for performing measurements near the cusp and the tail of the magnetosphere. When crossing these regions the satellites form a constellation which in most cases so far has been a regular tetrahedron. The satellite operations are carried out by the European Space Operations Centre (ESOC) at Darmstadt, Germany. The paper outlines the future orbit evolution and the envisaged operations from a Flight Dynamics point of view. In addition, a brief summary of the LEOP and routine operations is included beforehand.
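The orbit geometry quoted above fixes the orbital period via Kepler's third law; a back-of-envelope check (standard constants, perigee/apogee read as geocentric radii of 4.0 and 19.2 Re):

```python
import math

# Back-of-envelope check of the Cluster orbit: perigee and apogee radii of
# 4.0 and 19.2 Earth radii give the semi-major axis, and Kepler's third
# law T = 2*pi*sqrt(a^3 / mu) gives the period.

MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6           # m, mean Earth radius

r_perigee = 4.0 * R_EARTH
r_apogee = 19.2 * R_EARTH
a = 0.5 * (r_perigee + r_apogee)               # semi-major axis
T = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)

print(f"period ≈ {T / 3600:.1f} h")            # a bit over two days
```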
A parabolic analogue of the higher-order comparison theorem of De Silva and Savin
NASA Astrophysics Data System (ADS)
Banerjee, Agnid; Garofalo, Nicola
2016-01-01
We show that the quotient of two caloric functions which vanish on a portion of the lateral boundary of an H^{k+α} domain is H^{k+α} up to the boundary for k ≥ 2. In the case k = 1, we show that the quotient is in H^{1+α} if the domain is assumed to be space-time C^{1,α} regular. This can be thought of as a parabolic analogue of a recent important result in [8], and we closely follow the ideas in that paper. We also give counterexamples showing that analogous results fail at points on the parabolic boundary which are not on the lateral boundary, i.e., points which are at the corner and base of the parabolic boundary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.
Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
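The single-step statistical merge described above amounts to combining repeated measurements of the same reciprocal-space voxel with inverse-variance weights rather than averaging after separate integrations; a minimal sketch with synthetic data:

```python
import numpy as np

# Inverse-variance weighted merge of repeated measurements of one
# reciprocal-space voxel taken at different sample orientations. This is
# the textbook statistically optimal combination; the data are synthetic.

def weighted_merge(counts, variances):
    """Return the inverse-variance weighted mean and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    merged = np.sum(w * np.asarray(counts, dtype=float)) / np.sum(w)
    return merged, 1.0 / np.sum(w)

# Three measurements of one voxel with different counting statistics;
# the noisier middle measurement is down-weighted.
value, var = weighted_merge([10.0, 12.0, 9.0], [1.0, 4.0, 1.0])
print(value, var)
```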
Chaotic Fluid Mixing in Crystalline Sphere Arrays
NASA Astrophysics Data System (ADS)
Turuban, R.; Lester, D. R.; Le Borgne, T.; Méheust, Y.
2017-12-01
We study the Lagrangian dynamics of steady 3D Stokes flow over simple cubic (SC) and body-centered cubic (BCC) lattices of close-packed spheres, and uncover the mechanisms governing chaotic mixing. Due to the cusp-shaped sphere contacts, the topology of the skin friction field is fundamentally different from that of continuous (non-granular) media (e.g. open pore networks), with significant implications for fluid mixing. Weak symmetry breaking of the flow orientation with respect to the lattice symmetries imparts a transition from regular to strong chaotic mixing in the BCC lattice, whereas the SC lattice only exhibits weak mixing. Whilst the SC and BCC lattices share the same symmetry point group, these differences are explained in terms of their space groups, and we find that a glide symmetry of the BCC lattice generates chaotic mixing. These insights are used to develop accurate predictions of the Lyapunov exponent distribution over the parameter space of mean flow orientation, and point to a general theory of mixing and dispersion based upon the inherent symmetries of arbitrary crystalline structures.
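The Lyapunov exponent referred to above quantifies the exponential stretching of fluid elements. As a minimal sketch of how such an exponent is estimated for any map (the logistic map stands in here for the much more complex Stokes-flow map; its exponent at r = 4 is known analytically to be ln 2):

```python
import math

# Estimate the Lyapunov exponent of a 1D map by averaging the log of the
# local stretching factor |f'(x)| along a long trajectory. The logistic
# map x -> r*x*(1-x) is used purely as a stand-in example.

def lyapunov_logistic(x0=0.3, r=4.0, n=200000):
    x, acc = x0, 0.0
    for _ in range(n):
        # |f'(x)| = |r * (1 - 2x)|; the max() guards against log(0)
        acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic())  # close to ln 2 ≈ 0.693
```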
Imaging objects behind small obstacles using axicon lens
NASA Astrophysics Data System (ADS)
Perinchery, Sandeep M.; Shinde, Anant; Murukeshan, V. M.
2017-06-01
Axicon lenses are conical prisms known to focus a light source to a line comprising multiple points along the optical axis. In this study, we analyze the potential of axicon lenses to view, image and record objects behind opaque obstacles in free space. The advantage of an axicon lens over a regular lens is demonstrated experimentally. Parameters such as obstacle size and the positions of the object and the obstacle in the context of imaging behind obstacles are tested using Zemax optical simulation. The proposed concept can easily be adapted to most optical imaging methods and microscopy modalities.
NASA Astrophysics Data System (ADS)
Setare, M. R.; Sahraee, M.
2013-12-01
In this paper, we investigate the behavior of linearized gravitational excitations in Born-Infeld gravity in AdS3 space. We obtain the linearized equation of motion and show that this higher-order gravity propagates two gravitons, one massless and one massive, on the AdS3 background. In contrast to the R2 models, such as TMG or NMG, Born-Infeld gravity does not have a critical point for any regular choice of parameters. Hence the logarithmic solution is not a solution of this model, and consequently one cannot find a logarithmic conformal field theory as a dual model for Born-Infeld gravity.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
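The sum-space idea above can be sketched as kernel ridge regression with a sum of two Gaussian kernels, so a single estimator captures both a smooth trend and a sharp local wiggle. Bandwidths and the regularization parameter below are illustrative, not the paper's choices:

```python
import numpy as np

# Kernel ridge regression with a sum kernel k = k_wide + k_narrow,
# a minimal sketch of regularized regression in a sum of Gaussian RKHSs.

def gaussian_kernel(X, Y, s):
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * s**2))

def fit_predict(x, y, x_test, s_wide=2.0, s_narrow=0.1, lam=1e-3):
    K = gaussian_kernel(x, x, s_wide) + gaussian_kernel(x, x, s_narrow)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    K_t = gaussian_kernel(x_test, x, s_wide) + gaussian_kernel(x_test, x, s_narrow)
    return K_t @ alpha

# Nonflat target: smooth trend plus a sharp, localized high-frequency wiggle.
x = np.linspace(0.0, 5.0, 200)
y = np.sin(x) + 0.5 * np.sin(20.0 * x) * np.exp(-((x - 2.5) ** 2))
pred = fit_predict(x, y, x)
print(float(np.max(np.abs(pred - y))))   # small residual on both components
```

The wide kernel alone would smooth away the wiggle; the narrow kernel alone would fit the trend poorly for small sample sizes. The sum handles both.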
On dynamical systems approaches and methods in f ( R ) cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se
We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR², α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.
Specific characteristics of negative corona currents generated in short point-plane gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen; Zhang, Bo; He, Jinliang
The Trichel pulse is a typical kind of negative corona current observed in electronegative gases with a highly regular form. The characteristics of the Trichel pulse, such as the repetition frequency, the amplitude of each pulse, and the mean current, depend on the discharge conditions. Many scholars have studied the mean current and the current-voltage characteristic of Trichel pulses, yet the specific characteristics of the pulses have barely been investigated. In this paper, a series of experiments was carried out in a short point-to-plane discharge gap to investigate the detailed characteristics of Trichel pulses. After numerical fitting of the experimental results was performed, a new set of empirical formulas was derived to predict the specific characteristics of the negative corona current under different conditions. Different from the existing literature, this paper uses as variables the average electric field intensity and the corona inception field intensity, which is independent of the gap spacing, in the empirical formulas. In the experiments, an inverse correlation between the amplitude and the repetition frequency of the pulses was observed. Based on an investigation of the remaining space charge in the discharge gap, this correlation is shown theoretically to be caused by the influence of space charges.
Perchoux, Camille; Kestens, Yan; Brondeel, Ruben; Chaix, Basile
2015-12-01
Understanding how built environment characteristics influence recreational walking is of the utmost importance to develop population-level strategies to increase levels of physical activity in a sustainable manner. This study analyzes the residential and non-residential environmental correlates of recreational walking, using precisely geocoded activity space data. The point-based locations regularly visited by 4365 participants of the RECORD Cohort Study (Residential Environment and CORonary heart Disease) were collected between 2011 and 2013 in the Paris region using the VERITAS software (Visualization and Evaluation of Regular Individual Travel destinations and Activity Spaces). Zero-inflated negative binomial regressions were used to investigate associations between both residential and non-residential environmental exposure and overall self-reported recreational walking over 7 days. Density of destinations, presence of a lake or waterway, and neighborhood education were associated with an increase in the odds of reporting any recreational walking time. Only the density of destinations was associated with an increase in time spent walking for recreational purpose. Considering the recreational locations visited (i.e., sports and cultural destinations) in addition to the residential neighborhood in the calculation of exposure improved the model fit and increased the environment-walking associations, compared to a model accounting only for the residential space (Akaike Information Criterion equal to 52797 compared to 52815). Creating an environment supportive to walking around recreational locations may particularly stimulate recreational walking among people willing to use these facilities. Copyright © 2015 Elsevier Inc. All rights reserved.
Mestayer, Mac; Christo, Steve; Taylor, Mark
2014-10-21
A device and method for characterizing the quality of a conducting surface. The device includes a gaseous ionizing chamber having centrally located inside the chamber a conducting sample to be tested, to which a negative potential is applied; a plurality of anode or "sense" wires spaced regularly about the central test wire; a plurality of "field" wires at a negative potential spaced regularly around the sense wires; and a plurality of "guard" wires at a positive potential spaced regularly around the field wires in the chamber. The method utilizes the device to measure emission currents from the conductor.
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
[Protection value evaluation of national wetland parks in Hunan Province, China].
Wu, Hou Jian; Dan, Xin Qiu; Liu, Shi Hao; Huang, Yan; Shu, Yong; Cao, Hong; Wu, Zhao Bai
2017-01-01
This paper put forward an evaluation index system comprising 5 aspects, namely ecological location and representation, biodiversity, species rarity, naturality, and scale and partition suitability, as well as 15 indicators, to assess the protection values of 60 national wetland parks in Hunan Province, China. The analytic hierarchy process (AHP) and the entropy method were used in this evaluation index system. There were 37 national wetland parks (accounting for 61.7%) with high protection values, scoring greater than or equal to 67.64 points, and 12 national wetland parks (accounting for 20.0%) with very high protection values, scoring greater than or equal to 77.72 points. Although the inter-annual variation showed little clear regularity, the values exhibited a decreasing trend in general. Spatially, the 70-point isogram divided the national wetland parks of Hunan Province into two high-score areas and three high-score points in the west and east, and one low-score area and four low-score points in the middle. Ecological location, resource endowment and scale were the decisive factors for the conservation values of the national wetland parks in Hunan Province.
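The entropy weighting mentioned above assigns larger weights to indicators that vary more across the evaluated items; a minimal sketch with a made-up 4-park, 3-indicator matrix (not the paper's data):

```python
import numpy as np

# Entropy weight method: indicators with more dispersion across items
# carry more information and receive larger weights. A constant indicator
# carries none and gets (near) zero weight.

def entropy_weights(X):
    """X: (items, indicators) matrix of strictly positive scores."""
    P = X / X.sum(axis=0)                            # item shares per indicator
    n = X.shape[0]
    E = -np.sum(P * np.log(P), axis=0) / np.log(n)   # normalized entropy in [0, 1]
    d = 1.0 - E                                      # degree of diversification
    return d / d.sum()

X = np.array([[0.9, 0.3, 0.5],
              [0.8, 0.9, 0.5],
              [0.7, 0.2, 0.5],
              [0.9, 0.8, 0.5]])
w = entropy_weights(X)
print(w)   # third (constant) indicator gets essentially zero weight
```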
Point Clouds to Indoor/outdoor Accessibility Diagnosis
NASA Astrophysics Data System (ADS)
Balado, J.; Díaz-Vilariño, L.; Arias, P.; Garrido, I.
2017-09-01
This work presents an approach to automatically detect structural floor elements such as steps or ramps in the immediate environment of buildings, elements that may affect the accessibility of buildings. The methodology is based on Mobile Laser Scanner (MLS) point clouds and trajectory information. First, the street is segmented into stretches along the trajectory of the MLS in order to work in regular spaces. Next, the lower region of each stretch (the ground zone) is selected as the ROI, and the normal, curvature and tilt are calculated for each point. With this information, points in the ROI are classified as horizontal, inclined or vertical. Points are refined and grouped into structural elements using raster processing and connected components, in different phases for each type of previously classified points. Finally, the trajectory data are used to distinguish between road and sidewalks. Adjacency information is used to classify structural elements into steps, ramps, curbs and curb-ramps. The methodology is tested in a real case study, consisting of 100 m of an urban street. Ground elements are correctly classified in an acceptable computation time. Steps and ramps are also exported to GIS software to enrich building models from Open Street Map with information about accessible/inaccessible entrances and their locations.
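The point-labelling step described above can be sketched by comparing each point's normal with the vertical: the tilt angle decides whether the point belongs to a horizontal, inclined or vertical surface. The 15°/70° thresholds below are illustrative, not taken from the paper:

```python
import numpy as np

# Classify points as horizontal / inclined / vertical from the tilt of
# their surface normals relative to the vertical (z) axis. Thresholds
# are illustrative.

def classify_by_tilt(normals, incl_deg=15.0, vert_deg=70.0):
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    tilt = np.degrees(np.arccos(np.clip(np.abs(n[:, 2]), 0.0, 1.0)))
    return np.where(tilt < incl_deg, "horizontal",
                    np.where(tilt < vert_deg, "inclined", "vertical"))

normals = np.array([[0.0, 0.0, 1.0],    # pavement
                    [0.0, 0.5, 1.0],    # ramp surface (~27° tilt)
                    [1.0, 0.0, 0.0]])   # curb face
print(classify_by_tilt(normals))
```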
Shape regularized active contour based on dynamic programming for anatomical structure segmentation
NASA Astrophysics Data System (ADS)
Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra
2005-04-01
We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enable building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space in a gradient-descent fashion to the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished through modifying the search space of the DP. The modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure the consistency of the resulting shape with the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation.
Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints; and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
NASA Astrophysics Data System (ADS)
Vignati, F.; Guardone, A.
2017-11-01
An analytical model for the evolution of regular reflections of cylindrical converging shock waves over circular-arc obstacles is proposed. The model is based on a new (local) parameter, the perceived wedge angle, which replaces the (global) wedge angle of planar surfaces and accounts for the time-dependent curvature of both the shock and the obstacle at the reflection point. The new model compares fairly well with numerical results. Results from numerical simulations of the regular to Mach transition (eventually occurring further downstream along the obstacle) point to the perceived wedge angle as the most significant parameter for identifying regular to Mach transitions. Indeed, at the transition point, the value of the perceived wedge angle lies between 39° and 42° for all investigated configurations, whereas, e.g., the absolute local wedge angle varies between 10° and 45° in the same conditions.
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Modeling solvation effects in real-space and real-time within density functional approaches
NASA Astrophysics Data System (ADS)
Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea
2015-10-01
The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
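The regularization described above replaces a point charge's 1/r Coulomb singularity on the real-space grid by the finite potential of the same charge spread as a spherical Gaussian, which in atomic units is erf(r / (√2 σ)) / r. A minimal numerical sketch (σ is an illustrative width, not the paper's choice):

```python
import math

# Potential of a unit charge spread as a spherical Gaussian of width
# sigma (atomic units): erf(r / (sqrt(2) * sigma)) / r. It agrees with
# 1/r far from the charge but stays finite at the origin, unlike the
# bare Coulomb potential.

def coulomb(r):
    return 1.0 / r

def gaussian_charge_potential(r, sigma=0.5):
    if r == 0.0:
        return math.sqrt(2.0 / math.pi) / sigma   # finite r -> 0 limit
    return math.erf(r / (math.sqrt(2.0) * sigma)) / r

for r in (0.0, 0.1, 1.0, 5.0):
    print(r, gaussian_charge_potential(r))
# Far from the charge the two potentials agree (at r = 5, both ≈ 0.2);
# at the origin the regularized one stays finite.
```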
Quantum signature of chaos and thermalization in the kicked Dicke model
NASA Astrophysics Data System (ADS)
Ray, S.; Ghosh, A.; Sinha, S.
2016-09-01
We study the quantum dynamics of the kicked Dicke model (KDM) in terms of the Floquet operator, and we analyze the connection between chaos and thermalization in this context. The Hamiltonian map is constructed by suitably taking the classical limit of the Heisenberg equation of motion to study the corresponding phase-space dynamics, which shows a crossover from regular to chaotic motion by tuning the kicking strength. The fixed-point analysis and calculation of the Lyapunov exponent (LE) provide us with a complete picture of the onset of chaos in phase-space dynamics. We carry out a spectral analysis of the Floquet operator, which includes a calculation of the quasienergy spacing distribution and structural entropy to show the correspondence to the random matrix theory in the chaotic regime. Finally, we analyze the thermodynamics and statistical properties of the bosonic sector as well as the spin sector, and we discuss how such a periodically kicked system relaxes to a thermalized state in accordance with the laws of statistical mechanics.
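The crossover from regular to chaotic motion with increasing kicking strength can be diagnosed with a tangent-map Lyapunov-exponent calculation. The sketch below uses the Chirikov standard map as a hypothetical stand-in for the classical limit of a kicked system (the actual Hamiltonian map of the KDM is more involved); the kicking strength K plays the same role:

```python
import math

def standard_map_lyapunov(K, x0=0.5, p0=0.5, n_steps=20000):
    """Largest Lyapunov exponent of the Chirikov standard map
    p' = p + K sin(x), x' = x + p' (mod 2*pi).

    A tangent vector is evolved with the Jacobian of the map,
    renormalized each step, and the average log growth is returned.
    """
    x, p = x0, p0
    dx, dp = 1.0, 0.0          # tangent vector
    total = 0.0
    for _ in range(n_steps):
        c = K * math.cos(x)
        # tangent map: Jacobian of the step below, evaluated at current x
        dx, dp = (1.0 + c) * dx + dp, c * dx + dp
        # map step
        p = p + K * math.sin(x)
        x = (x + p) % (2.0 * math.pi)
        norm = math.hypot(dx, dp)
        total += math.log(norm)
        dx, dp = dx / norm, dp / norm
    return total / n_steps
```

For strong kicking the exponent approaches the well-known estimate ln(K/2), while for weak kicking a typical orbit lies on a torus and the finite-time exponent decays toward zero.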
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data in which subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for the parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with a homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that statistical inference using the partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a Wisconsin plantation.
Morphology and spacing of river meander scrolls
NASA Astrophysics Data System (ADS)
Strick, Robert J. P.; Ashworth, Philip J.; Awcock, Graeme; Lewin, John
2018-06-01
Many of the world's alluvial rivers are characterised by single or multiple channels that are often sinuous and that migrate to produce a mosaicked floodplain landscape of truncated scroll (or point) bars. Surprisingly little is known about the morphology and geometry of scroll bars despite increasing interest from hydrocarbon geoscientists working with ancient large meandering river deposits. This paper uses remote sensing imagery, LiDAR data-sets of meandering scroll bar topography, and global coverage elevation data to quantify scroll bar geometry, anatomy, relief, and spacing. The analysis focuses on preserved scroll bars in the Mississippi River (USA) floodplain but also compares attributes to 19 rivers of different scale and depositional environments from around the world. Analysis of 10 large scroll bars (median area = 25 km2) on the Mississippi shows that the point bar deposits can be categorised into three different geomorphological units of increasing scale: individual 'scrolls', 'depositional packages', and 'point bar complexes'. Scroll heights and curvatures are greatest near the modern channel and at the terminating boundaries of different depositional packages, confirming the importance of the formative main channel on subsequent scroll bar relief and shape. Fourier analysis shows a periodic variation in signal (scroll bar height) with an average period (spacing) of 167 m (range 150-190 m) for the Mississippi point bars. For other rivers, a strong relationship exists between the period of scroll bars and the adjacent primary channel width for a range of rivers from 55 to 2042 m wide: scroll bar spacing is approximately 50% of the main channel width. The strength of this correlation over nearly two orders of magnitude of channel size indicates a scale independence of scroll bar spacing and suggests a strong link between channel migration and scroll bar construction with apparent regularities despite different flow regimes.
This investigation of meandering river dynamics and floodplain patterns shows that it is possible to develop a suite of metrics that describe scroll bar morphology and geometry that can be valuable to geoscientists predicting the heterogeneity of subsurface meandering deposits.
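The Fourier step described above, recovering a characteristic scroll spacing from a regularly sampled height transect, can be sketched as follows. A plain O(n^2) DFT is sufficient for short transects; the 167 m period and 5 m sampling used below are illustrative synthetic values, not the paper's data:

```python
import cmath
import math

def dominant_period(signal, dx):
    """Dominant spatial period of a transect sampled every dx metres.

    The mean is removed so the zero-frequency bin does not dominate;
    the wavelength of the strongest DFT bin is returned.
    """
    n = len(signal)
    mean = sum(signal) / n
    s = [v - mean for v in signal]
    best_k, best_amp = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(s[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        if abs(coeff) > best_amp:
            best_k, best_amp = k, abs(coeff)
    return n * dx / best_k

# Synthetic transect: scroll-like ridges every 167 m, sampled every 5 m.
heights = [math.sin(2.0 * math.pi * x / 167.0) for x in range(0, 2000, 5)]
```

On real topography one would detrend the transect toward the channel (scroll height decays away from it) before taking the transform.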
NASA Astrophysics Data System (ADS)
Morgenthaler, George W.; Stodieck, Louis
1999-01-01
The International Space Station (ISS) is the linchpin of NASA's future space plans. It emphasizes scientific research by providing a world-class scientific laboratory in which to perform long-term basic science experiments in the space environment of microgravity, radiation, vacuum, vantage-point, etc. It will serve as a test-bed for determining human system response to long-term space flight and for developing the life support equipment necessary for NASA's Human Exploration and Development of Space (HEDS) enterprise. The ISS will also provide facilities (up to 30% of the U.S. module) for testing material, agricultural, cellular, human, aquatic, and plant/animal systems to reveal phenomena heretofore shrouded by the veil of 1-g. These insights will improve life on Earth and will provide a commercial basis for new products and services. In fact, some products, e.g., rare metal-alloys, semiconductor chips, or protein crystals that cannot now be produced on Earth may be found to be sufficiently valuable to be manufactured on-orbit. Biotechnology, pharmaceutical and biomedical experiments have been regularly flown on 10-16 day Space Shuttle flights and on three-month Mir flights for basic science knowledge and for life support system and commercial product development. Since 1985, NASA has created several Commercial Space Centers (CSCs) for the express purpose of bringing university, government and industrial researchers together to utilize space flight and space technology to develop new industrial products and processes. BioServe Space Technologies at the University of Colorado at Boulder and Kansas State University, Manhattan, Kansas, is such a NASA-sponsored CSC that has worked with over 65 companies and institutions in the biotech sector in the past 11 years and has successfully discovered and transferred new product and process information to its industry partners.
While tests in the space environment have been limited to about two weeks on Shuttle or a few months on Mir, tests on ISS can be performed over many months, or even years. More importantly, a test can be regularly scheduled so that the effects of microgravity and other space environment parameters can be thoroughly researched and quantified. This paper attempts to envision the potential benefits of this soon-to-be-available orbital laboratory and the broad commercial utilization of ISS that will likely occur.
Identifying residential neighbourhood types from settlement points in a machine learning approach.
Jochem, Warren C; Bird, Tomas J; Tatem, Andrew J
2018-05-01
Remote sensing techniques are now commonly applied to map and monitor urban land uses to measure growth and to assist with development and planning. Recent work in this area has highlighted the use of textures and other spatial features that can be measured in very high spatial resolution imagery. Far less attention has been given to using geospatial vector data (i.e. points, lines, polygons) to map land uses. This paper presents an approach to distinguish residential settlement types (regular vs. irregular) using an existing database of settlement points locating structures. Nine data features describing the density, distance, angles, and spacing of the settlement points are calculated at multiple spatial scales. These data are analysed alone and with five common remote sensing measures on elevation, slope, vegetation, and nighttime lights in a supervised machine learning approach to classify land use areas. The method was tested in seven provinces of Afghanistan (Balkh, Helmand, Herat, Kabul, Kandahar, Kunduz, Nangarhar). Overall accuracy ranged from 78% in Kandahar to 90% in Nangarhar. This research demonstrates the potential to accurately map land uses from even the simplest representation of structures.
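The spacing features used to separate regular from irregular settlements can be sketched with the simplest case: nearest-neighbour distance statistics over the settlement points. The two features below (mean and coefficient of variation of nearest-neighbour distance) are an illustrative reduction of the paper's nine-feature, multi-scale design, not the actual feature set:

```python
import math

def nn_features(points):
    """Spacing features for settlement points given as (x, y) pairs.

    Returns (mean nearest-neighbour distance, coefficient of variation).
    A low CV suggests regular, planned spacing; a high CV suggests
    irregular growth. Brute force is fine for a few hundred points.
    """
    dists = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        dists.append(d)
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    return mean, math.sqrt(var) / mean
```

In a full pipeline such features, computed at several window sizes, would be stacked with the remote-sensing covariates and fed to a supervised classifier.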
Lighting Simulation and Design Program (LSDP)
NASA Astrophysics Data System (ADS)
Smith, D. A.
This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light can be approximated by a point source. A data base of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area and isolux contour plots are generated. The numerical and graphical outputs for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
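The point-source calculation described above can be sketched directly: for a luminaire mounted at height h above the illuminated plane, the horizontal illuminance at a grid point is E = I cos(theta) / d^2 with cos(theta) = h/d. A minimal sketch, where a constant intensity I stands in for the program's tabulated photometric data:

```python
import math

def illuminance_grid(grid_w, grid_h, spacing, luminaires, mount_h, intensity):
    """Horizontal illuminance at regularly spaced grid points.

    luminaires: (x, y) positions of point sources at height mount_h.
    E = I * (h / d) / d^2: inverse-square law with a cosine correction.
    Returns a dict mapping (x, y) grid coordinates to illuminance.
    """
    values = {}
    for gx in range(0, grid_w + 1, spacing):
        for gy in range(0, grid_h + 1, spacing):
            e = 0.0
            for lx, ly in luminaires:
                d = math.sqrt((gx - lx) ** 2 + (gy - ly) ** 2 + mount_h ** 2)
                e += intensity * (mount_h / d) / d ** 2
            values[(gx, gy)] = e
    return values
```

Isolux contours are then just level sets of this grid of values.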
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimate, Kriging is the optimal interpolation method in statistical terms. The Kriging interpolation algorithm produces an unbiased prediction and also yields the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced in the directional Kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. Also, the proposed method iteratively loads small portions of the areas of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor.
Comparison between Kriging and other standard interpolation methods demonstrated more accurate estimates, particularly for less dense data files.
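The key speed-up, constant-time neighbour lookup on a regular grid, can be sketched with index arithmetic: the nodes surrounding a query point are found directly from the coordinates, with no spatial search. For brevity this sketch substitutes bilinear weights for kriging weights derived from a fitted variogram; the neighbour-finding logic is the part that carries over:

```python
def interp_regular_grid(grid, x0, y0, dx, dy, px, py):
    """Interpolate at (px, py) from values on a regular grid.

    grid[j][i] holds the value at (x0 + i*dx, y0 + j*dy). On a regular
    grid the neighbourhood search that dominates kriging cost reduces
    to O(1) index arithmetic; here the four surrounding nodes are then
    combined with bilinear weights (a stand-in for variogram-derived
    kriging weights).
    """
    i = int((px - x0) / dx)
    j = int((py - y0) / dy)
    tx = (px - (x0 + i * dx)) / dx
    ty = (py - (y0 + j * dy)) / dy
    return ((1.0 - tx) * (1.0 - ty) * grid[j][i]
            + tx * (1.0 - ty) * grid[j][i + 1]
            + (1.0 - tx) * ty * grid[j + 1][i]
            + tx * ty * grid[j + 1][i + 1])
```

Pairwise distances between these neighbours are likewise fixed multiples of (dx, dy), so the variogram evaluations in a real directional-kriging implementation can be precomputed once per grid.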
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. 
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
The rotation axis for stationary and axisymmetric space-times
NASA Astrophysics Data System (ADS)
van den Bergh, N.; Wils, P.
1985-03-01
A set of 'extended' regularity conditions is discussed which have to be satisfied on the rotation axis if the latter is assumed to be also an axis of symmetry. For a wide class of energy-momentum tensors these conditions can only hold at the origin of the Weyl canonical coordinate. For static and cylindrically symmetric space-times the conditions can be derived from the regularity of the Riemann tetrad coefficients on the axis. For stationary space-times, however, the extended conditions do not necessarily hold, even when 'elementary flatness' is satisfied and when there are no curvature singularities on the axis. The result by Davies and Caplan (1971) for cylindrically symmetric stationary Einstein-Maxwell fields is generalized by proving that only Minkowski space-time and a particular magnetostatic solution possess a regular axis of rotation. Further, several sets of solutions for neutral and charged, rigidly and differentially rotating dust are discussed.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
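The black-box Monte-Carlo divergence estimate at the heart of the SURE scheme can be sketched in one dimension with a soft-threshold denoiser standing in for a full MRI reconstruction algorithm (the noise level sigma and the threshold below are illustrative):

```python
import math
import random

def soft_threshold(y, t):
    """Soft-thresholding denoiser (an l1-regularized estimate), used
    here as a stand-in for an arbitrary black-box reconstruction."""
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in y]

def mc_divergence(f, y, eps=1e-4):
    """Monte-Carlo divergence: div f(y) ~ b . (f(y + eps*b) - f(y)) / eps
    with a random +/-1 probe b. Needs only the algorithm's output."""
    b = [random.choice((-1.0, 1.0)) for _ in y]
    fy = f(y)
    fyp = f([v + eps * bi for v, bi in zip(y, b)])
    return sum(bi * (fp - fv) for bi, fp, fv in zip(b, fyp, fy)) / eps

def sure_estimate(y, f, sigma):
    """SURE for the risk ||f(y) - x||^2 of denoising y = x + N(0, sigma^2):
    -n*sigma^2 + ||f(y) - y||^2 + 2*sigma^2 * div f(y)."""
    fy = f(y)
    resid = sum((fv - v) ** 2 for fv, v in zip(fy, y))
    return (-len(y) * sigma ** 2 + resid
            + 2.0 * sigma ** 2 * mc_divergence(f, y))
```

Minimizing sure_estimate over the threshold (or, in the paper's setting, the regularization parameter) gives a data-driven parameter choice without access to the ground truth. For soft thresholding the divergence is exactly the number of entries above threshold, which makes the estimator easy to check.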
Boudreau, Mathieu; Pike, G Bruce
2018-05-07
To develop and validate a regularization approach for optimizing B1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B1-sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB+B1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B1 inaccuracies, tissues). The B1-regularized CRLB optimization protocol resulted in the best robustness of F against B1 errors, for a wide range of SNR and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B1 errors ranging between -10 and 20%, the range of B1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σF values for all SNRs, and these did not increase in the presence of B1 inaccuracies. This work demonstrates a regularized optimization approach for reducing the sensitivity of qMT parameters, particularly the pool-size ratio (F), to auxiliary measurements (e.g., B1). Given the substantially lower B1 sensitivity predicted for protocols optimized with this method, B1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.
The effect of dose reduction on the detection of anatomical structures on panoramic radiographs.
Kaeppler, G; Dietz, K; Reinert, S
2006-07-01
The aim was to evaluate the effect of dose reduction on diagnostic accuracy using different screen-film combinations and digital techniques for panoramic radiography. Five observers assessed 201 pairs of panoramic radiographs (a total of 402 panoramic radiographs) taken with the Orthophos Plus (Sirona, Bensheim, Germany), for visualization of 11 anatomical structures on each side, using a 3-point scale -1, 0 and 1. Two radiographs of each patient were taken at two different times (conventional setting and setting with decreased dose, done by increasing tube potential settings or halving tube current). To compare the dose at different tube potential settings dose-length product was measured at the secondary collimator. Films with medium and regular intensifying screens (high and low tube potential settings) and storage phosphor plates (low tube potential setting, tube current setting equivalent to regular intensifying screen and halved) were compared. The five observers made 27 610 assessments. Intrarater agreement was expressed by Cohen's kappa coefficient. The results demonstrated an equivalence of regular screens (low tube potential setting) and medium screens (high and low tube potential settings). A significant difference existed between medium screens (low tube potential setting, mean score 0.92) and the group of regular film-screen combinations at high tube potential settings (mean score 0.89) and between all film-screen combinations and the digital system irrespective of exposure (mean score below 0.82). There were no significant differences between medium and regular screens (mean score 0.88 to 0.92) for assessment of the periodontal ligament space, but there was a significant difference compared with the digital system (mean score below 0.76). The kappa coefficient for intrarater agreement was moderate (0.55). New regular intensifying screens can replace medium screens at low tube potential settings. 
Digital panoramic radiographs should be taken at low tube potential levels with an exposure equivalent at least to a regular intensifying screen.
Instationary Generalized Stokes Equations in Partially Periodic Domains
NASA Astrophysics Data System (ADS)
Sauer, Jonas
2018-06-01
We consider an instationary generalized Stokes system with nonhomogeneous divergence data under a periodic condition in only some directions. The problem is set in the whole space, the half space or in (after an identification of the periodic directions with a torus) bounded domains with sufficiently regular boundary. We show unique solvability for all times in Muckenhoupt weighted Lebesgue spaces. The divergence condition is dealt with by analyzing the associated reduced Stokes system and in particular by showing maximal regularity of the partially periodic reduced Stokes operator.
Enumeration of Extended m-Regular Linear Stacks.
Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian
2016-12-01
The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of m-regular linear stacks, where each arc has length at least m and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended m-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closer to real protein contact maps. Denote the generating functions of the m-regular linear stacks and the extended m-regular linear stacks by [Formula: see text] and [Formula: see text], respectively. We show that [Formula: see text] can be written as a rational function of [Formula: see text]. For a certain [Formula: see text], by eliminating [Formula: see text], we obtain an equation satisfied by [Formula: see text] and derive the asymptotic formula of the numbers of extended m-regular linear stacks of length [Formula: see text].
Does chaos assist localization or delocalization?
Tan, Jintao; Lu, Gengbiao; Luo, Yunrong; Hai, Wenhua
2014-12-01
Aiming at a long-standing contradiction between chaos-assisted tunneling and chaos-related localization, we study quantum transport of a single particle held in an amplitude-modulated and tilted optical lattice. We find some near-resonant regions crossing chaotic and regular regions in the parameter space, and demonstrate that chaos can heighten the velocity of delocalization in the chaos-resonance overlapping regions, while chaos may aid localization in the other chaotic regions. The degree of localization increases with the distance between the parameter points and the near-resonant regions. The results could be useful for experimentally manipulating chaos-assisted transport of single particles in optical or solid-state lattices.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.
1981-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.
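The core operation, two-dimensional cyclic correlation of the echo data with a point-target response, can be written down directly for small arrays. The direct O(N^4) form shown here is what the fast polynomial transform (or an FFT) accelerates to O(N^2 log N); this toy sketch is for illustration only, not a SAR processor:

```python
def cyclic_correlation_2d(data, kernel):
    """Two-dimensional cyclic (circular) correlation.

    Indices wrap modulo the array size, so the result is the same as
    the transform-domain computation used in the SAR processor:
    out[i][j] = sum_{a,b} data[(i+a) % n][(j+b) % m] * kernel[a][b].
    """
    n, m = len(data), len(data[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            out[i][j] = sum(data[(i + a) % n][(j + b) % m] * kernel[a][b]
                            for a in range(n) for b in range(m))
    return out
```

Correlating with a delta kernel returns the data unchanged; a shifted delta cyclically shifts it, which is a convenient correctness check.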
Plate falling in a fluid: Regular and chaotic dynamics of finite-dimensional models
NASA Astrophysics Data System (ADS)
Kuznetsov, Sergey P.
2015-05-01
Results are reviewed concerning the planar problem of a plate falling in a resisting medium studied with models based on ordinary differential equations for a small number of dynamical variables. A unified model is introduced to conduct a comparative analysis of the dynamical behaviors of models of Kozlov, Tanabe-Kaneko, Belmonte-Eisenberg-Moses and Andersen-Pesavento-Wang using common dimensionless variables and parameters. It is shown that the overall structure of the parameter spaces for the different models manifests certain similarities caused by the same inherent symmetry and by the universal nature of the phenomena involved in nonlinear dynamics (fixed points, limit cycles, attractors, and bifurcations).
Paparo, M.; Benko, J. M.; Hareter, M.; ...
2016-06-17
A sequence search method was developed for searching for regular frequency spacing in δ Scuti stars by visual inspection (VI) and algorithmic search. The sample contains 90 δ Scuti stars observed by CoRoT. An example is given to represent the VI. The algorithm (SSA) is described in detail. The data treatment of the CoRoT light curves, the criteria for frequency filtering, and the spacings derived by two methods (i.e., three approaches: VI, SSA, and FT) are given for each target. Echelle diagrams are presented for 77 targets for which at least one sequence of regular spacing was identified. Comparing the spacing and the shifts between pairs of echelle ridges revealed that at least one pair of echelle ridges is shifted to midway between the spacing for 22 stars. The estimated rotational frequencies compared to the shifts revealed rotationally split doublets, triplets, and multiplets not only for single frequencies, but for the complete echelle ridges in 31 δ Scuti stars. Furthermore, using several possible assumptions for the origin of the spacings, we derived the large separation ($\Delta\nu$).
Stability Properties of the Regular Set for the Navier-Stokes Equation
NASA Astrophysics Data System (ADS)
D'Ancona, Piero; Lucà, Renato
2018-06-01
We investigate the size of the regular set for small perturbations of some classes of strong large solutions to the Navier-Stokes equation. We consider perturbations of the data that are small in suitable weighted L2 spaces but can be arbitrarily large in any translation invariant Banach space. We give similar results in the small data setting.
UAV photogrammetry for topographic monitoring of coastal areas
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Henriques, R.
2015-06-01
Coastal areas suffer degradation due to the action of the sea and other natural and human-induced causes. Topographical changes in beaches and sand dunes need to be assessed, both after severe events and on a regular basis, to build models that can predict the evolution of these natural environments. This is an important application for airborne LIDAR, and conventional photogrammetry is also being used for regular monitoring programs of sensitive coastal areas. This paper analyses the use of unmanned aerial vehicles (UAV) to map and monitor sand dunes and beaches. A very light plane (SwingletCam) equipped with a very cheap, non-metric camera was used to acquire images with ground resolutions better than 5 cm. The Agisoft Photoscan software was used to orientate the images, extract point clouds, build a digital surface model and produce orthoimage mosaics. The processing, which includes automatic aerial triangulation with camera calibration and subsequent model generation, was mostly automated. To achieve the best positional accuracy for the whole process, signalised ground control points were surveyed with a differential GPS receiver. Two very sensitive test areas on the Portuguese northwest coast were analysed. Detailed DSMs were obtained with 10 cm grid spacing and vertical accuracy (RMS) ranging from 3.5 to 5.0 cm, which is very similar to the image ground resolution (3.2-4.5 cm). Where possible to assess, the planimetric accuracy of the orthoimage mosaics was found to be subpixel. Within the regular coastal monitoring programme being carried out in the region, UAVs can replace many of the conventional flights, with considerable gains in the cost of the data acquisition and without any loss in the quality of topographic and aerial imagery data.
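The DSM quality figure quoted above is the vertical RMS error against surveyed check points, which is a one-line computation. The heights below are made-up numbers, not the Portuguese-coast data:

```python
import numpy as np

def vertical_rms(dsm_z, checkpoint_z):
    """Root-mean-square of elevation differences between DSM heights
    and independently surveyed check-point heights (same units)."""
    d = np.asarray(dsm_z, float) - np.asarray(checkpoint_z, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Three hypothetical check points with centimetre-level differences.
rms = vertical_rms([10.02, 9.97, 10.04], [10.0, 10.0, 10.0])
```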
A (very) Simple Model for the Aspect Ratio of High-Order River Basins
NASA Astrophysics Data System (ADS)
Shelef, E.
2017-12-01
The structure of river networks dictates the distribution of elevation, water, and sediments across Earth's surface. Despite its intricate shape, the structure of high-order river networks displays some surprising regularities such as the consistent aspect ratio (i.e., basin's width over length) of river basins along linear mountain fronts. This ratio controls the spacing between high-order channels as well as the spacing between the depositional bodies they form. It is generally independent of tectonic and climatic conditions and is often attributed to the initial topography over which the network was formed. This study shows that a simple, cross-like channel model explains this ratio via a requirement for equal elevation gain between the outlets and drainage-divides of adjacent channels at topographic steady state. This model also explains the dependence of aspect ratio on channel concavity and the location of the widest point on a drainage divide.
Maximum entropy deconvolution of the optical jet of 3C 273
NASA Technical Reports Server (NTRS)
Evans, I. N.; Ford, H. C.; Hui, X.
1989-01-01
The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
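As an illustration of removing a known point spread function, here is a Richardson-Lucy iteration, which is a different positivity-preserving deconvolution scheme from the maximum-entropy method used in the paper. Cyclic boundaries via FFT; grid size, PSF, and source are toy values:

```python
import numpy as np

def conv(a, k_hat):
    """Cyclic convolution of image a with a kernel given by its FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * k_hat))

def richardson_lucy(data, psf, iters=100):
    """Richardson-Lucy deconvolution (NOT maximum entropy; shown only
    as a related iterative, flux-preserving restoration)."""
    k_hat = np.fft.fft2(psf)
    est = np.full_like(data, data.mean())  # flat positive start
    for _ in range(iters):
        ratio = data / np.maximum(conv(est, k_hat), 1e-12)
        est = est * conv(ratio, np.conj(k_hat))  # adjoint blur of ratio
    return est

# Normalized Gaussian PSF centred at index (0, 0) on a 16x16 cyclic grid.
n = 16
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
r2 = ((y + n // 2) % n - n // 2) ** 2 + ((x + n // 2) % n - n // 2) ** 2
psf = np.exp(-r2 / 2.0)
psf /= psf.sum()

truth = np.zeros((n, n))
truth[8, 8] = 1.0                          # a point source
blurred = conv(truth, np.fft.fft2(psf))    # what the telescope sees
restored = richardson_lucy(blurred, psf)
```

The iteration conserves total flux and progressively sharpens the blurred point source, which is the qualitative behaviour the maximum-entropy restoration exploits for the knots of the jet.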
NIFTY - Numerical Information Field Theory. A versatile PYTHON library for signal inference
NASA Astrophysics Data System (ADS)
Selig, M.; Bell, M. R.; Junklewitz, H.; Oppermann, N.; Reinecke, M.; Greiner, M.; Pachajoa, C.; Enßlin, T. A.
2013-06-01
NIFTy (Numerical Information Field Theory) is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTy offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. The correct normalization of operations on fields is thereby taken care of automatically, without burdening the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTy permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real-world problems. The set of spaces on which NIFTy operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package is demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined. NIFTy homepage http://www.mpa-garching.mpg.de/ift/nifty/; Excerpts of this paper are part of the NIFTy source code and documentation.
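The Wiener filter demo mentioned above can be sketched without the library itself. A minimal 1-D version in plain NumPy, assuming white signal and noise covariances with toy variances (not values from the paper; NIFTy's contribution is making this independent of grid and resolution):

```python
import numpy as np

# d = s + n, with known diagonal (white) covariances S and N.
rng = np.random.default_rng(0)
n_pix, S, N = 256, 1.0, 1.0            # illustrative variances
signal = rng.normal(0.0, np.sqrt(S), n_pix)
data = signal + rng.normal(0.0, np.sqrt(N), n_pix)

# Wiener-filter mean: m = S (S + N)^{-1} d  (scalar per pixel here)
m = S / (S + N) * data

# The filtered estimate should beat the raw data as a signal estimate.
err_wiener = np.mean((m - signal) ** 2)
err_raw = np.mean((data - signal) ** 2)
```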
Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung
2005-05-01
An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application to three-dimensional audio, is presented. An array focusing scheme is developed on the basis of inverse propagation relating the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and an optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
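Per frequency, the Tikhonov-regularized inverse-filter design reduces to a ridge least-squares solve. A minimal real-valued sketch with an illustrative 2x2 plant matrix (an actual CCS design works with complex frequency responses and many more transducers):

```python
import numpy as np

def tikhonov_solve(H, d, beta):
    """Solve min_x ||H x - d||^2 + beta ||x||^2 via the normal
    equations (H^H H + beta I) x = H^H d."""
    n = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + beta * np.eye(n), H.conj().T @ d)

# Toy plant: two speakers, two control points (ears), mild cross-talk.
H = np.array([[1.0, 0.3],
              [0.3, 1.0]])
d = np.array([1.0, 0.0])     # deliver signal to one ear only
x = tikhonov_solve(H, d, beta=1e-3)
```

Small `beta` reproduces the exact inverse filter; larger `beta` shrinks the filter gains, trading cancellation depth for robustness, which is the compromise the Taguchi/GA search in the paper optimizes.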
Cheung, N Wt; Cheung, Y W; Chen, X
2016-06-01
To examine the effects of a permissive attitude towards regular and occasional drug use, life satisfaction, self-esteem, depression, and other psychosocial variables on the drug use of psychoactive drug users. Psychosocial factors that might affect a permissive attitude towards regular/occasional drug use and life satisfaction were further explored. We analysed data from a longitudinal survey of psychoactive drug users in Hong Kong who were interviewed at 6 time points at 6-month intervals between January 2009 and December 2011. Data from the second to the sixth time points were stacked into an individual time point structure. Random-effects probit regression analysis was performed to estimate the relative contribution of the independent variables to the binary dependent variable of drug use in the last 30 days. A permissive attitude towards drug use, life satisfaction, and depression at the concurrent time point, and self-esteem at the previous time point, had direct effects on drug use in the last 30 days. Interestingly, permissiveness towards occasional drug use was a stronger predictor of drug use than permissiveness towards regular drug use. These 2 permissive attitude variables were affected by the belief that doing extreme things shows the vitality of young people (at the concurrent time point), life satisfaction (at the concurrent time point), and self-esteem (at the concurrent and previous time points). Life satisfaction was affected by a sense of uncertainty about the future (at the concurrent time point), self-esteem (at the concurrent time point), depression (at both the concurrent and previous time points), and being stricken by stressful events (at the previous time point). A number of psychosocial factors could affect the continuation or discontinuation of drug use, as well as the permissive attitude towards regular and occasional drug use, and life satisfaction.
Implications of the findings for prevention and intervention work targeted at psychoactive drug users are discussed.
When the Sun Gets in the Way: Stereo Science Observations on the Far Side of the Sun
NASA Astrophysics Data System (ADS)
Vourlidas, A.; Thompson, W. T.; Gurman, J. B.; Luhmann, J. G.; Curtis, D. W.; Schroeder, P. C.; Mewaldt, R. A.; Davis, A. J.; Wortman, K.; Russell, C. T.; Galvin, A. B.; Popecki, M.; Kistler, L. M.; Ellis, L.; Howard, R.; Rich, N.; Hutting, L.; Maksimovic, M.; Bale, S. D.; Goetz, K.
2014-12-01
With the two STEREO spacecraft on the opposite side of the Sun from Earth, pointing the high-gain antenna at Earth means that it is also pointed very close to the Sun. This has resulted in unexpectedly high temperatures in the antenna feed horns on both spacecraft, and is forcing the mission operations team to take corrective action, starting in August 2014 for STEREO Ahead and December 2014 for STEREO Behind. By off-pointing the antennas to use one of the lower-power side lobes instead of the main lobe, the feed horn temperatures can be kept at a safe level while still allowing reliable communication with the spacecraft. However, the amount of telemetry that can be brought down will be greatly reduced. Even so, significant science will still be possible from STEREO's unique position on the solar far side. We will discuss the science and space weather products that will be available from each STEREO instrument, when those products will be available, and how they will be used. Some data, including the regular space weather beacon products, will be brought down for an average of a few hours each day during the daily real-time passes, while the in situ and radio beacon data will be stored on the onboard recorder to provide continuous 24-hour coverage for eventual downlink once the spacecraft is back to normal operations.
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.
2014-01-01
The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
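The budget logic described above can be caricatured in a few lines: each trial draws random per-maneuver contributions and the budget is a high percentile over trials. All magnitudes here (number of maneuvers, nominal burn size, SRP-like scatter) are invented for illustration and are not JWST values:

```python
import numpy as np

rng = np.random.default_rng(42)

def mission_delta_v(n_maneuvers=21, nominal=0.1, srp_sigma=0.03):
    """Total delta-V (m/s) for one simulated mission: a nominal burn
    plus a random, always-positive penalty standing in for unknown
    SRP / observation-schedule effects (hypothetical model)."""
    per_burn = nominal + np.abs(rng.normal(0.0, srp_sigma, n_maneuvers))
    return per_burn.sum()

# Monte Carlo over missions; size the budget at the 99th percentile.
totals = np.array([mission_delta_v() for _ in range(2000)])
budget = np.percentile(totals, 99)
```

The real simulation replaces the random penalty with SPAD-based SRP modeling, optimized maneuvers, and pointing constraints, but the percentile-over-trials budgeting step is the same.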
Generic effective source for scalar self-force calculations
NASA Astrophysics Data System (ADS)
Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter
2012-05-01
A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.
2015-02-10
ISS042E236075 (02/10/2015) --- Astronauts in space must exercise regularly to keep muscles from deteriorating. The busy schedule aboard the International Space Station has these regular periods worked in as NASA astronaut Terry Virts shows in this Tweet he sent out on Feb. 10, 2015 with the comment: "Periodic Fitness Evaluation- riding the bike with a heart rate monitor, EKG, and blood pressure machine hooked up".
The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation II. Contraction Methods
NASA Astrophysics Data System (ADS)
Ginibre, J.; Velo, G.
We continue the study of the initial value problem for the complex Ginzburg-Landau equation
Generalized Bregman distances and convergence rates for non-convex regularization methods
NASA Astrophysics Data System (ADS)
Grasmair, Markus
2010-11-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds if the regularization term has a slightly faster growth at zero than |t|^p.
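In the sparse setting just mentioned, the simplest instance is the identity forward operator with p = 1, where the Tikhonov minimizer is given componentwise by soft thresholding. A toy sketch (all numbers illustrative):

```python
import numpy as np

def soft_threshold(y, alpha):
    """Componentwise minimizer of 0.5 ||x - y||^2 + alpha ||x||_1
    (the l1-Tikhonov problem with identity forward operator)."""
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

# A sparse truth perturbed by data noise of size delta: the sparse
# support survives, illustrating the stability regularization buys.
x_true = np.array([3.0, 0.0, -2.0, 0.0])
delta = 0.05
y = x_true + np.array([delta, -delta, delta, delta])
x_rec = soft_threshold(y, alpha=0.5)
```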
Space Radar Image of Kennedy Space Center, Florida
1999-06-25
This image was produced during radar observations taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar as it flew over the Gulf Stream, Florida, and past the Atlantic Ocean on October 7, 1994. The data were produced using the X-band radar frequency. Knowing ahead of time that this region would be included in a regularly scheduled radar pass, the Kennedy Space Center team, who assembled and integrated the SIR-C/X-SAR equipment with the Spacelab pallet system, designed a set of radar reflectors from common construction materials and formed the letters "KSC" on the ground adjacent to the main headquarters building at the entrance to the Cape Canaveral launch facility. The points of light formed by the bright returns from these reflectors are visible in the image. Other more diffuse bright spots are reflections from building faces, roofs and other large structures at the Kennedy Space Center complex. This frame covers an area of approximately 6 kilometers by 8 kilometers (4 miles by 5 miles), which was just a small portion of the data taken on this particular pass. http://photojournal.jpl.nasa.gov/catalog/PIA01747
NASA Astrophysics Data System (ADS)
Phillips, Nicholas G.; Hu, B. L.
2000-10-01
We present calculations of the variance of fluctuations and of the mean of the energy momentum tensor of a massless scalar field for the Minkowski and Casimir vacua as a function of an intrinsic scale defined by a smeared field or by point separation. We point out that, contrary to prior claims, the ratio of variance to mean-squared being of the order unity is not necessarily a good criterion for measuring the invalidity of semiclassical gravity. For the Casimir topology we obtain expressions for the variance to mean-squared ratio as a function of the intrinsic scale (defined by a smeared field) compared to the extrinsic scale (defined by the separation of the plates, or the periodicity of space). Our results make it possible to identify the spatial extent where negative energy density prevails which could be useful for studying quantum field effects in wormholes and baby universes, and for examining the design feasibility of real-life "time machines." For the Minkowski vacuum we find that the ratio of the variance to the mean-squared, calculated from the coincidence limit, is identical to the value of the Casimir case at the same limit for spatial point separation while identical to the value of a hot flat space result with a temporal point separation. We analyze the origin of divergences in the fluctuations of the energy density and discuss choices in formulating a procedure for their removal, thus raising new questions about the uniqueness and even the very meaning of regularization of the energy momentum tensor for quantum fields in curved or even flat spacetimes when spacetime is viewed as having an extended structure.
Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data
Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho
2017-01-01
With rapid urbanization, highly accurate and semantically rich 3D virtualization of building assets becomes more critical for supporting various applications, including urban planning, emergency response, and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from noisy building-boundary information in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated by Min-Max optimization and an entropy-based weighting method.
The performance of the proposed method is tested over the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark datasets. The results show that the proposed method can robustly produce accurate regularized 3D building rooftop models. PMID:28335486
Chu, C Y; Jiang, X; Jinnai, H; Pei, R Y; Lin, W F; Tsai, J C; Chen, H L
2015-03-14
The ordered bicontinuous double diamond (OBDD) structure has long been believed to be unstable relative to the ordered bicontinuous double gyroid (OBDG) structure in diblock copolymers. Using electron tomography, we present the first real-space observation of a thermodynamically stable OBDD structure in a diblock copolymer composed of a stereoregular block, syndiotactic polypropylene-block-polystyrene (sPP-b-PS), in which the sPP tetrapods are interconnected via a bicontinuous network with Pn3̄m symmetry. The OBDD structure underwent a thermally reversible order-order transition (OOT) to OBDG upon heating, and the transition was accompanied by a slight reduction of domain spacing, as demonstrated both experimentally and theoretically. The thermodynamic stability of the OBDD structure was attributed to the ability of the configurationally regular sPP block to form helical segments even above its melting point, as the reduction of internal energy associated with helix formation may effectively compensate for the greater packing frustration in OBDD relative to that in the tripods of OBDG.
An inverse problem for Gibbs fields with hard core potential
NASA Astrophysics Data System (ADS)
Koralov, Leonid
2007-05-01
It is well known that for a regular stable potential of pair interaction and a small value of activity one can define the corresponding Gibbs field (a measure on the space of configurations of points in R^d). In this paper we consider a converse problem. Namely, we show that for a sufficiently small constant ρ̄₁ and a sufficiently small function ρ̄₂(x), x ∈ R^d, that is equal to zero in a neighborhood of the origin, there exist a hard core pair potential and a value of activity such that ρ̄₁ is the density and ρ̄₂ is the pair correlation function of the corresponding Gibbs field.
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.
1984-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two-dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one-dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced; however, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295.
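The 2-D cyclic correlation at the heart of the algorithm can be cross-checked with an FFT. Note this sketch uses the convolution theorem rather than the paper's fast polynomial transform, and the echo data are synthetic:

```python
import numpy as np

def cyclic_correlate_2d(data, kernel):
    """2-D circular cross-correlation via the convolution theorem:
    c = IFFT( FFT(data) * conj(FFT(kernel)) )."""
    D = np.fft.fft2(data)
    K = np.fft.fft2(kernel)
    return np.real(np.fft.ifft2(D * np.conj(K)))

# Correlating a cyclically shifted copy of the kernel (a stand-in for
# the point-target response) peaks exactly at the shift.
rng = np.random.default_rng(1)
kernel = rng.normal(size=(8, 8))
data = np.roll(np.roll(kernel, 2, axis=0), 3, axis=1)
corr = cyclic_correlate_2d(data, kernel)
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

In SAR terms, the peak marks where the echo best matches the point-target impulse response; the fast polynomial transform computes the same cyclic correlation with a different fast algorithm.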
Belk, John W; Marshall, Hayden A; McCarty, Eric C; Kraeutler, Matthew J
2017-10-01
There has been speculation that rest during the regular season for players in the National Basketball Association (NBA) improves player performance in the postseason. To determine whether there is a correlation between the amount of regular-season rest among NBA players and playoff performance and injury risk in the same season. Cohort study; Level of evidence, 3. The Basketball Reference and Pro Sports Transactions archives were searched from the 2005 to 2015 seasons. Data were collected on players who missed fewer than 5 regular-season games because of rest (group A) and 5 to 9 regular-season games because of rest (group B) during each season. Inclusion criteria consisted of players who played a minimum of 20 minutes per game and made the playoffs that season. Players were excluded if they missed ≥10 games because of rest or suspension or missed ≥20 games in a season for any reason. Matched pairs were formed between the groups based on the following criteria: position, mean age at the start of the season within 2 years, regular-season minutes per game within 5 minutes, same playoff seeding, and player efficiency rating (PER) within 2 points. The following data from the playoffs were collected and compared between matched pairs at each position (point guard, shooting guard, forward/center): points per game, assists per game, PER, true shooting percentage, blocks, steals, and number of playoff games missed because of injury. A total of 811 players met the inclusion and exclusion criteria (group A: n = 744 players; group B: n = 67 players). Among all eligible players, 27 matched pairs were formed. Within these matched pairs, players in group B missed significantly more regular-season games because of rest than players in group A (6.0 games vs 1.3 games, respectively; P < .0001). 
There were no significant differences between the groups at any position in terms of points per game, assists per game, PER, true shooting percentage, blocks, steals, or number of playoff games missed because of injury. Rest during the NBA regular season does not improve playoff performance or affect the injury risk during the playoffs in the same season.
An approach for the regularization of a power flow solution around the maximum loading point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kataoka, Y.
1992-08-01
In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for obtaining wide-range regularity by modifying the conventional power flow formulation, thereby eliminating the singular point or shifting it to a region with voltage lower than that of the maximum loading point. Continuous tracing of V-P curves through the maximum loading point then becomes possible. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.
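The singularity at the maximum loading point is easy to see in a two-bus toy system. A sketch (per-unit values are illustrative; this is not the paper's 598-node formulation):

```python
import numpy as np

# Single load bus fed from a slack bus of voltage E through reactance X.
# Eliminating the voltage angle from the power flow equations yields a
# quadratic in V^2:
#     V^4 + (2*Q*X - E^2) * V^2 + X^2 * (P^2 + Q^2) = 0,
# and a vanishing discriminant marks the maximum loading point, i.e. the
# singularity the paper's reformulation eliminates or shifts.
def load_voltage(P, Q=0.0, E=1.0, X=0.1):
    """High-voltage root of the two-bus power flow; None past the nose."""
    b = 2.0 * Q * X - E ** 2
    disc = b ** 2 - 4.0 * X ** 2 * (P ** 2 + Q ** 2)
    if disc < 0.0:
        return None  # beyond the maximum loading point
    return float(np.sqrt((-b + np.sqrt(disc)) / 2.0))

p_max = 1.0 / (2 * 0.1)              # E^2 / (2X) for Q = 0
high_v = load_voltage(0.0)           # no load: V = E = 1.0
near_nose = load_voltage(0.9 * p_max)
beyond = load_voltage(1.1 * p_max)   # no real solution
```

Sweeping `P` from 0 toward `p_max` traces the upper branch of the V-P (nose) curve; the conventional Newton iteration loses regularity exactly where `disc` reaches zero.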
Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields
NASA Astrophysics Data System (ADS)
Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.
1992-12-01
During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintains its identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez et al. (1990) which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. 
The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards regularity. For clouds less than 1 km in diameter, the average nearest-neighbor distance is equal to 3-7 cloud diameters. For larger clouds, the ratio of cloud nearest-neighbor distance to cloud diameter increases sharply with increasing cloud diameter. This demonstrates that large clouds inhibit the growth of other large clouds in their vicinity. Nevertheless, this leads to random distributions of large clouds, not regularity.
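A compact way to quantify the clustered/random/regular distinction is the Clark-Evans nearest-neighbor ratio, a summary statistic related to (but simpler than) the nearest-neighbor CDF analyses used in the study. A sketch without edge correction, on synthetic point patterns:

```python
import numpy as np

def clark_evans(points, area):
    """Observed mean nearest-neighbor distance divided by the Poisson
    expectation 0.5 / sqrt(density).  Ratio < 1: clustering; ~1:
    randomness; > 1: regularity.  (No edge correction; fine for a
    sketch, not for publication-grade analysis.)"""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / area)
    return observed / expected

# A perfectly regular 10x10 grid on a 10x10 domain scores well above 1.
grid = [(x, y) for x in range(10) for y in range(10)]
r_grid = clark_evans(grid, area=100.0)
```

Treating cloud centers as the point pattern, ratios computed this way separate the clustered small-cloud fields from the near-random large-cloud fields described above.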
q-Space Upsampling Using x-q Space Regularization.
Chen, Geng; Dong, Bin; Zhang, Yong; Shen, Dinggang; Yap, Pew-Thian
2017-09-01
Acquisition time in diffusion MRI increases with the number of diffusion-weighted images that need to be acquired. Particularly in clinical settings, scan time is limited and only a sparse coverage of the vast q-space is possible. In this paper, we show how non-local self-similar information in the x-q space of diffusion MRI data can be harnessed for q-space upsampling. More specifically, we establish the relationships between signal measurements in x-q space using a patch matching mechanism that caters to unstructured data. We then encode these relationships in a graph and use it to regularize an inverse problem associated with recovering a high q-space resolution dataset from its low-resolution counterpart. Experimental results indicate that the high-resolution datasets reconstructed using the proposed method exhibit greater quality, both quantitatively and qualitatively, than those obtained using conventional methods, such as interpolation using spherical radial basis functions (SRBFs).
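The recovery step can be sketched generically: encode the patch-matching relationships in a graph Laplacian L and solve a graph-regularized least-squares problem. The operator, graph, and data below are toy stand-ins, not diffusion MRI:

```python
import numpy as np

def graph_regularized_solve(A, y, L, lam):
    """Solve min_x ||A x - y||^2 + lam * x^T L x via the normal
    equations (A^T A + lam L) x = A^T y."""
    return np.linalg.solve(A.T @ A + lam * L, A.T @ y)

# Chain graph over 5 samples: neighboring samples assumed similar.
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0            # proper graph Laplacian (rows sum to 0)

A = np.eye(n)[[0, 2, 4]]             # only samples 0, 2, 4 observed
y = np.array([1.0, 1.0, 1.0])        # observed values
x = graph_regularized_solve(A, y, L, lam=0.1)
```

With consistent observations, the Laplacian term fills the unobserved samples from their graph neighbors, which is the same mechanism the paper uses with a patch-similarity graph over x-q space.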
Lin, Tungyou; Guyader, Carole Le; Dinov, Ivo; Thompson, Paul; Toga, Arthur; Vese, Luminita
2013-01-01
This paper proposes a numerical algorithm for image registration using energy minimization and nonlinear elasticity regularization. Application to the registration of gene expression data to a neuroanatomical mouse atlas in two dimensions is shown. We apply a nonlinear elasticity regularization to allow larger and smoother deformations, and further enforce optimality constraints on the landmark points distance for better feature matching. To overcome the difficulty of minimizing the nonlinear elasticity functional due to the nonlinearity in the derivatives of the displacement vector field, we introduce a matrix variable to approximate the Jacobian matrix and solve the simplified Euler-Lagrange equations. By comparison with image registration using linear regularization, experimental results show that the proposed nonlinear elasticity model needs fewer numerical corrections, such as regridding steps, for binary image registration, renders better ground truth, and produces larger mutual information; most importantly, the landmark points distance and L2 dissimilarity measure between the gene expression data and the corresponding mouse atlas are smaller than with the biharmonic regularization model. PMID:24273381
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has been a lot of work recently on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that convergence is insensitive to the values of the regularization and reconstruction parameters.
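The transformation described above (auxiliary variables plus a logarithmic barrier) can be sketched on a small 1-D denoising problem. This is a minimal illustration, not the paper's PET algorithm: it uses a quadratic data term instead of the Poisson likelihood, and plain gradient descent with backtracking instead of preconditioned conjugate gradients:

```python
import numpy as np

def D(x):                                  # forward difference operator
    return np.diff(x)

def Dt(v):                                 # its adjoint
    out = np.zeros(len(v) + 1)
    out[:-1] -= v
    out[1:] += v
    return out

def tv_barrier_denoise(y, lam=2.0, n_outer=5, n_inner=200):
    """min over (x, t) of 0.5*||x-y||^2 + lam*sum(t), s.t. -t_i <= (Dx)_i <= t_i,
    via a logarithmic-barrier interior-point scheme."""
    x, t = y.copy(), np.abs(D(y)) + 1.0    # strictly feasible start
    mu = 1.0

    def obj(x, t):
        a, b = t - D(x), t + D(x)
        if np.any(a <= 0) or np.any(b <= 0):
            return np.inf                  # infeasible point: reject in line search
        return (0.5 * np.sum((x - y) ** 2) + lam * np.sum(t)
                - (np.sum(np.log(a)) + np.sum(np.log(b))) / mu)

    for _ in range(n_outer):
        for _ in range(n_inner):
            a, b = t - D(x), t + D(x)
            gx = (x - y) + Dt(1.0 / a - 1.0 / b) / mu
            gt = lam - (1.0 / a + 1.0 / b) / mu
            step, f0 = 1.0, obj(x, t)
            while step > 1e-12 and obj(x - step * gx, t - step * gt) > f0:
                step *= 0.5                # backtrack: stay feasible and descend
            if step <= 1e-12:
                break                      # stagnated at this barrier level
            x, t = x - step * gx, t - step * gt
        mu *= 10.0                         # tighten the barrier
    return x, t

# small demo: a noisy step edge
y = np.sin(np.linspace(0, 3, 60))
y[30:] += 1.0
y += 0.05 * np.cos(np.linspace(0, 40, 60))
x_hat, t_hat = tv_barrier_denoise(y)
```

Each outer pass increases the barrier parameter, tracing the interior-point path toward the exact, non-smoothed TV solution.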
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking
Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun
2017-01-01
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sampling strategy to alleviate the redundancies in training samples caused by the cyclic shifts and to eliminate the inconsistencies between training samples and detection samples, and introduces space anisotropic regularization to constrain the correlation filter and alleviate drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed to obtain robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs). PMID:29231876
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev
2017-02-01
Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires the acquisition of many images per volume, and is therefore time consuming and may not be suitable for live cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we apply the phase retrieval to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a 3D cellular environment, based on image processing algorithms that significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
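The core two-plane Gerchberg-Saxton loop referred to above is compact enough to sketch. The example below is a generic textbook version on synthetic data (the array size, random test field and iteration count are arbitrary choices, not the authors' setup):

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, n_iter=200):
    """Recover a phase consistent with known amplitudes in two planes
    related by a 2-D Fourier transform (object plane and far field)."""
    g = amp_in.astype(complex)                   # start with zero phase
    errors = []
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        errors.append(np.linalg.norm(np.abs(G) - amp_out))
        G = amp_out * np.exp(1j * np.angle(G))   # impose far-field amplitude
        g = np.fft.ifft2(G)
        g = amp_in * np.exp(1j * np.angle(g))    # impose object amplitude
    return g, errors

# toy data: a known complex field whose two amplitudes we pretend to have measured
rng = np.random.default_rng(1)
true_field = (rng.uniform(0.1, 1.0, (32, 32))
              * np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32))))
amp_in = np.abs(true_field)
amp_out = np.abs(np.fft.fft2(true_field))

g, errors = gerchberg_saxton(amp_in, amp_out)
```

The far-field amplitude error is non-increasing from one iteration to the next, a classical property of this alternating-projection scheme.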
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
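The prior discussed here can be made concrete with a tiny example. The sketch below (a toy 4-node graph of our own choosing, not the paper's optimally learned graph) denoises a patch by solving the regularized normal equations:

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial graph Laplacian L = D - W of a weighted adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def denoise(y, W, gamma=1.0):
    """Solve min_x ||x - y||^2 + gamma * x^T L x, i.e. (I + gamma*L) x = y."""
    L = graph_laplacian(W)
    return np.linalg.solve(np.eye(len(y)) + gamma * L, y)

# 4-pixel path graph: edges encode which pixels are expected to be similar
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
y = np.array([1.0, 0.2, 0.9, 0.1])    # noisy patch values
x = denoise(y, W, gamma=2.0)
```

Since the all-ones vector lies in the null space of L, the solve preserves the patch mean while reducing the regularizer value x^T L x relative to y^T L y.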
Optimal boundary regularity for a singular Monge-Ampère equation
NASA Astrophysics Data System (ADS)
Jian, Huaiyu; Li, You
2018-06-01
In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from several geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the notion of (a, η) type to describe the convexity. As a result, we show that the more convex the domain, the better the regularity of the solution. In particular, the regularity is best near angular points.
2003-02-04
KENNEDY SPACE CENTER, FLA. -- United Space Alliance (USA) technicians install thermal protection system tiles on Space Shuttle Discovery. Discovery is undergoing its Orbiter Major Modification Period, a regularly scheduled structural inspection and modification downtime, which began in September 2002. .
ERIC Educational Resources Information Center
Lynch, Christopher O.
2010-01-01
This article presents a classroom activity that introduces students to the concept of themed space. Students learn to think critically about the spaces they encounter on a regular basis by analyzing existing spaces and by working in groups to create their own themed space. This exercise gives students the chance to see the relevance of critical…
The Volume of the Regular Octahedron
ERIC Educational Resources Information Center
Trigg, Charles W.
1974-01-01
Five methods are given for computing the volume of a regular octahedron. It is suggested that students first construct an octahedron, as this will aid in space visualization. Six further extensions are left for the reader to try. (LS)
Pronunciation difficulty, temporal regularity, and the speech-to-song illusion.
Margulis, Elizabeth H; Simchy-Gross, Rhimmon; Black, Justin L
2015-01-01
The speech-to-song illusion (Deutsch et al., 2011) tracks the perceptual transformation from speech to song across repetitions of a brief spoken utterance. Because it involves no change in the stimulus itself, but a dramatic change in its perceived affiliation to speech or to music, it presents a unique opportunity to comparatively investigate the processing of language and music. In this study, native English-speaking participants were presented with brief spoken utterances that were subsequently repeated ten times. The utterances were drawn either from languages that are relatively difficult for a native English speaker to pronounce, or languages that are relatively easy for a native English speaker to pronounce. Moreover, the repetition could occur at regular or irregular temporal intervals. Participants rated the utterances before and after the repetitions on a 5-point Likert-like scale ranging from "sounds exactly like speech" to "sounds exactly like singing." The difference in ratings before and after was taken as a measure of the strength of the speech-to-song illusion in each case. The speech-to-song illusion occurred regardless of whether the repetitions were spaced at regular temporal intervals or not; however, it occurred more readily if the utterance was spoken in a language difficult for a native English speaker to pronounce. Speech circuitry seemed more liable to capture native and easy-to-pronounce languages, and more reluctant to relinquish them to perceived song across repetitions.
Expanding space-time and variable vacuum energy
NASA Astrophysics Data System (ADS)
Parmeggiani, Claudio
2017-08-01
The paper describes a cosmological model which contemplates the presence of a vacuum energy varying, very slightly (now), with time. The constant part of the vacuum energy generated, some 6 Gyr ago, a deceleration/acceleration transition of the metric expansion; so now, in an aged Universe, the expansion is inexorably accelerating. The varying part of the vacuum energy is instead assumed to be responsible for an acceleration/deceleration transition which occurred about 14 Gyr ago; this transition has a dynamic origin: it is a consequence of the general relativistic Einstein-Friedmann equations. Moreover, the vacuum energy (constant and variable) is here related to the zero-point energy of some quantum fields (scalar, vector, or spinor); these fields are necessarily described in a general relativistic way: their structure depends on the space-time metric, typically non-flat. More precisely, the commutators of the (quantum field) creation/annihilation operators are here assumed to depend on the local value of the space-time metric tensor (and eventually of its curvature); furthermore, these commutators rapidly decrease for high momentum values and they reduce to the standard ones for a flat metric. In this way, the theory is "gravitationally" regularized; in particular, the zero-point (vacuum) energy density has a well-defined value and, for a non-static metric, depends on the (cosmic) time. Note that this varying vacuum energy can be negative (Fermi fields) and that a change of its sign typically leads to a minimum for the metric expansion factor (a "bounce").
A characterization of linearly repetitive cut and project sets
NASA Astrophysics Data System (ADS)
Haynes, Alan; Koivusalo, Henna; Walton, James
2018-02-01
For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
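A one-dimensional cut and project set of the kind characterized here can be generated in a few lines. The sketch below builds the Fibonacci chain from Z² with the canonical cubical window (the lattice range and trimming are arbitrary choices of ours); by the three-distance phenomenon the gaps take at most three distinct values, and for this window one expects exactly two, in the golden ratio:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
s = np.sqrt(1 + phi ** 2)
e_par = np.array([phi, 1.0]) / s      # physical direction
e_perp = np.array([-1.0, phi]) / s    # internal direction

# cubical window: projection of the unit square onto the internal line
half_width = (abs(e_perp[0]) + abs(e_perp[1])) / 2

pts = []
for m in range(-60, 61):
    for n in range(-60, 61):
        v = np.array([m, n], float)
        if abs(v @ e_perp) <= half_width:      # lattice point inside the strip
            pts.append(v @ e_par)
pts = np.sort(np.array(pts))
pts = pts[np.abs(pts) < 30]                    # stay away from truncation edges
gaps = np.unique(np.round(np.diff(pts), 6))
```

Linear repetitivity of the resulting point set is exactly the property the paper characterizes in higher dimensions.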
Applications of the gambling score in evaluating earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2010-05-01
This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
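The betting rule described above can be written down directly. In this sketch (our own minimal reading of the scheme, reduced to a single binary prediction) the house pays fair odds set by the reference model, so the reference model itself expects to gain nothing:

```python
def gambling_gain(bet, p_ref, success):
    """Reputation points gained by a forecaster who bets `bet` points on an
    event to which the reference model assigns probability `p_ref`. The odds
    are fair with respect to the reference model."""
    if success:
        return bet * (1.0 - p_ref) / p_ref   # forecasting a rare event pays more
    return -bet                              # a failed bet is simply lost

# fairness check: the reference model's own expected gain is zero
p_ref, bet = 0.2, 1.0
expected = (p_ref * gambling_gain(bet, p_ref, True)
            + (1 - p_ref) * gambling_gain(bet, p_ref, False))
```

A forecaster only accumulates reputation by systematically beating the reference model, which is what makes the score risk-compensating.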
NASA Astrophysics Data System (ADS)
Sadhukhan, B.; Nayak, A.; Mookerjee, A.
2017-12-01
In this communication we present together four distinct techniques for the study of the electronic structure of solids: tight-binding linear muffin-tin orbitals, real-space and augmented-space recursions, and the modified exchange-correlation. Using these we investigate the effect of random vacancies on the electronic properties of the hexagonal carbon allotrope, graphene, and the non-hexagonal allotrope, planar T graphene. We insert random vacancies at different concentrations to simulate disorder in pristine graphene and planar T graphene sheets. The resulting disorder, both on-site (diagonal disorder) and in the hopping integrals (off-diagonal disorder), introduces sharp peaks in the vicinity of the Dirac point built up from localized states, for both hexagonal and non-hexagonal structures. These peaks become resonances with increasing vacancy concentration. We find that in the presence of vacancies, graphene-like linear dispersion appears in planar T graphene and the cross points form a loop in the first Brillouin zone, similar to buckled T graphene, that originates from the π and π* bands without regular hexagonal symmetry. We also calculate the single-particle relaxation time τ(q) of the q-labeled quantum electronic states, which originates from scattering due to the presence of vacancies, causing quantum level broadening.
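The vacancy-induced peaks near the Dirac point have a simple toy analogue: on a bipartite cluster, removing one site creates a sublattice imbalance and with it an exact zero-energy state. The following sketch (a six-site ring standing in for one hexagon; nothing here reproduces the paper's augmented-space recursion) makes this explicit:

```python
import numpy as np

def ring_hamiltonian(n):
    """Nearest-neighbour tight-binding Hamiltonian (hopping t = 1) of an n-site ring."""
    H = np.zeros((n, n))
    for i in range(n):
        H[i, (i + 1) % n] = H[(i + 1) % n, i] = 1.0
    return H

H6 = ring_hamiltonian(6)                                  # one hexagon: a minimal bipartite cluster
H_vac = np.delete(np.delete(H6, 0, axis=0), 0, axis=1)    # remove one site (a vacancy)

E_full = np.linalg.eigvalsh(H6)      # intact ring: no zero eigenvalue
E_vac = np.linalg.eigvalsh(H_vac)    # sublattice imbalance forces an exact zero mode
```

The zero mode of the five-site cluster mirrors the localized midgap states that broaden into resonances at finite vacancy concentration.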
75 FR 7480 - Farm Credit Administration Board; Sunshine Act; Regular Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... weather in the Washington DC metropolitan area. Date and Time: The regular meeting of the Board will now... INFORMATION: This meeting of the Board will be open to the public (limited space available). In order to...
Scattering theory for graphs isomorphic to a regular tree at infinity
NASA Astrophysics Data System (ADS)
Colin de Verdière, Yves; Truc, Françoise
2013-06-01
We describe the spectral theory of the adjacency operator of a graph which is isomorphic to a regular tree at infinity. Using some combinatorics, we reduce the problem to a scattering problem for a finite rank perturbation of the adjacency operator on a regular tree. We develop this scattering theory using the classical recipes for Schrödinger operators in Euclidean spaces.
A novel blinding digital watermark algorithm based on lab color space
NASA Astrophysics Data System (ADS)
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
A blind digital image watermarking algorithm must extract the watermark information without any extra information beyond the watermarked image itself. However, most current blind watermarking algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermarking algorithm based on the Lab color space which does not have the disadvantage mentioned above. The algorithm first marks the watermark region size and position by embedding some regular blocks, called anchor points, in the image spatial domain, and then embeds the watermark into the image. In this way, the watermark information can easily be extracted even after the image has been cropped or rescaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. The algorithm has already been used in a copyright protection project and works very well.
Reintjes, Moritz; Temple, Blake
2015-05-08
We give a constructive proof that coordinate transformations exist which raise the regularity of the gravitational metric tensor from C 0,1 to C 1,1 in a neighbourhood of points of shock wave collision in general relativity. The proof applies to collisions between shock waves coming from different characteristic families, in spherically symmetric spacetimes. Our result here implies that spacetime is locally inertial and corrects an error in our earlier Proc. R. Soc. A publication, which led us to the false conclusion that such coordinate transformations, which smooth the metric to C 1,1 , cannot exist. Thus, our result implies that regularity singularities (a type of mild singularity introduced in our Proc. R. Soc. A paper) do not exist at points of interacting shock waves from different families in spherically symmetric spacetimes. Our result generalizes Israel's celebrated 1966 paper to the case of such shock wave interactions but our proof strategy differs fundamentally from that used by Israel and is an extension of the strategy outlined in our original Proc. R. Soc. A publication. Whether regularity singularities exist in more complicated shock wave solutions of the Einstein-Euler equations remains open.
New Ways of Treating Data for Diatomic Molecule 'shelf' and Double-Minimum States
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Tao, Jason; Khanna, Shirin; Pashov, Asen; Tellinghuisen, Joel
2017-06-01
Electronic states whose potential energy functions have 'shelf' or double-minimum shapes have always presented special challenges because, as functions of the vibrational quantum number, the vibrational energies/spacings and inertial rotational constants either have an abrupt change of character with discontinuous slope or, past a given point, become completely chaotic. The present work shows that a 'traditional' methodology developed for deep 'regular' single-well potentials can also provide accurate 'parameter-fit' descriptions of the v-dependence of the vibrational energies and rotational constants of shelf-state potentials, allowing a conventional RKR calculation of their potential energy functions. It is also shown that merging Pashov's uniquely flexible 'spline point-wise' potential function representation with Le Roy's 'Morse/Long-Range' (MLR) analytic functional form, which automatically incorporates the correct theoretically known long-range form, yields an analytic function that combines most of the advantages of both approaches. An illustrative application of this method to data for a double-minimum state of Na_2 will be described.
Particle Creation at a Point Source by Means of Interior-Boundary Conditions
NASA Astrophysics Data System (ADS)
Lampart, Jonas; Schmidt, Julian; Teufel, Stefan; Tumulka, Roderich
2018-06-01
We consider a way of defining quantum Hamiltonians involving particle creation and annihilation based on an interior-boundary condition (IBC) on the wave function, where the wave function is the particle-position representation of a vector in Fock space, and the IBC relates (essentially) the values of the wave function at any two configurations that differ only by the creation of a particle. Here we prove, for a model of particle creation at one or more point sources using the Laplace operator as the free Hamiltonian, that a Hamiltonian can indeed be rigorously defined in this way without the need for any ultraviolet regularization, and that it is self-adjoint. We prove further that introducing an ultraviolet cut-off (thus smearing out particles over a positive radius) and applying a certain known renormalization procedure (taking the limit of removing the cut-off while subtracting a constant that tends to infinity) yields, up to addition of a finite constant, the Hamiltonian defined by the IBC.
Belk, John W.; Marshall, Hayden A.; McCarty, Eric C.; Kraeutler, Matthew J.
2017-01-01
Background: There has been speculation that rest during the regular season for players in the National Basketball Association (NBA) improves player performance in the postseason. Purpose: To determine whether there is a correlation between the amount of regular-season rest among NBA players and playoff performance and injury risk in the same season. Study Design: Cohort study; Level of evidence, 3. Methods: The Basketball Reference and Pro Sports Transactions archives were searched from the 2005 to 2015 seasons. Data were collected on players who missed fewer than 5 regular-season games because of rest (group A) and 5 to 9 regular-season games because of rest (group B) during each season. Inclusion criteria consisted of players who played a minimum of 20 minutes per game and made the playoffs that season. Players were excluded if they missed ≥10 games because of rest or suspension or missed ≥20 games in a season for any reason. Matched pairs were formed between the groups based on the following criteria: position, mean age at the start of the season within 2 years, regular-season minutes per game within 5 minutes, same playoff seeding, and player efficiency rating (PER) within 2 points. The following data from the playoffs were collected and compared between matched pairs at each position (point guard, shooting guard, forward/center): points per game, assists per game, PER, true shooting percentage, blocks, steals, and number of playoff games missed because of injury. Results: A total of 811 players met the inclusion and exclusion criteria (group A: n = 744 players; group B: n = 67 players). Among all eligible players, 27 matched pairs were formed. Within these matched pairs, players in group B missed significantly more regular-season games because of rest than players in group A (6.0 games vs 1.3 games, respectively; P < .0001). 
There were no significant differences between the groups at any position in terms of points per game, assists per game, PER, true shooting percentage, blocks, steals, or number of playoff games missed because of injury. Conclusion: Rest during the NBA regular season does not improve playoff performance or affect the injury risk during the playoffs in the same season. PMID:29051897
Robust blood-glucose control using Mathematica.
Kovács, Levente; Paláncz, Béla; Benyó, Balázs; Török, László; Benyó, Zoltán
2006-01-01
A robust control design in the frequency domain using Mathematica is presented for the regularization of the glucose level in type I diabetes patients under intensive care. The method originally proposed under Mathematica by Helton and Merino, now with an improved disturbance rejection constraint inequality, is employed, using a three-state minimal patient model. The robustness of the resulting high-order linear controller is demonstrated by nonlinear closed-loop simulation in state space for standard meal disturbances, and is compared with an H-infinity design implemented with the mu-toolbox of Matlab. The controller, designed with the model parameters representing the plant dynamics most favorable for control purposes, operates properly even for parameter values of the worst-case scenario.
NASA Astrophysics Data System (ADS)
Zolotaryuk, A. V.
2017-06-01
Several families of one-point interactions are derived from the system consisting of two and three δ-potentials which are regularized by piecewise constant functions. In physical terms such an approximating system represents two or three extremely thin layers separated by some distance. The two-scale squeezing of this heterostructure to one point, as both the width of the δ-approximating functions and the distance between these functions simultaneously tend to zero, is studied using a power parameterization through a squeezing parameter ε → 0: the intensity of each δ-potential is c_j = a_j ε^(1-μ), a_j ∈ ℝ, j = 1, 2, 3, the width of each layer is l = ε, and the distance between the layers is r = cε^τ, c > 0. It is shown that at some values of the intensities a_1, a_2 and a_3, the transmission across the limit point potentials is non-zero, whereas outside these (resonance) values the one-point interactions are opaque, splitting the system at the point of singularity into two independent subsystems. Within the interval 1 < μ < 2, the resonance sets consist of two curves on the (a_1, a_2)-plane and three surfaces in the (a_1, a_2, a_3)-space. As the parameter μ approaches the value μ = 2, three types of splitting of the one-point interactions into countable families are observed.
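At finite ε the regularized system is just a sequence of δ-barriers, and its transmission can be computed with standard transfer matrices. The sketch below (units ħ = 2m = 1; the particular strengths and spacing are arbitrary test values, not the paper's resonance sets) chains δ-potential and free-propagation matrices and solves the plane-wave matching conditions:

```python
import numpy as np

def transmission(alphas, spacings, k):
    """Reflection and transmission probabilities for a particle of wavenumber k
    crossing delta potentials of strengths `alphas` separated by `spacings`."""
    def delta(a):                       # jump condition: psi'(+) - psi'(-) = a * psi
        return np.array([[1.0, 0.0], [a, 1.0]])
    def free(r):                        # free propagation over distance r
        return np.array([[np.cos(k * r), np.sin(k * r) / k],
                         [-k * np.sin(k * r), np.cos(k * r)]])
    M = delta(alphas[0])
    for a, r in zip(alphas[1:], spacings):
        M = delta(a) @ free(r) @ M
    L = sum(spacings)
    # plane-wave matching: e^{ikx} + r e^{-ikx} on the left, t e^{ikx} on the right
    A = np.array([[M[0, 0] - 1j * k * M[0, 1], -np.exp(1j * k * L)],
                  [M[1, 0] - 1j * k * M[1, 1], -1j * k * np.exp(1j * k * L)]])
    b = -np.array([M[0, 0] + 1j * k * M[0, 1],
                   M[1, 0] + 1j * k * M[1, 1]])
    refl, t = np.linalg.solve(A, b)
    return abs(refl) ** 2, abs(t) ** 2
```

For real strengths the scattering is unitary, |r|² + |t|² = 1, and a single δ of strength α reproduces the textbook result |t|² = 1/(1 + α²/4k²).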
NASA Astrophysics Data System (ADS)
Sulyok, G.
2017-07-01
Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient independent from UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
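For orientation, the lowest-rank cases have well-known UV-divergent parts in standard Passarino-Veltman conventions (quoted here from the general one-loop literature, not derived from this paper's closed form); writing the divergence as $\Delta_\varepsilon = \tfrac{2}{4-d} - \gamma_E + \ln 4\pi$,

```latex
A_0(m) = m^2\,\Delta_\varepsilon + \text{finite},\qquad
B_0 = \Delta_\varepsilon + \text{finite},\qquad
B_1 = -\tfrac{1}{2}\,\Delta_\varepsilon + \text{finite}.
```

The closed form of the paper reproduces these as special cases of the general tensor coefficient.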
Stickiness in Hamiltonian systems: From sharply divided to hierarchical phase space
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; Motter, Adilson E.; Kantz, Holger
2006-02-01
We investigate the dynamics of chaotic trajectories in simple yet physically important Hamiltonian systems with nonhierarchical borders between regular and chaotic regions with positive measures. We show that the stickiness to the border of the regular regions in systems with such a sharply divided phase space occurs through one-parameter families of marginally unstable periodic orbits and is characterized by an exponent γ=2 for the asymptotic power-law decay of the distribution of recurrence times. Generic perturbations lead to systems with hierarchical phase space, where the stickiness is apparently enhanced due to the presence of infinitely many regular islands and Cantori. In this case, we show that the distribution of recurrence times can be composed of a sum of exponentials or a sum of power laws, depending on the relative contribution of the primary and secondary structures of the hierarchy. Numerical verification of our main results is provided for area-preserving maps, mushroom billiards, and the newly defined magnetic mushroom billiards.
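The recurrence-time statistics studied here are easy to sample numerically. This sketch (Chirikov standard map at K = 6, with an arbitrary initial condition in the chaotic sea and an arbitrary return region; area-preserving maps are among the systems the authors verify against, but this is not their exact setup) records successive return times to a small box:

```python
import numpy as np

def standard_map_recurrences(K=6.0, n_steps=200000, box=0.1):
    """Iterate the Chirikov standard map on the unit torus and record the
    return times to the region [0, box) x [0, box)."""
    x, p = 0.3, 0.6            # a point in the chaotic sea for K = 6
    times, last = [], None
    for t in range(n_steps):
        p = (p + K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
        x = (x + p) % 1.0
        if x < box and p < box:
            if last is not None:
                times.append(t - last)
            last = t
    return np.array(times)

times = standard_map_recurrences()
```

By Kac's lemma the mean return time to a region scales like the inverse of its measure; fitting the tail of the distribution of `times` is what exposes the power-law versus sum-of-exponentials behavior discussed above.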
Variable Grid Traveltime Tomography for Near-surface Seismic Imaging
NASA Astrophysics Data System (ADS)
Cai, A.; Zhang, J.
2017-12-01
We present a new traveltime tomography algorithm that images the subsurface with variable grids adapted automatically to geological structures. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach to near-surface imaging. However, regularization on a regular, evenly spaced grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be resolved reliably, while details along geological boundaries are difficult to resolve. We therefore solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates the grid cells within them for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. Using the variable grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, while using far fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots at a 10 m spacing and 1735 receivers at a 10 m spacing. The first-arrival traveltimes of the seismograms were picked for tomography; the reciprocal errors of most shots are between 2 ms and 6 ms. Conventional tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids in small structures.
The variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
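The conventional fixed-grid step referred to above, Tikhonov-regularized traveltime inversion solved with conjugate gradients, can be sketched as follows. This is a hedged toy: `G` stands in for the ray-path (tomography) matrix and `L` is a first-difference roughness operator; neither comes from the paper.

```python
import numpy as np

def tikhonov_cg(G, t, lam, n_iter=200, tol=1e-10):
    """Minimize ||G m - t||^2 + lam^2 ||L m||^2 with conjugate gradients
    applied to the normal equations; L is a first-difference operator."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)          # roughness (smoothing) operator
    A = G.T @ G + lam**2 * (L.T @ L)        # normal-equation matrix (SPD)
    b = G.T @ t
    m = np.zeros(n)
    r = b - A @ m
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        m += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return m

rng = np.random.default_rng(0)
G = rng.random((40, 20))                           # toy stand-in for a ray-path matrix
m_true = np.where(np.arange(20) < 10, 1.0, 2.0)    # blocky slowness model
t = G @ m_true                                     # noise-free traveltimes
m_est = tikhonov_cg(G, t, lam=0.1)
print(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true) < 0.05)
```

In the variable-grid version described in the abstract, columns of `G` belonging to one aggregated structure would be summed into a single unknown before this solve.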
Breakthrough Science Enabled by Regular Access to Orbits Beyond Earth
NASA Astrophysics Data System (ADS)
Gorjian, V.
2018-02-01
Regular launches to the Deep Space Gateway (DSG) will enable smallsats to access orbits not currently easily available to low cost missions. These orbits will allow great new science, especially when using the DSG as an optical hub for downlink.
NASA Astrophysics Data System (ADS)
Cho, Yumi
2018-05-01
We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems, which approach the regular problems considered as the gradient variable goes to infinity.
About the atomic structures of icosahedral quasicrystals
NASA Astrophysics Data System (ADS)
Quiquandon, Marianne; Gratias, Denis
2014-01-01
This paper is a survey of the crystallographic methods that have been developed over the last twenty-five years to decipher the atomic structures of the stable icosahedral quasicrystals since their discovery in 1982 by D. Shechtman. After a brief review of the notion of quasiperiodicity and the natural description of Z-modules in 3-dim as projections of regular lattices in N>3-dim spaces, we give the basic geometrical ingredients useful to describe icosahedral quasicrystals as irrational 3-dim cuts of ordinary crystals in 6-dim space. Atoms are described by atomic surfaces (ASs), which are bounded volumes in the internal (or perpendicular) 3-dim space whose intersections with the physical space are the actual atomic positions. The main part of the paper is devoted to the major properties of quasicrystalline icosahedral structures. As experimentally demonstrated, they can be described with surprisingly few high-symmetry ASs located at high-symmetry special points in 6-dim space. The atomic structures are best described by aggregations and intersections of high-symmetry compact interpenetrating atomic clusters. We show here that the experimentally relevant clusters are derived from one generic cluster made of two concentric triacontahedra scaled by τ and an external icosidodecahedron. Depending on which orbits of this cluster are actually occupied by atoms, the actual atomic clusters are of type Bergman, Mackay, Tsai and others.
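The cut-and-project construction described above can be illustrated one dimension down: a hedged sketch, assuming the standard strip projection of the square lattice Z² along an irrational (golden-ratio) direction, which yields the quasiperiodic Fibonacci chain — the 1-dim analogue of describing icosahedral phases as 3-dim cuts of 6-dim crystals, with the acceptance window playing the role of the atomic surface.

```python
import math

def fibonacci_chain(n_range=50, x_max=20.0):
    """Project lattice points of Z^2 lying inside the strip (the 'atomic
    surface' window) onto the physical direction with irrational slope tau."""
    tau = (1 + math.sqrt(5)) / 2
    norm = math.sqrt(1 + tau**2)
    points = []
    for m in range(-n_range, n_range + 1):
        for n in range(-n_range, n_range + 1):
            x_perp = (n * tau - m) / norm              # internal-space coordinate
            if -1.0 / norm <= x_perp < tau / norm:     # window = projected unit cell
                x_par = (m * tau + n) / norm           # physical-space coordinate
                if abs(x_par) < x_max:                 # trim lattice-edge effects
                    points.append(x_par)
    return sorted(points)

chain = fibonacci_chain()
gaps = {round(b - a, 6) for a, b in zip(chain, chain[1:])}
print(sorted(gaps))   # exactly two tile lengths, long/short ratio = tau
```

The chain is quasiperiodic: only two tile lengths occur, with ratio τ, even though the sequence of tiles never repeats.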
Retrievable payload carrier, next generation Long Duration Exposure Facility: Update 1992
NASA Technical Reports Server (NTRS)
Perry, A. T.; Cagle, J. A.; Newman, S. C.
1993-01-01
Access to space and cost have been two major inhibitors of low Earth orbit research. The Retrievable Payload Carrier (RPC) Program is a commercial space program which strives to overcome these two barriers to space experimentation. The RPC Program's fleet of spacecraft, ground communications station, payload processing facility, and experienced integration and operations team will provide a convenient 'one-stop shop' for investigators seeking to use the unique vantage point and environment of low Earth orbit for research. The RPC is a regularly launched and retrieved, free-flying spacecraft providing resources adequate to meet modest payload/experiment requirements, and presenting ample surface area, volume, mass, and growth capacity for investigator usage. Enhanced capabilities of ground communications, solar-array-supplied electrical power, central computing, and on-board data storage pick up on the path where NASA's Long Duration Exposure Facility (LDEF) blazed the original technology trail. Mission lengths of 6-18 months, or longer, are envisioned. The year 1992 was designated as the 'International Space Year' and coincides with the 500th anniversary of Christopher Columbus's voyage to the New World. This is a fitting year in which to launch the full scale development of our unique shop of discovery whose intent is to facilitate retrieving technological rewards from another new world: space. Presented is an update on progress made on the RPC Program's development since the November 1991 LDEF Materials Workshop.
Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts
Westphal-Fitch, Gesche; Fitch, W. Tecumseh
2013-01-01
Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non-symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because, unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and, in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095
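The distributional signature just described can be sketched numerically with synthetic stand-ins (an exponential sample for continuous crazy-quilt-like areas, a two-component Gaussian mixture for regular-quilt-like areas); these are not the quilt data themselves. A crude 1-D two-means fit separates the two cases: the mixture collapses to a tiny within-cluster variance, the continuous sample does not.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic patch areas: mixture of two motif sizes vs. a continuous distribution
regular = np.concatenate([rng.normal(10, 0.5, 200), rng.normal(25, 0.5, 200)])
crazy = rng.exponential(scale=15.0, size=400)

def two_means_variance_ratio(x, n_iter=50):
    """Within-cluster variance of a 1-D 2-means fit, relative to total variance.
    Small values indicate a well-separated two-component mixture."""
    c = np.percentile(x, [25, 75]).astype(float)        # initial centres
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([x[labels == k].mean() for k in (0, 1)])
    within = sum(((x[labels == k] - c[k]) ** 2).sum() for k in (0, 1))
    return within / ((x - x.mean()) ** 2).sum()

r_reg = two_means_variance_ratio(regular)
r_crazy = two_means_variance_ratio(crazy)
print(r_reg < r_crazy)
```

The paper's actual analysis uses spatial point-pattern statistics and Strauss-process fits; this toy only illustrates the mixture-versus-continuous contrast in the area distributions.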
A Potential Proxy of the Second Integral of Motion (I2) in a Rotating Barred Potential
NASA Astrophysics Data System (ADS)
Shen, Juntai; Qin, Yujing
2017-06-01
The only analytically known integral of motion in a 2-D rotating barred potential is the Jacobi constant (EJ). In addition to EJ, regular orbits also obey a second integral of motion (I2) whose analytical form is unknown. We show that the time-averaged characteristics of angular momentum in a rotating bar potential resemble the behavior of the analytically unknown I2. For a given EJ, regular orbits of various families follow a continuous sequence in the space of net angular momentum and its dispersion ("angular momentum space"). In the limiting case where regular orbits of the well-known x1/x4 orbital families dominate the phase space, the orbital sequence can be monotonically traced by a single parameter, namely the ratio of mean angular momentum to its dispersion. This ratio behaves well even in the 3-D case, and thus may be used as a proxy of I2. This proxy offers an efficient way to probe the phase-space structure and a convenient new scheme of orbit classification, complementing the frequency-mapping technique.
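A hedged sketch of the proxy just described: integrate a single orbit in a 2-D rotating barred (here logarithmic) potential and form the ratio of the time-averaged angular momentum to its dispersion. The potential, pattern speed, integrator, and initial conditions are illustrative choices, not the models used in the paper.

```python
import numpy as np

V0, RC, Q, OMEGA_P = 1.0, 0.14, 0.9, 1.0   # toy bar-potential parameters

def accel(x, y):
    """Acceleration from Phi = 0.5*V0^2*ln(Rc^2 + x^2 + y^2/q^2)."""
    denom = RC**2 + x**2 + (y / Q)**2
    return -V0**2 * x / denom, -V0**2 * y / (Q**2 * denom)

def integrate(w0, dt=1e-3, n=100_000):
    """Symplectic-Euler integration in the inertial frame; the bar's rotation
    is handled by evaluating the force in bar-frame coordinates."""
    x, y, vx, vy = w0
    Lz = np.empty(n)
    for i in range(n):
        c, s = np.cos(OMEGA_P * i * dt), np.sin(OMEGA_P * i * dt)
        xb, yb = c * x + s * y, -s * x + c * y          # rotate into bar frame
        axb, ayb = accel(xb, yb)
        ax, ay = c * axb - s * ayb, s * axb + c * ayb   # rotate force back
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        Lz[i] = x * vy - y * vx
    return Lz

Lz = integrate((0.5, 0.0, 0.0, 1.2))
proxy = Lz.mean() / Lz.std()       # candidate single-parameter proxy of I2
print(proxy)
```

In the paper's scheme, orbits at fixed EJ would be placed along the sequence traced by this ratio rather than classified one by one.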
NASA Astrophysics Data System (ADS)
Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.
2018-07-01
Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregular-gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which also depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregular-gridded sampling in time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.
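The baseline-dependent interval choice can be sketched as follows: pick each baseline's averaging time so that the estimated time-smearing loss at the edge of the field of interest is the same for all baselines, which makes the interval scale inversely with baseline length. The loss model (worst-case linear fringe rate, sinc amplitude factor) and all numbers below are illustrative assumptions, not the paper's exact expressions.

```python
import numpy as np

OMEGA_E = 7.292e-5          # Earth rotation rate [rad/s]
C = 299792458.0             # speed of light [m/s]

def averaging_interval(b_m, freq_hz, fov_rad, max_loss=0.01, t_max=60.0):
    """Averaging time [s] keeping the fractional amplitude loss below
    max_loss at radius fov_rad, using a linear-phase (sinc) smearing model."""
    lam = C / freq_hz
    # worst-case fringe rate at the field edge, in cycles per second
    rate = OMEGA_E * (b_m / lam) * fov_rad
    # sinc(pi x) ~ 1 - (pi x)^2 / 6, so loss < max_loss when
    # x = rate * t < sqrt(6 * max_loss) / pi
    t = np.sqrt(6.0 * max_loss) / (np.pi * rate)
    return min(t, t_max)

baselines = np.array([100.0, 1000.0, 10000.0])     # metres
ts = [averaging_interval(b, 1.4e9, np.deg2rad(1.0)) for b in baselines]
print([round(t, 2) for t in ts])
# longer baselines get proportionally shorter averaging intervals
```

Short baselines hit the cap `t_max` (heavy compression), while long baselines are averaged briefly, keeping decorrelation roughly constant across the array.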
Regular Gleason Measures and Generalized Effect Algebras
NASA Astrophysics Data System (ADS)
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the framework of generalized effect algebras.
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2012-11-21
New x-ray phase contrast imaging techniques that do not use synchrotron radiation face a common problem: the negative effects of finite source size and limited spatial resolution. These effects swamp the fine phase contrast fringes and make them almost undetectable. To alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free-space-propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate of these methods for phase contrast image restoration; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
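Of the three methods compared, Wiener filtering is the simplest to sketch. Below is a minimal 1-D version in which a synthetic fringe profile and a Gaussian blur stand in for the phase contrast signal and the finite-source-size PSF; the noise-to-signal ratio `nsr` is an illustrative constant.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener filter: F_hat = conj(H) G / (|H|^2 + NSR)."""
    H = np.fft.fft(psf)
    G = np.fft.fft(blurred)
    return np.real(np.fft.ifft(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

rng = np.random.default_rng(0)
n = 256
x = np.arange(n)
signal = np.zeros(n)
signal[100:110] = 1.0                                   # sharp "fringe" feature
psf = np.exp(-0.5 * (((x + n // 2) % n - n // 2) / 3.0) ** 2)
psf /= psf.sum()                                        # normalized Gaussian blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
blurred += rng.normal(0, 0.005, n)                      # detector noise
restored = wiener_deconvolve(blurred, psf, nsr=1e-3)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

c_blur, c_rest = corr(blurred, signal), corr(restored, signal)
print(c_rest > c_blur)
```

The `nsr` term is what distinguishes this from naive inverse filtering: where the PSF spectrum is small, it damps the amplification of noise, the same trade-off ForWaRD then refines with a wavelet shrinkage stage.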
Regularity and predictability of human mobility in personal space.
Austin, Daniel; Cross, Robin M; Hayes, Tamara; Kaye, Jeffrey
2014-01-01
Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out-of-home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.
NASA Astrophysics Data System (ADS)
Lorquet, J. C.
2017-04-01
The atom-diatom interaction is studied by classical mechanics using Jacobi coordinates (R, r, θ). Reactivity criteria that go beyond the simple requirement of transition state theory (i.e., PR* > 0) are derived in terms of specific initial conditions. Trajectories that exactly fulfill these conditions cross the conventional dividing surface used in transition state theory (i.e., the plane in configuration space passing through a saddle point of the potential energy surface and perpendicular to the reaction coordinate) only once. Furthermore, they are observed to be strikingly similar and to form a tightly packed bundle of perfectly collimated trajectories in the two-dimensional (R, r) configuration space, although their angular motion is highly specific for each one. Particular attention is paid to symmetrical transition states (i.e., either collinear or T-shaped with C2v symmetry) for which decoupling between angular and radial coordinates is observed, as a result of selection rules that reduce to zero Coriolis couplings between modes that belong to different irreducible representations. Liapunov exponents are equal to zero and Hamilton's characteristic function is planar in that part of configuration space that is visited by reactive trajectories. Detailed consideration is given to the concept of average reactive trajectory, which starts right from the saddle point and which is shown to be free of curvature-induced Coriolis coupling. The reaction path Hamiltonian model, together with a symmetry-based separation of the angular degree of freedom, provides an appropriate framework that leads to the formulation of an effective two-dimensional Hamiltonian. The success of the adiabatic approximation in this model is due to the symmetry of the transition state, not to a separation of time scales. Adjacent trajectories, i.e., those that do not exactly fulfill the reactivity conditions, have similar characteristics, but the quality of the approximation is lower.
At higher energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly assumed in statistical theories is inappropriate for reactive trajectories.
Causality in time-neutral cosmologies
NASA Astrophysics Data System (ADS)
Kent, Adrian
1999-02-01
Gell-Mann and Hartle (GMH) have recently considered time-neutral cosmological models in which the initial and final conditions are independently specified, and several authors have investigated experimental tests of such models. We point out here that GMH time-neutral models can allow superluminal signaling, in the sense that it can be possible for observers in those cosmologies, by detecting and exploiting regularities in the final state, to construct devices which send and receive signals between space-like separated points. In suitable cosmologies, any single superluminal message can be transmitted with probability arbitrarily close to one by the use of redundant signals. However, the outcome probabilities of quantum measurements generally depend on precisely which past and future measurements take place. As the transmission of any signal relies on quantum measurements, its transmission probability is similarly context dependent. As a result, the standard superluminal signaling paradoxes do not apply. Despite their unusual features, the models are internally consistent. These results illustrate an interesting conceptual point. The standard view of Minkowski causality is not an absolutely indispensable part of the mathematical formalism of relativistic quantum theory. It is contingent on the empirical observation that naturally occurring ensembles can be naturally pre-selected but not post-selected.
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced to the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the potential of our approach.
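A hedged, much-simplified 1-D stand-in for the discrete-labeling idea: each control point chooses one displacement label so that a data (similarity) cost plus a pairwise smoothness cost is minimized. The paper optimizes over a graph with linear programming; on a 1-D chain, exact dynamic programming suffices, so the example below uses that instead. All signals and parameter values are illustrative.

```python
import numpy as np

def register_chain(fixed, moving, n_ctrl=8, labels=range(-3, 4), lam=0.5):
    """One displacement label per control point, minimizing data + smoothness."""
    labels = list(labels)
    n = fixed.size
    centres = np.linspace(0, n - 1, n_ctrl).astype(int)

    def data_cost(c, d):
        """Local dissimilarity around control point c for displacement d."""
        lo, hi = max(0, c - 8), min(n, c + 8)
        src = np.clip(np.arange(lo, hi) + d, 0, n - 1)
        return float(((fixed[lo:hi] - moving[src]) ** 2).mean())

    D = np.array([[data_cost(c, d) for d in labels] for c in centres])
    S = lam * np.abs(np.subtract.outer(labels, labels))   # pairwise smoothness
    cost = D[0].copy()
    back = []
    for i in range(1, n_ctrl):                            # exact chain DP (Viterbi)
        total = cost[:, None] + S
        back.append(total.argmin(axis=0))
        cost = total.min(axis=0) + D[i]
    path = [int(cost.argmin())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [labels[k] for k in reversed(path)]

x = np.linspace(0, 4 * np.pi, 200, endpoint=False)
fixed = np.sin(x)
moving = np.roll(fixed, -2)        # moving equals fixed shifted by 2 samples
res = register_chain(fixed, moving)
print(res)
```

Because the known shift is exactly representable by one label, all control points should agree on it; in the volumetric setting the same label costs live on a 3-D grid graph, which is why a stronger solver (LP) is needed there.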
NASA Astrophysics Data System (ADS)
Alves, Claudianor O.; Miyagaki, Olímpio H.
2017-08-01
In this paper, we establish some results concerning the existence, regularity, and concentration phenomenon of nontrivial solitary waves for a class of generalized variable-coefficient Kadomtsev-Petviashvili equations. Variational methods are used to obtain an existence result, as well as to study the concentration phenomenon, while the regularity is more delicate because we are dealing with functions in an anisotropic Sobolev space.
Regular-to-Chaotic Tunneling Rates: From the Quantum to the Semiclassical Regime
NASA Astrophysics Data System (ADS)
Löck, Steffen; Bäcker, Arnd; Ketzmerick, Roland; Schlagheck, Peter
2010-03-01
We derive a prediction of dynamical tunneling rates from regular to chaotic phase-space regions combining the direct regular-to-chaotic tunneling mechanism in the quantum regime with an improved resonance-assisted tunneling theory in the semiclassical regime. We give a qualitative recipe for identifying the relevance of nonlinear resonances in a given ℏ regime. For systems with one or multiple dominant resonances we find excellent agreement with numerics.
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
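The Gaussian posterior approximation can be sketched as follows. The paper uses Expectation Propagation; as a simpler stand-in, this toy uses the Laplace approximation: Newton's method finds the posterior mode of a Poisson GLM with a Gaussian prior, and the local curvature supplies the approximate posterior covariance used for Bayesian confidence intervals. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_dim = 500, 5
X = rng.normal(size=(n_obs, n_dim))                 # stimulus design matrix
w_true = np.array([0.8, -0.5, 0.0, 0.3, 0.0])
y = rng.poisson(np.exp(X @ w_true))                 # simulated spike counts

prior_prec = np.eye(n_dim)                          # Gaussian prior N(0, I)
w = np.zeros(n_dim)
for _ in range(50):                                 # Newton ascent on log-posterior
    mu = np.exp(X @ w)
    grad = X.T @ (y - mu) - prior_prec @ w
    hess = X.T @ (X * mu[:, None]) + prior_prec     # negative Hessian (SPD)
    step = np.linalg.solve(hess, grad)
    w = w + step
    if np.linalg.norm(step) < 1e-10:
        break

cov = np.linalg.inv(hess)                           # approximate posterior covariance
stderr = np.sqrt(np.diag(cov))                      # per-weight credible-interval scale
print(w.round(2))
```

Under this Gaussian approximation the posterior mean and mode coincide; EP, as used in the paper, produces a generally different (and often better-calibrated) Gaussian fit.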
Hadfield performs regular maintenance on Biolab, in the Columbus Module
2013-02-20
ISS034-E-051715 (20 Feb. 2013) --- Canadian Space Agency astronaut Chris Hadfield, Expedition 34 flight engineer, performs routine maintenance on Biolab in the Columbus Module aboard the International Space Station.
Critical end point in the presence of a chiral chemical potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Z. -F.; Cloët, I. C.; Lu, Y.
A class of Polyakov-loop-modified Nambu-Jona-Lasinio models has been used to support a conjecture that numerical simulations of lattice-regularized QCD defined with a chiral chemical potential can provide information about the existence and location of a critical end point in the QCD phase diagram drawn in the plane spanned by baryon chemical potential and temperature. That conjecture is challenged by conflicts between the model results and analyses of the same problem using simulations of lattice-regularized QCD (lQCD) and well-constrained Dyson-Schwinger equation (DSE) studies. We find the conflict is resolved in favor of the lQCD and DSE predictions when both a physically motivated regularization is employed to suppress the contribution of high-momentum quark modes in the definition of the effective potential connected with the Polyakov-loop-modified Nambu-Jona-Lasinio models and the four-fermion coupling in those models does not react strongly to changes in the mean field that is assumed to mock-up Polyakov-loop dynamics. With the lQCD and DSE predictions thus confirmed, it seems unlikely that simulations of lQCD with μ5 > 0 can shed any light on a critical end point in the regular QCD phase diagram.
Voxel inversion of airborne electromagnetic data
NASA Astrophysics Data System (ADS)
Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.
2013-12-01
Inversion of electromagnetic data usually refers to a model space linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process where valuable information is easily lost. The integration of prior information, e.g. from boreholes, is likewise difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly on a voxel grid disconnected from the actual measuring points, which allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (such as resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and direction of the "virtual" horizontal stratification, is defined for each 1D data set. For EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position.
B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoints of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolation values at the centres of the mesh cells. The new definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by defining the geophysical model space on a voxel (hydro)geological grid. This also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, such as resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points. In fact, the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council of Strategic Research under grant number DSF 11-116763.
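Step B can be sketched as follows, assuming inverse-distance weighting for the interpolation function f: resistivity lives on fixed voxel nodes, and a "virtual" 1-D model at an arbitrary sounding position is filled by interpolating at the midpoints of the virtual layers. Node layout, property values, layer thicknesses, and the power parameter are toy choices, not survey data.

```python
import numpy as np

def idw(nodes, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation at one 3-D query point."""
    d = np.linalg.norm(nodes - query, axis=1)
    if d.min() < eps:                          # query coincides with a node
        return float(values[d.argmin()])
    w = 1.0 / d**power
    return float(w @ values / w.sum())

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(200, 3))     # fixed voxel nodes (x, y, depth)
values = 10.0 + 0.5 * nodes[:, 2]              # toy resistivity rising with depth

def virtual_1d_model(x, y, thicknesses):
    """Resistivity of each virtual layer, sampled at the layer midpoint."""
    tops = np.concatenate([[0.0], np.cumsum(thicknesses)[:-1]])
    mids = tops + np.asarray(thicknesses) / 2.0
    return [idw(nodes, values, np.array([x, y, z])) for z in mids]

model = virtual_1d_model(50.0, 50.0, [5.0, 10.0, 20.0, 40.0])
print([round(r, 1) for r in model])
```

Because the interpolated model depends only on the node values, borehole constraints can be enforced through the same function f at the borehole location, exactly as the abstract describes.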
Exploring the spectrum of regularized bosonic string theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambjørn, J., E-mail: ambjorn@nbi.dk; Makeenko, Y., E-mail: makeenko@nbi.dk
2015-03-15
We implement a UV regularization of the bosonic string by truncating its mode expansion and keeping the regularized theory “as diffeomorphism invariant as possible.” We compute the regularized determinant of the 2d Laplacian for the closed string winding around a compact dimension, obtaining the effective action in this way. The minimization of the effective action reliably determines the energy of the string ground state for a long string and/or for a large number of space-time dimensions. We discuss the possibility of a scaling limit when the cutoff is taken to infinity.
The Amusement Arcade as a Social Space for Adolescents: An Empirical Study.
ERIC Educational Resources Information Center
Fisher, Sue
1995-01-01
Gathered data on arcade use in adolescents (n=460). "Regular" arcade visitors varied sufficiently from the more "casual" visitors in their orientation to, and experience in, arcades. Regular visitors were more likely to score positively on indices screening for addiction. Raises questions about children's access to potentially…
Paparo, M.; Benko, J. M.; Hareter, M.; ...
2016-05-11
In this study, a sequence search method was developed to search for regular frequency spacing in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacing in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method, presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5–21 d⁻¹ region, where the pairs of the sequences are shifted (by between 0.5 and 0.59 d⁻¹) by twice the value of the estimated rotational splitting frequency (0.269 d⁻¹). The general conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method and by FT (Fourier transform of the frequencies), and the statistics of the shifts, are also compared. In many stars more than one almost equally valid spacing appeared. The model frequencies of FG Vir and their rotationally split components were used to formulate the possible explanation that one spacing is the large separation while the other is the sum of the large separation and the rotational frequency. In CoRoT 102675756, the two spacings (2.249 and 1.977 d⁻¹) are in better agreement with the sum of a possible 1.710 d⁻¹ large separation and two or one times, respectively, the value of the rotational frequency.
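An illustrative reimplementation of the sequence-search idea (not the authors' algorithm): treat each frequency pair as a (start, spacing) seed and greedily extend it with frequencies that continue the quasi-equal spacing within a tolerance, keeping sequences with at least four members. The synthetic frequency list, tolerance, and spacing values are stand-ins.

```python
import numpy as np
from itertools import combinations

def find_sequences(freqs, tol=0.05, min_len=4):
    """Return sequences of >= min_len quasi-equally spaced frequencies."""
    freqs = np.sort(np.asarray(freqs))
    found = set()
    for i, j in combinations(range(len(freqs)), 2):
        f0, spacing = freqs[i], freqs[j] - freqs[i]
        if spacing <= tol:
            continue
        seq = [f0]
        target = f0 + spacing
        for f in freqs[i + 1:]:
            if abs(f - target) <= tol:
                seq.append(f)
                target = f + spacing      # quasi-equal: re-anchor at each member
        if len(seq) >= min_len:
            found.add(tuple(np.round(seq, 3)))
    return sorted(found, key=len, reverse=True)

# synthetic delta Scuti-like list: one series spaced by 2.25 d^-1 among random peaks
rng = np.random.default_rng(2)
series = 5.0 + 2.25 * np.arange(6) + rng.normal(0, 0.005, 6)
noise = rng.uniform(4, 22, 10)
seqs = find_sequences(np.concatenate([series, noise]))
print(seqs[0])
```

Plotting each recovered sequence against frequency modulo the candidate spacing would reproduce the echelle-ridge view used in the paper.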
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
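The coefficient-space formulation lends itself to a compact illustration. Below is a minimal sketch (assumed details, not the authors' code) of elastic-net-penalized least squares over a kernelized dictionary, solved by proximal-gradient (ISTA) steps; the Gaussian kernel and the penalty weights `lam1`/`lam2` are illustrative choices only.

```python
import numpy as np

# Minimal sketch (assumed details, not the authors' code): elastic-net
# penalized least squares over a kernelized dictionary, solved by
# proximal-gradient (ISTA) steps. The Gaussian kernel and the penalty
# weights lam1/lam2 are illustrative choices only.

def gaussian_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def elastic_net_kernel(X, y, lam1=0.1, lam2=0.1, n_iter=500):
    K = gaussian_kernel(X, X)                 # dictionary need not be Mercer
    L = np.linalg.norm(K, 2) ** 2 + lam2      # Lipschitz constant of smooth part
    c = np.zeros(len(y))
    for _ in range(n_iter):
        grad = K.T @ (K @ c - y) + lam2 * c   # gradient of the smooth terms
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # l1 prox
    return c

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0])
coef = elastic_net_kernel(X, y)
```

The l1 term drives many coefficients exactly to zero (sparseness) while the l2 term stabilizes the solution, mirroring the sparsity/stability interplay discussed in the letter.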
ERIC Educational Resources Information Center
Randal, Judith
1978-01-01
The "Getaway Special" is NASA's semiofficial program for low-budget researchers, who can arrange bookings for their own space experiments on regular flights of the space shuttle. Information about arranging for NASA to take individual experiment packages is presented. (LBH)
Regular and Chaotic Spatial Distribution of Bose-Einstein Condensed Atoms in a Ratchet Potential
NASA Astrophysics Data System (ADS)
Li, Fei; Xu, Lan; Li, Wenwu
2018-02-01
We study the regular and chaotic spatial distribution of Bose-Einstein condensed atoms with a space-dependent nonlinear interaction in a ratchet potential. The system supports a space-dependent atomic current that can be tuned via the Feshbach resonance technique. In the presence of this space-dependent atomic current and a weak ratchet potential, Smale-horseshoe chaos is studied and the Melnikov chaotic criterion is obtained. Numerical simulations show that the ratio between the intensities of the optical potentials forming the ratchet potential, the wave vector of the laser producing the ratchet potential, or the wave vector of the modulating laser can be chosen as control parameters to induce or suppress chaotic spatial distributions.
Long-Time Behavior and Critical Limit of Subcritical SQG Equations in Scale-Invariant Sobolev Spaces
NASA Astrophysics Data System (ADS)
Coti Zelati, Michele
2018-02-01
We consider the subcritical SQG equation in its natural scale-invariant Sobolev space and prove the existence of a global attractor of optimal regularity. The proof is based on a new energy estimate in Sobolev spaces to bootstrap the regularity to the optimal level, derived by means of nonlinear lower bounds on the fractional Laplacian. This estimate appears to be new in the literature and allows a sharp use of the subcritical nature of the L^∞ bounds for this problem. As a by-product, we obtain attractors for weak solutions as well. Moreover, we study the critical limit of the attractors and prove their stability and upper semicontinuity with respect to the strength of the diffusion.
Living in Space. A Preschool Aerospace Curriculum Module.
ERIC Educational Resources Information Center
Young Astronaut Council, Washington, DC.
This program is designed to be an extension of the regular curriculum providing preschool children with a firm foundation and life-long appreciation for space and space-related topics. The program delivers both classroom and at-home family activities which emphasize age-appropriate language, math, art, science, nutrition, and health concepts…
Visible, invisible and trapped ghosts as sources of wormholes and black universes
NASA Astrophysics Data System (ADS)
Bolokhov, S. V.; Bronnikov, K. A.; Korolyov, P. A.; Skvortsova, M. V.
2016-02-01
We construct explicit examples of globally regular static, spherically symmetric solutions in general relativity with scalar and electromagnetic fields, describing traversable wormholes with flat and AdS asymptotics and regular black holes, in particular, black universes. (A black universe is a regular black hole with an expanding, asymptotically isotropic space-time beyond the horizon.) Such objects exist in the presence of scalar fields with negative kinetic energy (“phantoms”, or “ghosts”), which are not observed under usual physical conditions. To account for that, we consider what we call “trapped ghosts” (scalars whose kinetic energy is only negative in a strong-field region of space-time) and “invisible ghosts”, i.e., phantom scalar fields sufficiently rapidly decaying in the weak-field region. The resulting configurations contain different numbers of Killing horizons, from zero to four.
A Varifold Approach to Surface Approximation
NASA Astrophysics Data System (ADS)
Buet, Blanche; Leonardi, Gian Paolo; Masnou, Simon
2017-11-01
We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.
Generalised solutions for fully nonlinear PDE systems and existence-uniqueness theorems
NASA Astrophysics Data System (ADS)
Katzourakis, Nikos
2017-07-01
We introduce a new theory of generalised solutions which applies to fully nonlinear PDE systems of any order and allows for merely measurable maps as solutions. This approach bypasses the standard problems arising by the application of Distributions to PDEs and is not based on either integration by parts or on the maximum principle. Instead, our starting point builds on the probabilistic representation of derivatives via limits of difference quotients in the Young measures over a toric compactification of the space of jets. After developing some basic theory, as a first application we consider the Dirichlet problem and we prove existence-uniqueness-partial regularity of solutions to fully nonlinear degenerate elliptic 2nd order systems and also existence of solutions to the ∞-Laplace system of vectorial Calculus of Variations in L∞.
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based degradation characteristics of diffractive imaging and the corresponding image restoration methods have received little study. In this paper, a model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. We then present an approach for solving the resulting equation, in which multiple norms coexist with multiple regularization (prior) parameters. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block decomposition into isoplanatic regions. Experimentally, the proposed algorithm demonstrates multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This provides a scientific basis for, and has potential application prospects in, future space applications of diffractive membrane imaging technology.
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.
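A toy sketch in the spirit of temporally structured sparsity (this is not the proposed motion-adaptive algorithm itself): two frames of a piecewise-constant "image" are recovered from undersampled k-space by alternating a data-consistency step with soft-thresholding of the inter-frame difference, which is sparse when motion between frames is small. The sampling mask and threshold `lam` are illustrative assumptions.

```python
import numpy as np

# Toy sketch, not the paper's MASTeR-style method: recover two 1-D frames
# from undersampled k-space by alternating data consistency with
# soft-thresholding of the (sparse) inter-frame difference.

rng = np.random.default_rng(1)
n = 64
x0 = np.zeros(n); x0[20:40] = 1.0            # frame 1
x1 = np.roll(x0, 2)                          # frame 2, slightly shifted

mask = rng.random(n) < 0.6                   # random k-space undersampling
y0 = mask * np.fft.fft(x0, norm="ortho")     # measured k-space, frame 1
y1 = mask * np.fft.fft(x1, norm="ortho")     # measured k-space, frame 2

def data_consistency(x, y):
    # re-impose the measured values on the observed k-space coefficients
    X = np.fft.fft(x, norm="ortho")
    return np.real(np.fft.ifft(np.where(mask, y, X), norm="ortho"))

a, b = np.zeros(n), np.zeros(n)
lam = 0.02
for _ in range(100):
    a, b = data_consistency(a, y0), data_consistency(b, y1)
    d = b - a                                # temporal difference (sparse)
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)  # soft-threshold
    m = (a + b) / 2                          # re-assemble the two frames
    a, b = m - d / 2, m + d / 2
```

The shrinkage step couples the frames through their difference, which is the simplest stand-in for modelling temporal sparsity with transformations between neighboring images.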
NASA Astrophysics Data System (ADS)
Lansard, Erick; Frayssinhes, Eric; Palmade, Jean-Luc
Basically, the problem of designing a multisatellite constellation involves many parameters with a vast number of possible combinations: total number of satellites, orbital parameters of each individual satellite, number of orbital planes, number of satellites in each plane, spacings between satellites of each plane, spacings between orbital planes, and relative phasings between consecutive orbital planes. Fortunately, some authors have theoretically solved this complex problem under simplified assumptions: the permanent (or continuous) coverage by single and multiple satellites of the whole Earth and of zonal areas has been entirely solved from a purely geometrical point of view. These solutions exhibit strong symmetry properties (e.g. Walker, Ballard, Rider, Draim constellations): altitude and inclination are identical, and orbital planes and satellites are regularly spaced. The problem with such constellations is their oversimplified and restrictive geometrical assumptions. In fact, the evaluation function used implicitly only takes into account the point-to-point visibility between users and satellites and does not address very important constraints and considerations that become mandatory when designing a real satellite system (e.g. robustness to satellite failures, total system cost, common view between satellites and ground stations, service availability and satellite reliability, launch and early operations phase, production constraints, etc.). An original and global methodology relying on a powerful optimization tool based on genetic algorithms has been developed at ALCATEL ESPACE. In this approach, symmetrical constellations can be used as initial conditions of the optimization process, together with specific evaluation functions. A multi-criteria performance analysis is conducted and presented here in a parametric way in order to identify and evaluate the main sensitive parameters.
Quantitative results are given for three examples in the fields of navigation, telecommunication and multimedia satellite systems. In particular, a new design pattern with very efficient properties in terms of robustness to satellite failures is presented and compared with classical Walker patterns.
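The genetic-algorithm loop can be illustrated schematically. The sketch below is a toy, not the ALCATEL ESPACE tool: the genome encodes Walker-like parameters (number of orbital planes, satellites per plane, inter-plane phasing), and the fitness is a made-up smooth proxy standing in for a real multi-criteria evaluation function.

```python
import numpy as np

# Toy illustration of the optimization loop only (not the ALCATEL ESPACE
# tool). The fitness is an assumed stand-in: diminishing coverage benefit
# minus system cost, with a mild preference for a "balanced" phasing.

rng = np.random.default_rng(42)

def fitness(genome):
    planes, per_plane, phasing = genome
    n_sats = planes * per_plane
    return n_sats ** 0.5 - 0.05 * n_sats - 0.01 * abs(phasing - planes // 2)

def random_genome():
    return (int(rng.integers(1, 13)), int(rng.integers(1, 13)),
            int(rng.integers(0, 12)))

def mutate(g):
    g = list(g)
    i = int(rng.integers(3))                  # perturb one gene at random
    g[i] = max(1 if i < 2 else 0, g[i] + int(rng.integers(-2, 3)))
    return tuple(g)

pop = [random_genome() for _ in range(30)]
for _ in range(60):                           # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                          # keep the best designs
    pop = elite + [mutate(elite[int(rng.integers(10))]) for _ in range(20)]

best = max(pop, key=fitness)
```

In a real tool the fitness would evaluate coverage, robustness to failures and cost from an orbit propagator; the elitist select-and-mutate structure of the loop stays the same.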
Exercisers achieve greater acute exercise-induced mood enhancement than nonexercisers.
Hoffman, Martin D; Hoffman, Debi Rufi
2008-02-01
To determine whether a single session of exercise of appropriate intensity and duration for aerobic conditioning has a different acute effect on mood for nonexercisers than for regular exercisers. Repeated-measures design. Research laboratory. Adult nonexercisers, moderate exercisers, and ultramarathon runners (8 men, 8 women in each group). Treadmill exercise at self-selected speeds to induce a rating of perceived exertion (RPE) of 13 (somewhat hard) for 20 minutes, preceded and followed by 5 minutes at an RPE of 9 (very light). Profile of Mood States before and 5 minutes after exercise. Vigor increased by a mean +/- standard deviation of 8+/-7 points (95% confidence interval [CI], 5-12) among the ultramarathon runners and 5+/-4 points (95% CI, 2-9) among the moderate exercisers, with no improvement among the nonexercisers. Fatigue decreased by 5+/-6 points (95% CI, 2-8) for the ultramarathon runners and 4+/-4 points (95% CI, 1-7) for the moderate exercisers, with no improvement among the nonexercisers. Postexercise total mood disturbance decreased by a mean of 21+/-16 points (95% CI, 12-29) among the ultramarathon runners, 16+/-10 points (95% CI, 7-24) among the moderate exercisers, and 9+/-13 points (95% CI, 1-18) among the nonexercisers. A single session of moderate aerobic exercise improves vigor and decreases fatigue among regular exercisers but causes no change in these scores for nonexercisers. Although total mood disturbance improves postexercise in both exercisers and nonexercisers, the improvement among regular exercisers is approximately twice that among nonexercisers. This limited postexercise mood improvement among nonexercisers may be an important deterrent to persistence with an exercise program.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The minimizer of the variational regularization functional can be computed iteratively with a primal-dual fixed point algorithm using a soft-thresholding operation. Choosing the soft-thresholding parameter …
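The core of the variational scheme can be sketched under simplifying assumptions: iterative soft-thresholding (ISTA) alternates a gradient step on the data fit with shrinkage of orthonormal wavelet coefficients. Here `A` is a toy blur matrix rather than a tomographic projector, and the threshold `mu` is an arbitrary illustrative value.

```python
import numpy as np

# Sketch of the variational idea only: ISTA with a 1-level orthonormal
# Haar wavelet penalty. A is a toy blur operator, not a Radon transform.

def haar(x):                              # orthonormal 1-level Haar analysis
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def ihaar(c):                             # exact inverse (synthesis)
    m = len(c) // 2
    a, d = c[:m], c[m:]
    x = np.empty(2 * m)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

n = 64
A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # simple local blur
A /= A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
x_true = np.zeros(n); x_true[24:40] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=n)

mu, L = 1e-3, np.linalg.norm(A, 2) ** 2   # threshold and Lipschitz constant
x = np.zeros(n)
for _ in range(300):
    c = haar(x - A.T @ (A @ x - y) / L)   # gradient step, then analysis
    c = np.sign(c) * np.maximum(np.abs(c) - mu / L, 0.0)  # soft-threshold
    x = ihaar(c)
```

The quality of the result hinges on the size of the soft-thresholding parameter, which is exactly the selection problem the abstract raises.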
Reconstruction of 3D Models from Point Clouds with Hybrid Representation
NASA Astrophysics Data System (ADS)
Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.
2018-05-01
The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, because building structures differ significantly in complexity, 3D reconstruction remains challenging, especially for freeform surfaces. In this paper, we present a new reconstruction algorithm that represents 3D building models as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with an over-segmentation of the unstructured point data; a region-growing approach based on the adjacency graph of super-voxels then collapses these super-voxels, and the freeform surfaces are clustered from the voxels filtered by a thickness threshold. To obtain the regular planar primitives, the remaining voxels with larger flatness are further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner within the framework of a global energy optimization. We implemented the proposed algorithms and tested them mainly on two point clouds that differ in point density and urban character; experimental results on complex building structures illustrate the efficacy of the proposed framework.
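The region-growing step over a super-voxel adjacency graph can be sketched in a few lines. The details below (normal-angle criterion, threshold value, toy adjacency) are assumptions for illustration, not the paper's implementation: neighbouring voxels merge when their normals are nearly parallel, so planar regions grow into large clusters while freeform parts stay fragmented.

```python
import numpy as np
from collections import deque

# Schematic region growing over a super-voxel adjacency graph (assumed
# details, not the paper's code): merge neighbours whose normals are
# nearly parallel.

def region_grow(normals, adjacency, angle_thresh_deg=10.0):
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(normals), dtype=int)   # -1 means unlabelled
    region = 0
    for seed in range(len(normals)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:                              # breadth-first growth
            v = queue.popleft()
            for w in adjacency[v]:
                if labels[w] == -1 and abs(normals[v] @ normals[w]) >= cos_t:
                    labels[w] = region
                    queue.append(w)
        region += 1
    return labels

# toy scene: super-voxels 0-3 lie on one plane, 4-5 on a perpendicular one
normals = np.array([[0.0, 0.0, 1.0]] * 4 + [[1.0, 0.0, 0.0]] * 2)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
labels = region_grow(normals, adjacency)
```

On the toy scene the two planes come out as two regions; in the paper's pipeline an analogous grouping separates planar primitives from the freeform remainder.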
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l 1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Dissipative structure and global existence in critical space for Timoshenko system of memory type
NASA Astrophysics Data System (ADS)
Mori, Naofumi
2018-08-01
In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. We first consider the linearized system: applying the energy method in the Fourier space, we derive a pointwise estimate of the solution in the Fourier space, which gives the optimal decay estimate of the solution. Next, we give a characterization of the dissipative structure of the system by using spectral analysis, which confirms that our pointwise estimate is optimal. We then consider the nonlinear system: we show that global-in-time existence and uniqueness can be proved under a minimal regularity assumption in the critical Sobolev space H^2. In the proof we do not need any time-weighted norm, in contrast to recent works; we use just an energy method, improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.
Nonrigid iterative closest points for registration of 3D biomedical surfaces
NASA Astrophysics Data System (ADS)
Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee
2018-01-01
Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICPs), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation, and it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method performs well when the models exhibit no global pose differences or significant bending, for example, families of similar shapes such as human femur and vertebra models.
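The interplay of the two terms can be shown on a simplified curve analogue (an assumed toy setup, not the NICPs implementation): each iteration pulls source points toward their closest target points (distance term) while a 1-ring averaging of displacements keeps neighbours moving coherently, in the spirit of the Laplacian regularization.

```python
import numpy as np

# Toy curve registration, not the NICPs algorithm: nearest-point pull
# (distance term) plus 1-ring displacement averaging (regularization).

def nicp_step(src, tgt, alpha=0.5):
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    disp = tgt[d2.argmin(axis=1)] - src           # pull to nearest target
    smooth = disp.copy()                          # 1-ring averaging on a chain
    smooth[1:-1] = (disp[:-2] + 2 * disp[1:-1] + disp[2:]) / 4
    return src + alpha * smooth

def mean_nearest_dist(src, tgt):
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 50)
tgt = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
src = tgt + 0.05 * rng.normal(size=tgt.shape)     # noisy curve to register

err0 = mean_nearest_dist(src, tgt)
for _ in range(30):
    src = nicp_step(src, tgt)
err = mean_nearest_dist(src, tgt)
```

The averaging is what prevents individual points from snapping to spurious closest matches; the full method replaces the chain neighbourhood with the mesh 1-ring and uses multiple two-way correspondences.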
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metric of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Constrained H1-regularization schemes for diffeomorphic image registration
Mang, Andreas; Biros, George
2017-01-01
We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
Singular lensing from the scattering on special space-time defects
NASA Astrophysics Data System (ADS)
Mavromatos, Nick E.; Papavassiliou, Joannis
2018-01-01
It is well known that certain special classes of self-gravitating point-like defects, such as global (non gauged) monopoles, give rise to non-asymptotically flat space-times characterized by solid angle deficits, whose size depends on the details of the underlying microscopic models. The scattering of electrically neutral particles on such space-times is described by amplitudes that exhibit resonant behaviour when the scattering and deficit angles coincide. This, in turn, leads to ring-like structures where the cross sections are formally divergent ("singular lensing"). In this work, we revisit this particular phenomenon, with the twofold purpose of placing it in a contemporary and more general context, in view of renewed interest in the theory and general phenomenology of such defects, and, more importantly, of addressing certain subtleties that appear in the particular computation that leads to the aforementioned effect. In particular, by adopting a specific regularization procedure for the formally infinite Legendre series encountered, we manage to ensure the recovery of the Minkowski space-time, and thus the disappearance of the lensing phenomenon, in the no-defect limit, and the validity of the optical theorem for the elastic total cross section. In addition, the singular nature of the phenomenon is confirmed by means of an alternative calculation, which, unlike the original approach, makes no use of the generating function of the Legendre polynomials, but rather exploits the asymptotic properties of the Fresnel integrals.
46 CFR 108.151 - Two means required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... following must have at least 2 means of escape: (1) Each accommodation space with a deck area of at least 27 sq. meters (300 sq. ft.). (2) Each space, other than an accommodation space, that is continuously manned or used on a regular working basis except for routine security checks. (3) Weather deck areas...
46 CFR 108.151 - Two means required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... following must have at least 2 means of escape: (1) Each accommodation space with a deck area of at least 27 sq. meters (300 sq. ft.). (2) Each space, other than an accommodation space, that is continuously manned or used on a regular working basis except for routine security checks. (3) Weather deck areas...
46 CFR 108.151 - Two means required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... following must have at least 2 means of escape: (1) Each accommodation space with a deck area of at least 27 sq. meters (300 sq. ft.). (2) Each space, other than an accommodation space, that is continuously manned or used on a regular working basis except for routine security checks. (3) Weather deck areas...
46 CFR 108.151 - Two means required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... following must have at least 2 means of escape: (1) Each accommodation space with a deck area of at least 27 sq. meters (300 sq. ft.). (2) Each space, other than an accommodation space, that is continuously manned or used on a regular working basis except for routine security checks. (3) Weather deck areas...
46 CFR 108.151 - Two means required.
Code of Federal Regulations, 2010 CFR
2010-10-01
... following must have at least 2 means of escape: (1) Each accommodation space with a deck area of at least 27 sq. meters (300 sq. ft.). (2) Each space, other than an accommodation space, that is continuously manned or used on a regular working basis except for routine security checks. (3) Weather deck areas...
Testing & Validating: 3D Seismic Travel Time Tomography (Detailed Shallow Subsurface Imaging)
NASA Astrophysics Data System (ADS)
Marti, David; Marzan, Ignacio; Alvarez-Marron, Joaquina; Carbonell, Ramon
2016-04-01
A detailed, fully three-dimensional P-wave seismic velocity model was constrained by a high-resolution seismic tomography experiment. A regular, dense grid of shots and receivers was used to image a 500x500x200 m volume of the shallow subsurface. Ten Geode recorders, providing a 240-channel recording system, and a 250 kg weight drop were used for the acquisition. The recording geometry consisted of a 10x20 m geophone grid spacing and a 20x20 m staggered source spacing: a total of 1200 receivers and 676 source points. The study area is located within the Iberian Meseta, in Villar de Cañas (Cuenca, Spain). The lithological/geological target is a Neogene sedimentary sequence formed, from bottom to top, by a transition from gypsum to siltstones. The main objectives were to resolve the underground structure (contacts/discontinuities) and to constrain the 3D geometry of the lithology (possible cavities, faults/fractures). These targets were achieved by mapping the 3D distribution of the physical properties (P-wave velocity). The regularly spaced, dense acquisition grid forced the survey to be acquired in different stages and under a variety of weather conditions, so careful quality control was required. More than half a million first arrivals were inverted to provide a 3D Vp velocity model that reaches depths of 120 m in the areas with the highest ray coverage. An extended borehole campaign, which included geophysical measurements in some wells, provided uniquely tight constraints on the lithology and a validation scheme for the tomographic results. The final image reveals a laterally variable structure consisting of four different lithological units. In this methodological validation test, travel-time tomography shows a high capacity for imaging in detail the lithological contrasts of complex structures located at very shallow depths.
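The inversion of first-arrival times can be sketched with a deliberately small toy (not the survey's processing): straight rays through a slowness grid give a linear system t = G s, and a few Kaczmarz sweeps invert the under-determined system. Grid size, cell size and velocities below are illustrative assumptions.

```python
import numpy as np

# Toy straight-ray travel-time tomography (assumed setup, not the
# survey's code): t = G s, solved by cyclic Kaczmarz sweeps.

nx = 8
cell = 10.0                                  # cell size in metres
s_true = np.full((nx, nx), 1 / 2000.0)       # background slowness (2000 m/s)
s_true[3:5, 3:5] = 1 / 1000.0                # slow (low-velocity) anomaly

rows = []                                    # one horizontal and one vertical
for i in range(nx):                          # ray per grid line
    h = np.zeros((nx, nx)); h[i, :] = cell; rows.append(h.ravel())
    v = np.zeros((nx, nx)); v[:, i] = cell; rows.append(v.ravel())
G = np.array(rows)
t_obs = G @ s_true.ravel()                   # synthetic first-arrival times

s = np.full(nx * nx, 1 / 2000.0)             # start from the background model
for _ in range(50):                          # Kaczmarz sweeps
    for g, t in zip(G, t_obs):
        s += g * (t - g @ s) / (g @ g)
```

Real first-arrival tomography traces bent rays through the current velocity model and re-linearizes at each iteration, but the update structure (project each residual back along its ray) is the same idea.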
Effects of crustal layering on source parameter inversion from coseismic geodetic data
NASA Astrophysics Data System (ADS)
Amoruso, A.; Crescentini, L.; Fidani, C.
2004-10-01
We study the effect of a superficial layer overlying a half-space on the surface displacements caused by uniform slipping of a dip-slip normal rectangular fault. We compute static coseismic displacements using a 3-D analytical code for different characteristics of the layered medium, different fault geometries and different configurations of bench marks to simulate different kinds of geodetic data (GPS, Synthetic Aperture Radar, and levellings). We perform both joint and separate inversions of the three components of synthetic displacement without constraining fault parameters, apart from strike and rake, and using a non-linear global inversion technique under the assumption of homogeneous half-space. Differences between synthetic displacements computed in the presence of the superficial soft layer and in a homogeneous half-space do not show a simple regular behaviour, even if a few features can be identified. Consequently, also retrieved parameters of the homogeneous equivalent fault obtained by unconstrained inversion of surface displacements do not show a simple regular behaviour. We point out that the presence of a superficial layer may lead to misestimating several fault parameters both using joint and separate inversions of the three components of synthetic displacement and that the effects of the presence of the superficial layer can change whether all fault parameters are left free in the inversions or not. In the inversion of any kind of coseismic geodetic data, fault size and slip can be largely misestimated, but the product (fault length) × (fault width) × slip, which is proportional to the seismic moment for a given rigidity modulus, is often well determined (within a few per cent). 
Because inversion of coseismic geodetic data assuming a layered medium is impracticable, we suggest that, when layering is important, the proper approach is a case-by-case study involving some kind of recursive determination of fault parameters through data correction.
Strong liquid-crystalline polymeric compositions
Dowell, Flonnie
1993-01-01
Strong liquid-crystalline polymeric (LCP) compositions of matter. LCP backbones are combined with liquid crystalline (LC) side chains in a manner which maximizes molecular ordering through interdigitation of the side chains, thereby yielding materials which are predicted to have superior mechanical properties over existing LCPs. The theoretical design of LCPs having such characteristics includes consideration of the spacing distance between side chains along the backbone, the need for rigid sections in the backbone and in the side chains, the degree of polymerization, the length of the side chains, the regularity of the spacing of the side chains along the backbone, the interdigitation of side chains in sub-molecular strips, the packing of the side chains on one or two sides of the backbone to which they are attached, the symmetry of the side chains, the points of attachment of the side chains to the backbone, the flexibility and size of the chemical group connecting each side chain to the backbone, the effect of semiflexible sections in the backbone and the side chains, and the choice of types of dipolar and/or hydrogen bonding forces in the backbones and the side chains for easy alignment.
Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao
2017-09-01
Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact the relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image, based on multiview feature embedding (MFE) and locality-sensitive autoencoders (LSAEs). We first derive a manifold-regularized sparse low-rank approximation for MFE, so that the input image is characterized by a fused feature descriptor; the fused feature and its corresponding 3D pose are then separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of the proposed method.
2014-01-01
Background The built environment in which older people live plays an important role in promoting or inhibiting physical activity. Most work on this complex relationship between physical activity and the environment has excluded people with reduced physical function or ignored the difference between groups with different levels of physical function. This study aims to explore the role of neighbourhood green space in determining levels of participation in physical activity among elderly men with different levels of lower extremity physical function. Method Using data collected from the Caerphilly Prospective Study (CaPS) and green space data collected from high resolution Landmap true colour aerial photography, we first investigated the effect of the quantity of neighbourhood green space and the variation in neighbourhood vegetation on participation in physical activity for 1,010 men aged 66 and over in Caerphilly county borough, Wales, UK. Second, we explored whether neighbourhood green space affects groups with different levels of lower extremity physical function in different ways. Results Increasing percentage of green space within a 400 meters radius buffer around the home was significantly associated with more participation in physical activity after adjusting for lower extremity physical function, psychological distress, general health, car ownership, age group, marital status, social class, education level and other environmental factors (OR = 1.21, 95% CI 1.05, 1.41). A statistically significant interaction between the variation in neighbourhood vegetation and lower extremity physical function was observed (OR = 1.92, 95% CI 1.12, 3.28). Conclusion Elderly men living in neighbourhoods with more green space have higher levels of participation in regular physical activity. The association between variation in neighbourhood vegetation and regular physical activity varied according to lower extremity physical function. 
Subjects reporting poor lower extremity physical function living in neighbourhoods with more homogeneous vegetation (i.e. low variation) were more likely to participate in regular physical activity than those living in neighbourhoods with less homogeneous vegetation (i.e. high variation). Good lower extremity physical function reduced the adverse effect of high variation vegetation on participation in regular physical activity. This provides a basis for the future development of novel interventions that aim to increase levels of physical activity in later life, and has implications for planning policy to design, preserve, facilitate and encourage the use of green space near home. PMID:24646136
Gong, Yi; Gallacher, John; Palmer, Stephen; Fone, David
2014-03-19
Scaled lattice fermion fields, stability bounds, and regularity
NASA Astrophysics Data System (ADS)
O'Carroll, Michael; Faria da Veiga, Paulo A.
2018-02-01
We consider locally gauge-invariant lattice quantum field theory models with locally scaled Wilson-Fermi fields in d = 1, 2, 3, 4 spacetime dimensions. The use of scaled fermions preserves Osterwalder-Seiler positivity and the spectral content of the models (the decay rates of correlations are unchanged in the infinite lattice). In addition, it results in less singular, more regular behavior in the continuum limit. Precisely, we treat general fermionic gauge and purely fermionic lattice models in an imaginary-time functional integral formulation. Starting with a hypercubic finite lattice Λ ⊂ (aZ)^d, a ∈ (0, 1], and considering the partition function of non-Abelian and Abelian gauge models (the free fermion case is included) neglecting the pure gauge interactions, we obtain stability bounds uniformly in the lattice spacing a ∈ (0, 1]. These bounds imply, at least in the subsequential sense, the existence of the thermodynamic (Λ ↗ (aZ)^d) and the continuum (a ↘ 0) limits. Specializing to the U(1) gauge group, the known non-intersecting loop expansion for the d = 2 partition function is extended to d = 3, and the thermodynamic limit of the free energy is shown to exist with a bound independent of a ∈ (0, 1]. In the case of scaled free Fermi fields (corresponding to a trivial gauge group with only the identity element), spectral representations are obtained for the partition function, free energy, and correlations. The thermodynamic and continuum limits of the free fermion free energy are shown to exist. The thermodynamic limit of n-point correlations also exists, with bounds independent of the point locations and of a ∈ (0, 1], and with no n! dependence. A time-zero Hilbert-Fock space is also constructed, as well as time-zero, spatially pointwise scaled fermion creation operators, which are shown to be norm bounded uniformly in a ∈ (0, 1]. Using our scaled fields from the beginning allows us to extract and isolate the singularities of the free energy as a ↘ 0.
Pointwise regularity of parameterized affine zipper fractal curves
NASA Astrophysics Data System (ADS)
Bárány, Balázs; Kiss, Gergely; Kolossváry, István
2018-05-01
We study the pointwise regularity of zipper fractal curves generated by affine mappings. Under the assumption of dominated splitting of index-1, we calculate the Hausdorff dimension of the level sets of the pointwise Hölder exponent for a subinterval of the spectrum. We give an equivalent characterization for the existence of regular pointwise Hölder exponent for Lebesgue almost every point. In this case, we extend the multifractal analysis to the full spectrum. In particular, we apply our results for de Rham’s curve.
A regularity condition and temporal asymptotics for chemotaxis-fluid equations
NASA Astrophysics Data System (ADS)
Chae, Myeongju; Kang, Kyungkeun; Lee, Jihoon; Lee, Ki-Ahm
2018-02-01
We consider two-dimensional chemotaxis equations coupled to the Navier-Stokes equations. We present a new regularity criterion that is localized in a neighborhood of each point. Secondly, we establish temporal decay of the regular solutions under the assumption that the initial mass of the biological cell density is sufficiently small. Both results improve previously known results given in Chae et al (2013 Discrete Continuous Dyn. Syst. A 33 2271-97) and Chae et al (2014 Commun. PDE 39 1205-35).
Reverse bifurcation and fractal of the compound logistic map
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Liang, Qingyong
2008-07-01
The nature of the fixed points of the compound logistic map is investigated, and the boundary equation of the first bifurcation of the map in parameter space is given. Using quantitative criteria for chaotic systems, we reveal the general features of the compound logistic map as it passes from regularity to chaos, and draw the following conclusions: (1) chaotic patterns of the map may emerge out of double-periodic bifurcation, and (2) chaotic crisis phenomena and reverse bifurcation are found. We also analyze the orbit of the critical point of the compound logistic map and put forward a definition of the Mandelbrot-Julia set of the compound logistic map. We generalize Welstead and Cromer's periodic scanning technique and use it to construct a series of Mandelbrot-Julia sets of the compound logistic map. We investigate the symmetry of the Mandelbrot-Julia set, study the topological inflexibility of the distribution of period regions in the Mandelbrot set, and find, by qualitatively constructing whole portraits of Julia sets from the Mandelbrot set, that the Mandelbrot set contains abundant information about the structure of Julia sets.
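The period-doubling route from regularity to chaos in conclusion (1) can be seen in a few lines of code. The sketch below uses the ordinary logistic map x → rx(1−x), since the abstract does not give the explicit form of the compound map; the parameter values are illustrative.

```python
# Detect the period of the attractor of the logistic map x -> r*x*(1-x):
# iterate past a long transient, then look for the smallest cycle length.
def attractor_period(r, x0=0.5, transient=2000, max_period=16, tol=1e-9):
    """Smallest period of the attracting orbit, or None if chaotic/long."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None  # no short period found: chaotic or long-periodic

print(attractor_period(2.8))  # stable fixed point: period 1
print(attractor_period(3.2))  # after the first bifurcation: period 2
print(attractor_period(3.5))  # after the second bifurcation: period 4
```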
Analyzing linear spatial features in ecology.
Buettel, Jessie C; Cole, Andrew; Dickey, John M; Brook, Barry W
2018-06-01
The spatial analysis of dimensionless points (e.g., tree locations on a plot map) is common in ecology, for instance using point-process statistics to detect and compare patterns. However, the treatment of one-dimensional linear features (fiber processes) is rarely attempted. Here we appropriate the methods of vector sums and dot products, used regularly in fields like astrophysics, to analyze a data set of mapped linear features (logs) measured in 12 × 1-ha forest plots. For this demonstrative case study, we ask two deceptively simple questions: do trees tend to fall downhill, and if so, does slope gradient matter? Despite noisy data and many potential confounders, we show clearly that the topography (slope direction and steepness) of forest plots does matter to treefall. More generally, these results underscore the value of the mathematical methods of physics for problems in the spatial analysis of linear features, and the opportunities that interdisciplinary collaboration provides. This work provides scope for a variety of future ecological analyses of fiber processes in space. © 2018 by the Ecological Society of America.
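The vector-sum and dot-product approach named above can be sketched simply: treat each fallen log as a unit vector of fall direction and project it onto the downslope unit vector; a mean projection near +1 means trees fall downhill. The bearings below are hypothetical, not the paper's data.

```python
# Mean dot product between log fall directions and the downslope direction.
import math

def downhill_score(fall_bearings_deg, downslope_bearing_deg):
    """+1 if all logs point straight downhill, ~0 if there is no tendency."""
    ds = math.radians(downslope_bearing_deg)
    downslope = (math.sin(ds), math.cos(ds))  # (east, north) components
    total = 0.0
    for b in fall_bearings_deg:
        t = math.radians(b)
        total += math.sin(t) * downslope[0] + math.cos(t) * downslope[1]
    return total / len(fall_bearings_deg)

# Logs falling roughly towards bearing 180 deg on a south-facing slope:
print(downhill_score([170, 185, 200, 178], 180))  # close to 1
print(downhill_score([0, 90, 180, 270], 180))     # ~0: no preference
```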
Simplified Phase Diversity algorithm based on a first-order Taylor expansion.
Zhang, Dong; Zhang, Xiaobin; Xu, Shuyan; Liu, Nannan; Zhao, Luoxin
2016-10-01
We present a simplified solution to phase diversity when the observed object is a point source. It utilizes an iterative linearization of the point spread function (PSF) at two or more diverse planes by first-order Taylor expansion to reconstruct the initial wavefront. To enhance the influence of the PSF in the defocused plane, which is usually very dim compared to that in the focal plane, we build a new model with a Tikhonov regularization function. The new model can not only increase the computational speed but also reduce the influence of noise. Using the PSFs obtained from Zemax, we reconstruct the wavefront of the Hubble Space Telescope (HST) at the edge of the field of view (FOV) when the telescope is in either the nominal state or a misaligned state. We also set up an experiment, consisting of an imaging system and a deformable mirror, to validate the correctness of the presented model. The results show that the new model can improve the computational speed while maintaining high wavefront detection accuracy.
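Each linearized iteration of such a scheme reduces to a Tikhonov-regularized least-squares step. A minimal sketch, with a hypothetical Jacobian `A` and residual `b` standing in for the linearized PSF model (not the paper's actual operator):

```python
# One Tikhonov-regularized step: minimize ||A x - b||^2 + lam ||x||^2
# by solving the regularized normal equations (A^T A + lam I) x = A^T b.
import numpy as np

def tikhonov_step(A, b, lam):
    """Regularized least-squares solution for one linearized iteration."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 5))          # hypothetical Jacobian
x_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3])
b = A @ x_true + 0.01 * rng.standard_normal(40)  # noisy residual data

x_hat = tikhonov_step(A, b, lam=1e-3)
print(np.round(x_hat, 2))  # close to x_true
```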
Permanent Monitoring of the Reference Point of the 20m Radio Telescope Wettzell
NASA Technical Reports Server (NTRS)
Neidhardt, Alexander; Losler, Michael; Eschelbach, Cornelia; Schenk, Andreas
2010-01-01
To achieve the goals of the VLBI2010 project and the Global Geodetic Observing System (GGOS), automated monitoring of the reference points of the various geodetic space techniques, including Very Long Baseline Interferometry (VLBI), is desirable. The resulting permanent monitoring of the local-tie vectors at co-location stations is essential to reach the sub-millimeter level in the combinations. For this reason a monitoring system was installed at the Geodetic Observatory Wettzell by the Geodetic Institute of the University of Karlsruhe (GIK) to observe the 20m VLBI radio telescope from May to August 2009. Software specially developed by GIK collected data from automated total station measurements, meteorological sensors, and sensors in the telescope monument (e.g., Invar cable data). A real-time visualization directly offered a live view of the measurements during regular observation operations. Additional scintillometer measurements allowed refraction corrections during post-processing. This project is one of the first feasibility studies aimed at determining significant deformations of the VLBI antenna due to, for instance, changes in temperature.
NASA Astrophysics Data System (ADS)
Rostworowski, A.
2007-01-01
We adopt Leaver's [E. Leaver, Proc. R. Soc. Lond. A 402, 285 (1985)] method to determine quasinormal frequencies of the Schwarzschild black hole in higher (D ≥ 10) dimensions. In the D-dimensional Schwarzschild metric, as D increases, more and more singularities, spaced uniformly on the unit circle |r| = 1, approach the horizon at r = r_h = 1. Thus, a solution satisfying the outgoing-wave boundary condition at the horizon must be continued to some mid point, and only then can the continued-fraction condition be applied. This prescription is general and applies to all cases in which, due to regular singularities on the way from the point of interest to the irregular singularity, Leaver's method in its original setting breaks down. We illustrate the method by calculating gravitational vector and tensor quasinormal frequencies of the Schwarzschild black hole in D = 11 and D = 10 dimensions. We also give the details for the D = 9 case, considered in the work of P. Bizoń, T. Chmaj, A. Rostworowski, B. G. Schmidt and Z. Tabor, Phys. Rev. D 72, 121502(R) (2005).
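Numerically, the continued-fraction condition at the heart of Leaver's method is evaluated with standard algorithms such as modified Lentz. A sketch of that evaluator, demonstrated on a continued fraction with a known closed form (the golden ratio, φ = 1 + 1/(1 + 1/(1 + …))) rather than on the quasinormal-mode fraction itself:

```python
# Modified Lentz algorithm for b0 + a(1)/(b(1) + a(2)/(b(2) + ...)).
def lentz(b0, a, b, max_iter=200, tiny=1e-30, tol=1e-14):
    """Evaluate a continued fraction; a(n), b(n) give the n-th coefficients."""
    f = b0 if b0 != 0 else tiny
    C, D = f, 0.0
    for n in range(1, max_iter + 1):
        D = b(n) + a(n) * D
        D = tiny if D == 0 else D
        C = b(n) + a(n) / C
        C = tiny if C == 0 else C
        D = 1.0 / D
        delta = C * D
        f *= delta
        if abs(delta - 1.0) < tol:
            return f
    return f

phi = lentz(1.0, lambda n: 1.0, lambda n: 1.0)
print(phi)  # ~1.6180339887 (the golden ratio)
```

In a quasinormal-mode search, a root finder would vary the frequency until the corresponding continued fraction evaluated this way vanishes.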
Shapes on a plane: Evaluating the impact of projection distortion on spatial binning
Battersby, Sarah E.; Strebe, Daniel “daan”; Finn, Michael P.
2017-01-01
One method for working with large, dense sets of spatial point data is to aggregate the measure of the data into polygonal containers, such as political boundaries, or into regular spatial bins such as triangles, squares, or hexagons. When mapping these aggregations, the map projection must inevitably distort relationships. This distortion can impact the reader’s ability to compare count and density measures across the map. Spatial binning, particularly via hexagons, is becoming a popular technique for displaying aggregate measures of point data sets. Increasingly, we see questionable use of the technique without attendant discussion of its hazards. In this work, we discuss when and why spatial binning works and how mapmakers can better understand the limitations caused by distortion from projecting to the plane. We introduce equations for evaluating distortion’s impact on one common projection (Web Mercator) and discuss how the methods used generalize to other projections. While we focus on hexagonal binning, these same considerations affect spatial bins of any shape, and more generally, any analysis of geographic data performed in planar space.
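For intuition about the magnitude of the problem on Web Mercator: the linear scale factor grows as sec(latitude), so a fixed-size planar bin corresponds to a ground area smaller by cos²(latitude). This standard relation (a simplification, not the paper's full equations) can be sketched as:

```python
# Area inflation of the (Web) Mercator projection as a function of latitude:
# projected area / true ground area = sec^2(latitude).
import math

def mercator_area_inflation(lat_deg):
    """Ratio of projected area to true ground area at a given latitude."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

for lat in (0, 30, 60):
    print(lat, round(mercator_area_inflation(lat), 3))
# Equal-looking hexagons at 60 degrees latitude cover only a quarter of
# the ground area that identical hexagons cover at the equator.
```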
Exobiology, SETI, von Neumann and geometric phase control.
Hansson, P A
1995-11-01
The central difficulties confronting us at present in exobiology are the problems of the physical forces that sustain three-dimensional organisms, i.e., how one-dimensional systems with only nearest-neighbour interactions and two-dimensional ones with their regular vibrations result in an integrated three-dimensional functionality. For example, a human lung has a dimensionality of 2.9 and thus should be measured in m^2.9. According to thermodynamics, the first life-like system should have a small number of degrees of freedom, so how can evolution, via cycles of matter, lead to intelligence and theoretical knowledge? Or, more generally, what mechanisms constrain and drive this evolution? We are now on the brink of reaching an understanding below the photon level, into the domain where quantum events implode to the geometric phase which maintains the history of a quantum object. Even if this would exclude point-to-point communication, it could make it possible to manipulate the molecular level from below, in the physical scale, and result in a new era of geometricised engineering. As such, it would have a significant impact on space exploration and exobiology.
1995-06-06
The crew patch of STS-73, the second flight of the United States Microgravity Laboratory (USML-2), depicts the Space Shuttle Columbia in the vastness of space. In the foreground are the classic regular polyhedra that were investigated by Plato and later Euclid. The Pythagoreans were also fascinated by these symmetrical three-dimensional objects, whose faces are congruent regular polygons. The tetrahedron, the cube, the octahedron, and the icosahedron were each associated with the Natural Elements of that time: fire (on this mission represented as combustion science), Earth (crystallography), and air and water (fluid physics). An additional icon, shown as the infinity symbol, was added to further convey the discipline of fluid mechanics. The shape of the emblem represents a fifth polyhedron, the dodecahedron, which the Pythagoreans thought corresponded to a fifth element representing the cosmos.
NASA Astrophysics Data System (ADS)
Deng, Shuxian; Ge, Xinxin
2017-10-01
We consider the non-Newtonian fluid equation for incompressible porous media. Using properties of operator semigroups and measure spaces, a squeezing (compactness) principle, Fourier analysis, and a priori estimates in the measure space, we discuss the well-posedness of the solution of the equation, its asymptotic behavior, and its topological properties. Through a diffusion regularization method and a compactness argument, we study the overall decay rate of the solution in a certain space when the initial value is sufficiently small. A decay estimate for the solution of the incompressible seepage equation is obtained, and the asymptotic behavior of the solution is derived using a double regularization model and the Duhamel principle.
NASA Astrophysics Data System (ADS)
Petrie, Gordon; Pevtsov, Alexei; Schwarz, Andrew; DeRosa, Marc
2018-06-01
The solar photospheric magnetic flux distribution is key to structuring the global solar corona and heliosphere. Regular full-disk photospheric magnetogram data are therefore essential to our ability to model and forecast heliospheric phenomena such as space weather. However, our spatio-temporal coverage of the photospheric field is currently limited by our single vantage point at/near Earth. In particular, the polar fields play a leading role in structuring the large-scale corona and heliosphere, but each pole is unobservable for more than 6 months per year. Here we model the possible effect of full-disk magnetogram data from the Lagrange points L4 and L5, each extending longitude coverage by 60°. Adding data also from the more distant point L3 extends the longitudinal coverage much further. The additional vantage points also improve the visibility of the globally influential polar fields. Using a flux-transport model for the solar photospheric field, we model full-disk observations from Earth/L1, L3, L4, and L5 over a solar cycle, construct synoptic maps using a novel weighting scheme adapted for merging magnetogram data from multiple viewpoints, and compute potential-field models for the global coronal field. Each additional viewpoint brings the maps and models into closer agreement with the reference field from the flux-transport simulation, with particular improvement at polar latitudes, the main source of the fast solar wind.
Lammer, Jan; Prager, Sonja G.; Cheney, Michael C.; Ahmed, Amel; Radwan, Salma H.; Burns, Stephen A.; Silva, Paolo S.; Sun, Jennifer K.
2016-01-01
Purpose To determine whether cone density, spacing, or regularity in eyes with and without diabetes (DM) as assessed by high-resolution adaptive optics scanning laser ophthalmoscopy (AOSLO) correlates with presence of diabetes, diabetic retinopathy (DR) severity, or presence of diabetic macular edema (DME). Methods Participants with type 1 or 2 DM and healthy controls underwent AOSLO imaging of four macular regions. Cone assessment was performed by independent graders for cone density, packing factor (PF), nearest neighbor distance (NND), and Voronoi tile area (VTA). Regularity indices (mean/SD) of NND (RI-NND) and VTA (RI-VTA) were calculated. Results Fifty-three eyes (53 subjects) were assessed. Mean ± SD age was 44 ± 12 years; 81% had DM (duration: 22 ± 13 years; glycated hemoglobin [HbA1c]: 8.0 ± 1.7%; DM type 1: 72%). No significant relationship was found between DM, HbA1c, or DR severity and cone density or spacing parameters. However, decreased regularity of cone arrangement in the macular quadrants was correlated with presence of DM (RI-NND: P = 0.04; RI-VTA: P = 0.04), increasing DR severity (RI-NND: P = 0.04), and presence of DME (RI-VTA: P = 0.04). Eyes with DME were associated with decreased density (P = 0.04), PF (P = 0.03), and RI-VTA (0.04). Conclusions Although absolute cone density and spacing don't appear to change substantially in DM, decreased regularity of the cone arrangement is consistently associated with the presence of DM, increasing DR severity, and DME. Future AOSLO evaluation of cone regularity is warranted to determine whether these changes are correlated with, or predict, anatomic or functional deficits in patients with DM. PMID:27926754
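The regularity indices reported above are defined as mean/SD of a local spacing measure. A small sketch of RI-NND on hypothetical cone positions (real cone mosaics are two-dimensional point sets; the tidy hand-picked coordinates here are only for illustration):

```python
# Regularity index of nearest-neighbor distances (RI-NND): the mean NND
# divided by its standard deviation. A regular mosaic scores high, a
# disordered one low.
import math

def ri_nnd(points):
    """mean(NND) / SD(NND) for a list of (x, y) points."""
    nnds = []
    for i, p in enumerate(points):
        d = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        nnds.append(d)
    mean = sum(nnds) / len(nnds)
    var = sum((d - mean) ** 2 for d in nnds) / len(nnds)
    return mean / math.sqrt(var)

near_regular = [(0, 0), (1.1, 0), (2, 0), (3.05, 0)]  # nearly even spacing
irregular    = [(0, 0), (0.2, 0), (1, 0), (3, 0)]     # clumped + gapped
print(ri_nnd(near_regular))  # high (roughly 11)
print(ri_nnd(irregular))     # low (roughly 1.1)
```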
Lammer, Jan; Prager, Sonja G; Cheney, Michael C; Ahmed, Amel; Radwan, Salma H; Burns, Stephen A; Silva, Paolo S; Sun, Jennifer K
2016-12-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumagai, Tomo'omi; Mudd, Ryan; Miyazawa, Yoshiyuki
We developed a soil-vegetation-atmosphere transfer (SVAT) model applicable to simulating CO2 and H2O fluxes from the canopies of rubber plantations, which are characterized by distinct canopy clumping produced by the regular spacing of plantation trees. Rubber (Hevea brasiliensis Müll. Arg.) plantations, which are rapidly expanding into both climatically optimal and sub-optimal environments throughout mainland Southeast Asia, potentially change the partitioning of water, energy, and carbon at multiple scales compared with the traditional land covers they are replacing. Describing the biosphere-atmosphere exchange in rubber plantations via SVAT modeling is therefore essential to understanding these impacts on environmental processes. The regular spacing of plantation trees creates a peculiar canopy structure that is not well represented in most SVAT models, which generally assume non-uniform spacing of vegetation. Here we develop a SVAT model applicable to rubber plantations, together with an evaluation method for their canopy structure, and examine how the peculiar canopy structure of rubber plantations affects canopy CO2 and H2O exchanges. Model results are compared with measurements collected at a field site in central Cambodia. Our findings suggest that it is crucial to account for intensive canopy clumping in order to reproduce observed rubber plantation fluxes. These results suggest a potentially optimal spacing of rubber trees to produce high productivity and water use efficiency.
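One common way canopy clumping enters flux models of this general kind is through a clumping index Ω in Beer-Lambert light attenuation, I/I₀ = exp(−k·Ω·LAI): Ω < 1 (clumped crowns, as with regularly spaced plantation trees) lets more light penetrate than a uniform canopy with the same leaf area. The sketch below is illustrative only; the extinction coefficient and LAI values are hypothetical and the paper's model is more detailed.

```python
# Beer-Lambert canopy light attenuation with a clumping index Omega.
import math

def transmitted_fraction(lai, k=0.5, omega=1.0):
    """Fraction of incident light reaching the ground below the canopy."""
    return math.exp(-k * omega * lai)

uniform = transmitted_fraction(lai=4.0, omega=1.0)  # homogeneous canopy
clumped = transmitted_fraction(lai=4.0, omega=0.7)  # clumped canopy
print(uniform, clumped)  # the clumped canopy transmits more light
```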
Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System
NASA Astrophysics Data System (ADS)
Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai
2017-12-01
The parachute system is widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influence of the initiation point to ensure successful dynamic detonation. In practice, the initiating position falls within an area in the fuel cloud, owing to errors in the influencing factors. In this paper, the major factors influencing the initiation point were explored with airdrop tests, and the relationship between the initiation point area and these factors was obtained. Based on this relationship, a volume equation for the initiation point area was established to predict the range of the initiation point in the fuel. The analysis showed that the area in which the initiation point appears is scattered on account of errors in the attitude angle, the secondary initiation charge velocity, and the delay time. The attitude angle was the major influencing factor on the horizontal axis, whereas the secondary initiation charge velocity and the delay time were the major influencing factors on the vertical axis. Overall, the geometry of the initiation point area is a sector, coupled with the errors of the attitude angle, secondary initiation charge velocity, and delay time.
Jones, Malia; Pebley, Anne R
2014-06-01
Research on neighborhood effects has focused largely on residential neighborhoods, but people are exposed to many other places in the course of their daily lives: at school, at work, when shopping, and so on. Thus, studies of residential neighborhoods consider only a subset of the social-spatial environment affecting individuals. In this article, we examine the characteristics of adults' "activity spaces" (spaces defined by locations that individuals visit regularly) in Los Angeles County, California. Using geographic information system (GIS) methods, we define activity spaces in two ways and estimate their socioeconomic characteristics. Our research has two goals. First, we determine whether residential neighborhoods represent the social conditions to which adults are exposed in the course of their regular activities. Second, we evaluate whether particular groups are exposed to a broader or narrower range of social contexts in the course of their daily activities. We find that activity spaces are substantially more heterogeneous in terms of key social characteristics, compared to residential neighborhoods. However, the characteristics of both home neighborhoods and activity spaces are closely associated with individual characteristics. Our results suggest that most people experience substantial segregation across the range of spaces in their daily lives, not just at home.
NASA Astrophysics Data System (ADS)
Bassrei, A.; Terra, F. A.; Santos, E. T.
2007-12-01
Inverse problems in applied geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, which are a particular case of a more general family of techniques known as regularization. Regularization by derivative matrices has an input parameter, called the regularization parameter, whose choice is itself a problem. A heuristic approach, later called the L-curve, was suggested in the 1970s to provide the optimum regularization parameter. The L-curve is a parametric curve in which each point is associated with a parameter λ. The horizontal axis represents the error between the observed and calculated data, and the vertical axis represents the norm of the product of the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including geophysical ones. However, visualizing the knee is not always an easy task, especially when the L-curve does not have the L shape. In this work, three methodologies are employed to search for and obtain the optimal regularization parameter from the L-curve. The first criterion uses Hansen's toolbox, which extracts λ automatically. The second criterion consists of visually extracting the optimal parameter. The third criterion constructs the first derivative of the L-curve and then automatically extracts the inflexion point. The L-curve with the three criteria above was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, both noise-free and corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results.
The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
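The automatic knee extraction described above can be sketched numerically. The snippet below is a hypothetical illustration, not the authors' code, and uses the closely related maximum-curvature criterion on the log-log parametric curve (the residual and solution norms for a λ grid are assumed to be already computed):

```python
import numpy as np

def lcurve_knee(residual_norms, solution_norms, lambdas):
    """Locate the L-curve knee as the point of maximum curvature of the
    discretely sampled log-log parametric curve (||Ax-b||, ||Lx||)."""
    x = np.log(np.asarray(residual_norms, dtype=float))
    y = np.log(np.asarray(solution_norms, dtype=float))
    # Derivatives with respect to the (uniform) curve parameter index.
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return lambdas[int(np.argmax(curvature))]

# Synthetic L-shaped curve with a knee near lambda = 1:
lambdas = np.logspace(-4, 2, 301)
res = lambdas / (1.0 + lambdas)   # residual norm grows with lambda
sol = 1.0 / (1.0 + lambdas)       # solution seminorm shrinks with lambda
knee = lcurve_knee(res, sol, lambdas)
```

For curves without a clean L shape, the curvature profile is flatter and the argmax less well-defined, which mirrors the visualization difficulty noted in the abstract.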
NASA Astrophysics Data System (ADS)
Wichmann, Andreas; Kada, Martin
2016-06-01
There are many applications for 3D city models, e.g., in visualizations, analysis, and simulations; each one requiring a certain level of detail to be effective. The overall trend goes towards including various kinds of anthropogenic and natural objects therein with ever increasing geometric and semantic details. A few years back, the featured 3D building models had only coarse roof geometry. But nowadays, they are expected to include detailed roof superstructures like dormers and chimneys. Several methods have been proposed for the automatic reconstruction of 3D building models from airborne point clouds. However, they are usually unable to reliably recognize and reconstruct small roof superstructures, as these objects are often represented by only a few point measurements, especially in low-density point clouds. In this paper, we propose a recognition and reconstruction approach that overcomes this problem by identifying and simultaneously reconstructing regularized superstructures of similar shape. For this purpose, candidate areas for superstructures are detected by taking into account virtual sub-surface points that are assumed to lie on the main roof faces below the measured points. The areas with similar superstructures are detected, extracted, grouped together, and registered to one another with the Iterative Closest Point (ICP) algorithm. As an outcome, the joint point density of each detected group is increased, which helps to recognize the shape of the superstructure more reliably and in more detail. Finally, all instances of each group of superstructures are modeled at once and transformed back to their original position. Because superstructures are reconstructed in groups, symmetries, alignments, and regularities can be enforced in a straightforward way. The validity of the approach is presented on a number of example buildings from the Vaihingen test data set.
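The ICP registration step can be sketched with a minimal rigid 2-D variant (a hypothetical illustration, not the authors' implementation): nearest-neighbour correspondences from a k-d tree, followed by the optimal rotation from the SVD-based Kabsch solution, iterated to convergence.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=30):
    """Rigidly align point set `src` (Nx2) to `dst` (Mx2) by iterating
    nearest-neighbour matching and the Kabsch/SVD rotation estimate."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour correspondences
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)         # Kabsch: optimal rotation
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # exclude reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t                 # apply the increment
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

A quick check: slightly rotating and shifting a point cloud, then recovering the alignment.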
Vacuum polarization and classical self-action near higher-dimensional defects
NASA Astrophysics Data System (ADS)
Grats, Yuri V.; Spirin, Pavel
2017-02-01
We analyze the gravity-induced effects associated with a massless scalar field in a higher-dimensional spacetime that is the tensor product of (d-n)-dimensional Minkowski space and an n-dimensional spherically/cylindrically symmetric space with a solid/planar angle deficit. These spacetimes are considered as simple models for a multidimensional global monopole (if n ≥ 3) or cosmic string (if n = 2) with (d-n-1) flat extra dimensions. Thus, we refer to them as conical backgrounds. In terms of the angular-deficit value, we derive the perturbative expression for the scalar Green function, valid for any d ≥ 3 and 2 ≤ n ≤ d-1, and compute it to the leading order. With the use of this Green function we compute the renormalized vacuum expectation value of the field square ⟨φ²(x)⟩_ren and the renormalized vacuum average of the scalar-field energy-momentum tensor ⟨T_MN(x)⟩_ren for arbitrary d and n from the interval mentioned above and arbitrary coupling constant to the curvature ξ. In particular, we revisit the computation of the vacuum polarization effects for a non-minimally coupled massless scalar field in the spacetime of a straight cosmic string. The same Green function enables us to consider the old, purely classical problem of the gravity-induced self-action of a classical point-like scalar or electric charge placed at rest at some fixed point of the space under consideration. To deal with the divergences that appear in both problems, we apply the dimensional-regularization technique widely used in quantum field theory. The explicit dependence of the results upon the dimensionalities of both the bulk and the conical submanifold is discussed.
An improved numerical method for the kernel density functional estimation of disperse flow
NASA Astrophysics Data System (ADS)
Smith, Timothy; Ranjan, Reetesh; Pantano, Carlos
2014-11-01
We present an improved numerical method to solve the transport equation for the one-point particle density function (pdf), which can be used to model disperse flows. The transport equation, a hyperbolic partial differential equation (PDE) with a source term, is derived from the Lagrangian equations for a dilute particle system by treating position and velocity as state-space variables. The method approximates the pdf by a discrete mixture of kernel density functions (KDFs) with space and time varying parameters and performs a global Rayleigh-Ritz like least-square minimization on the state-space of velocity. Such an approximation leads to a hyperbolic system of PDEs for the KDF parameters that cannot be written completely in conservation form. This system is solved using a numerical method that is path-consistent, according to the theory of non-conservative hyperbolic equations. The resulting formulation is a Roe-like update that utilizes the local eigensystem information of the linearized system of PDEs. We will present the formulation of the base method, its higher-order extension and further regularization to demonstrate that the method can predict statistics of disperse flows in an accurate, consistent and efficient manner. This project was funded by NSF Project NSF-DMS 1318161.
Feature Relevance Assessment of Multispectral Airborne LIDAR Data for Tree Species Classification
NASA Astrophysics Data System (ADS)
Amiri, N.; Heurich, M.; Krzystek, P.; Skidmore, A. K.
2018-04-01
The presented experiment investigates the potential of Multispectral Laser Scanning (MLS) point clouds for single tree species classification. The basic idea is to simulate an MLS sensor by combining two different Lidar sensors providing three different wavelengths. The available data were acquired in summer 2016 on the same date in leaf-on condition, with an average point density of 37 points/m². For the purpose of classification, we segmented the combined 3D point clouds, consisting of three different spectral channels, into 3D clusters using the Normalized Cut segmentation approach. Then, we extracted four groups of features from the 3D point cloud space. Once a variety of features had been extracted, we applied forward stepwise feature selection in order to reduce the number of irrelevant or redundant features. For the classification, we used multinomial logistic regression with L1 regularization. Our study is conducted using 586 ground-measured single trees from 20 sample plots in the Bavarian Forest National Park in Germany. Due to a lack of reference data for some rare species, we focused on four classes of species. The results show an improvement of 4-10 percentage points for tree species classification by using MLS data in comparison to a single-wavelength approach. A cross-validated (15-fold) accuracy of 0.75 can be achieved when all feature sets from the three different spectral channels are used. Our results clearly indicate that the use of MLS point clouds has great potential to improve detailed forest species mapping.
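The classification pipeline described above (forward stepwise selection feeding an L1-regularized multinomial logistic regression) can be sketched with scikit-learn on synthetic data. Everything here is an assumption for illustration: the feature values, the 8-feature target, and the hyperparameters are invented, not taken from the study.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-segment Lidar features; 4 species classes.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# L1-penalized multinomial logistic regression (saga supports the L1 penalty).
base = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=2000)
pipe = make_pipeline(
    StandardScaler(),
    # Forward stepwise feature selection, as in the abstract.
    SequentialFeatureSelector(base, n_features_to_select=8,
                              direction="forward", cv=3),
    base,
)
pipe.fit(X_tr, y_tr)
print(f"held-out accuracy: {pipe.score(X_te, y_te):.2f}")
```

In a real setting the 15-fold cross-validation mentioned in the abstract would replace the single train/test split.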
13-Moment System with Global Hyperbolicity for Quantum Gas
NASA Astrophysics Data System (ADS)
Di, Yana; Fan, Yuwei; Li, Ruo
2017-06-01
We point out that the quantum Grad's 13-moment system (Yano in Physica A 416:231-241, 2014) lacks global hyperbolicity, and even worse, the thermodynamic equilibrium is not an interior point of the hyperbolicity region of the system. To remedy this problem, by fully considering Grad's expansion, we split the expansion into an equilibrium part and a non-equilibrium part, and propose a regularization for the system with the help of the new hyperbolic regularization theory developed in Cai et al. (SIAM J Appl Math 75(5):2001-2023, 2015) and Fan et al. (J Stat Phys 162(2):457-486, 2016). This provides us with a new model which is hyperbolic for all admissible thermodynamic states and meanwhile preserves the approximate accuracy of the original system. It should be noted that this procedure is not a trivial application of the hyperbolic regularization theory.
Thermodynamics and glassy phase transition of regular black holes
NASA Astrophysics Data System (ADS)
Javed, Wajiha; Yousaf, Z.; Akhtar, Zunaira
2018-05-01
This paper studies the thermodynamical properties of phase transitions for regular charged black holes (BHs). In this context, we have considered two different forms of BH metrics, supplemented with exponential and logistic distribution functions, and investigated the phase transition through the grand canonical ensemble. After exploring the corresponding Ehrenfest equations, we found a second-order phase transition at the critical points. In order to check the critical behavior of regular BHs, we evaluated the corresponding explicit relations for the critical temperature, pressure and volume, and drew graphs with constant values of the Smarr mass. We found that for the BH metric with the exponential configuration function, the phase transition curves are divergent near the critical points, while a glassy phase transition is observed for the Ayón-Beato-García-Bronnikov (ABGB) BH in n = 5 dimensions.
NASA Astrophysics Data System (ADS)
Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain
2017-11-01
The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both of these ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional-regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.
A new scoring method for evaluating the performance of earthquake forecasts and predictions
NASA Astrophysics Data System (ADS)
Zhuang, J.
2009-12-01
This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model for usual cases or the Omori-Utsu formula for forecasting aftershocks, which gives the probability p0 that at least one event occurs in a given space-time-magnitude window. The forecaster, like a gambler who starts with a certain number of reputation points, bets 1 reputation point on ``Yes'' or ``No'' according to his forecast, or bets nothing if he performs an NA-prediction. If the forecaster bets 1 reputation point on ``Yes'' and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on ``Yes''. In this way, if the reference model is correct, the expected return that he gains from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on ``Yes'' and 1-p on ``No''. In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest.
We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
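The betting rule above translates directly into code. A minimal sketch (function name and interface are invented for illustration): a stake of p on ``Yes'' earns (1-p0)/p0 per point staked when the event occurs, and the 1-p staked on ``No'' earns p0/(1-p0) per point when it does not.

```python
def gambling_score(bets, outcomes, p0):
    """Net reputation change over a sequence of binary forecast windows.

    bets     : probabilities the forecaster places on "Yes" (1.0 or 0.0
               reproduces deterministic Yes/No bets)
    outcomes : booleans, True if at least one event occurred in the window
    p0       : reference-model probabilities of "Yes" for each window
    """
    score = 0.0
    for p, occurred, q in zip(bets, outcomes, p0):
        if occurred:
            score += p * (1.0 - q) / q          # reward on the "Yes" stake
            score -= (1.0 - p)                  # the "No" stake is lost
        else:
            score -= p                          # the "Yes" stake is lost
            score += (1.0 - p) * q / (1.0 - q)  # reward on the "No" stake
    return score
```

Fairness check: if the events really occur with the reference probabilities, the expected pay-off of any betting strategy is zero, since q · (1-q)/q - (1-q) = 0 on the ``Yes'' side and symmetrically on the ``No'' side.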
46 CFR 177.500 - Means of escape.
Code of Federal Regulations, 2012 CFR
2012-10-01
... this section, each space accessible to passengers or used by the crew on a regular basis, must have at... escape must be widely separated and, if possible, at opposite ends or sides of the space to minimize the... windows. (d) The number and dimensions of the means of escape from each space must be sufficient for rapid...
46 CFR 116.500 - Means of escape.
Code of Federal Regulations, 2010 CFR
2010-10-01
... this section, each space accessible to passengers or used by the crew on a regular basis, must have at... escape must be widely separated and, if possible, at opposite ends or sides of the space to minimize the... windows. (d) The number and dimensions of the means of escape from each space must be sufficient for rapid...
ERIC Educational Resources Information Center
Reynolds, Thomas D.; And Others
This compilation of 138 problems illustrating applications of high school mathematics to various aspects of space science is intended as a resource from which the teacher may select questions to supplement his regular course. None of the problems require a knowledge of calculus or physics, and solutions are presented along with the problem…
Clinical implementation of stereotaxic brain implant optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenow, U.F.; Wojcicka, J.B.
1991-03-01
This optimization method for stereotaxic brain implants is based on seed/strand configurations of the basic type developed for the National Cancer Institute (NCI) atlas of regular brain implants. Irregular target volume shapes are determined from delineation in a stack of contrast-enhanced computed tomography scans. The neurosurgeon may then select up to ten directions, or entry points, of surgical approach, of which the program finds the optimal one under the criterion of smallest target volume diameter. Target volume cross sections are then reconstructed in 5-mm-spaced planes perpendicular to the implantation direction defined by the entry point and the target volume center. This information is used to define a closed line in an implant cross section along which peripheral seed strands are positioned and which now has an irregular shape. Optimization points are defined opposite peripheral seeds on the target volume surface to which the treatment dose rate is prescribed. Three different optimization algorithms are available: linear least-squares programming, quadratic programming with constraints, and a simplex method. The optimization routine is implemented in a commercial treatment planning system. It generates coordinate and source strength information for the optimized seed configurations for further dose rate distribution calculation with the treatment planning system, and also the coordinate settings for the stereotaxic Brown-Roberts-Wells (BRW) implantation device.
The research on calibration methods of dual-CCD laser three-dimensional human face scanning system
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong
2013-09-01
In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification. From these, the corresponding epipolar equation of the two cameras can be defined. Thus, by utilizing the trigonometric parallax method, we can measure the spatial position of a point after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method improves accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates using only planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. It is found that during the experiment, two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning and binocular stereo vision, has the advantages of both. It has more flexible applicability. Theoretical analysis and experiments show that the method is sound.
Rafferty, Miriam R; Schmidt, Peter N; Luo, Sheng T; Li, Kan; Marras, Connie; Davis, Thomas L; Guttman, Mark; Cubillos, Fernando; Simuni, Tanya
2017-01-01
Research-based exercise interventions improve health-related quality of life (HRQL) and mobility in people with Parkinson's disease (PD). To examine whether exercise habits were associated with changes in HRQL and mobility over two years. We identified a cohort of National Parkinson Foundation Quality Improvement Initiative (NPF-QII) participants with three visits. HRQL and mobility were measured with the Parkinson's Disease Questionnaire (PDQ-39) and Timed Up and Go (TUG). We compared self-reported regular exercisers (≥2.5 hours/week) with people who did not exercise 2.5 hours/week. Then we quantified changes in HRQL and mobility associated with 30-minute increases in exercise, across PD severity, using mixed effects regression models. Participants with three observational study visits (n = 3408) were younger, with milder PD, than participants with fewer visits. After 2 years, consistent exercisers and people who started to exercise regularly after their baseline visit had smaller declines in HRQL and mobility than non-exercisers (p < 0.05). Non-exercisers worsened by 1.37 points on the PDQ-39 and a 0.47 seconds on the TUG per year. Increasing exercise by 30 minutes/week was associated with slower declines in HRQL (-0.16 points) and mobility (-0.04 sec). The benefit of exercise on HRQL was greater in advanced PD (-0.41 points) than mild PD (-0.14 points; p < 0.02). Consistently exercising and starting regular exercise after baseline were associated with small but significant positive effects on HRQL and mobility changes over two years. The greater association of exercise with HRQL in advanced PD supports improving encouragement and facilitation of exercise in advanced PD.
NASA Astrophysics Data System (ADS)
Susyanto, Nanang
2017-12-01
We propose a simple derivation of the Cramér-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint on the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter onto the appropriate inner product spaces.
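For orientation, the constrained CRLB that such a projection argument recovers has a well-known closed form (the notation below is standard, e.g. Stoica and Ng's formulation, and is not taken from the abstract):

```latex
% Unconstrained model: Fisher information $I(\theta)$, CRLB $= I(\theta)^{-1}$.
% Equality constraint $g(\theta) = 0$ with Jacobian $G(\theta)$, and $U$ a matrix
% whose columns form an orthonormal basis of the null space of $G$, i.e. $G U = 0$.
% Then any unbiased estimator $\hat\theta$ satisfying the constraint obeys
\operatorname{Cov}(\hat\theta) \;\succeq\; U \bigl(U^{\mathsf T} I(\theta)\, U\bigr)^{-1} U^{\mathsf T}.
```

The matrix U plays the role of the tangent space of the restricted submodel, which is exactly what the implicit function theorem supplies in the derivation sketched above.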
Density of convex intersections and applications
Rautenberg, C. N.; Rösel, S.
2017-01-01
In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301
Analysis of dynamically stable patterns in a maze-like corridor using the Wasserstein metric.
Ishiwata, Ryosuke; Kinukawa, Ryota; Sugiyama, Yuki
2018-04-23
The two-dimensional optimal velocity (2d-OV) model represents a dissipative system with asymmetric interactions, thus being suitable to reproduce behaviours such as pedestrian dynamics and the collective motion of living organisms. In this study, we found that particles in the 2d-OV model form optimal patterns in a maze-like corridor. Then, we estimated the stability of such patterns using the Wasserstein metric. Furthermore, we mapped these patterns into the Wasserstein metric space and represented them as points in a plane. As a result, we discovered that the stability of the dynamical patterns is strongly affected by the model sensitivity, which controls the motion of each particle. In addition, we verified the existence of two stable macroscopic patterns which were cohesive, stable, and appeared regularly over the time evolution of the model.
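The Wasserstein metric itself is simple to experiment with. A hypothetical 1-D illustration using SciPy (the study works with 2-D particle patterns; the sample values below are invented):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two empirical 1-D distributions, e.g. particle coordinates from two
# snapshots of a simulation.
a = np.array([0.0, 1.0, 2.0])
b = np.array([0.5, 1.5, 2.5])

# W1 distance: minimal average distance that mass must be transported
# to turn one empirical distribution into the other.
d = wasserstein_distance(a, b)
print(d)  # each of the three unit masses moves by 0.5, so d = 0.5
```

Mapping a set of patterns into "Wasserstein space" then amounts to computing such pairwise distances and embedding the resulting distance matrix in a plane, e.g. with multidimensional scaling.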
Very highly excited vibrational states of LiCN using a discrete variable representation
NASA Astrophysics Data System (ADS)
Henderson, James R.; Tennyson, Jonathan
Calculations are presented for the lowest 900 vibrational (J = 0) states of the LiCN floppy system for a two-dimensional potential energy surface (rCN frozen). Most of these states lie well above the barrier separating the two linear isomers of the molecule and the point where the classical dynamics of the system becomes chaotic. Analysis of the wavefunctions of individual states in the high-energy region shows that while most have an irregular nodal structure, a significant number of states appear regular, corresponding to solutions of standard, 'mode localized' Hamiltonians. Motions corresponding in zero order to Li-CN and Li-NC normal modes, as well as free-rotor states, are identified. The distribution of level spacings is also studied and yields results in good agreement with those obtained by analysing nodal structures.
A lattice approach to spinorial quantum gravity
NASA Technical Reports Server (NTRS)
Renteln, Paul; Smolin, Lee
1989-01-01
A new lattice regularization of quantum general relativity based on Ashtekar's reformulation of Hamiltonian general relativity is presented. In this form, quantum states of the gravitational field are represented within the physical Hilbert space of a Kogut-Susskind lattice gauge theory. The gauge field of the theory is a complexified SU(2) connection which is the gravitational connection for left-handed spinor fields. The physical states of the gravitational field are those which are annihilated by additional constraints which correspond to the four constraints of general relativity. Lattice versions of these constraints are constructed. Those corresponding to the three-dimensional diffeomorphism generators move states associated with Wilson loops around on the lattice. The lattice Hamiltonian constraint has a simple form, and a correspondingly simple interpretation: it is an operator which cuts and joins Wilson loops at points of intersection.
14 CFR 1259.101 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... definitions shall apply: (a) Field related to space means any academic discipline or field of study (including the physical, natural and biological sciences, and engineering, space technology, education, economics...) Institution of higher education means any college or university in any State which: (1) Admits as regular...
Global Regularity of 2D Density Patches for Inhomogeneous Navier-Stokes
NASA Astrophysics Data System (ADS)
Gancedo, Francisco; García-Juárez, Eduardo
2018-07-01
This paper is about Lions' open problem on density patches (Lions in Mathematical topics in fluid mechanics. Vol. 1, volume 3 of Oxford Lecture Series in Mathematics and its Applications, Clarendon Press, Oxford University Press, New York, 1996): whether or not the inhomogeneous incompressible Navier-Stokes equations preserve the initial regularity of the free boundary given by density patches. Using classical Sobolev spaces for the velocity, we first establish the propagation of C^{1+γ} regularity with 0 < γ < 1 in the case of positive density. Furthermore, we go beyond this to show the persistence of a geometrical quantity such as the curvature. In addition, we obtain a proof of C^{2+γ} regularity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paparó, M.; Benkő, J. M.; Hareter, M.
A sequence search method was developed to search for regular frequency spacing in δ Scuti stars through visual inspection and an algorithmic search. We searched for sequences of quasi-equally spaced frequencies, containing at least four members per sequence, in 90 δ Scuti stars observed by CoRoT. We found an unexpectedly large number of independent series of regular frequency spacing in 77 δ Scuti stars (from one to eight sequences) in the non-asymptotic regime. We introduce the sequence search method, presenting the sequences and echelle diagram of CoRoT 102675756 and the structure of the algorithmic search. Four sequences (echelle ridges) were found in the 5-21 d⁻¹ region, where the pairs of sequences are shifted (between 0.5 and 0.59 d⁻¹) by twice the value of the estimated rotational splitting frequency (0.269 d⁻¹). The general conclusions for the whole sample are also presented in this paper. The statistics of the spacings derived by the sequence search method and by FT (Fourier transform of the frequencies), and the statistics of the shifts, are also compared. In many stars more than one almost equally valid spacing appeared. The model frequencies of FG Vir and their rotationally split components were used to formulate the possible explanation that one spacing is the large separation while the other is the sum of the large separation and the rotational frequency. In CoRoT 102675756, the two spacings (2.249 and 1.977 d⁻¹) are in better agreement with the sum of a possible 1.710 d⁻¹ large separation and two or one times, respectively, the value of the rotational frequency.
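The "Fourier transform of the frequencies" idea can be sketched in a few lines: treat each detected peak frequency as a unit impulse and scan trial spacings; when the frequencies form a quasi-regular comb, all phase terms align at the true spacing. This is a hypothetical illustration (function name and test values invented), not the paper's algorithm.

```python
import numpy as np

def spacing_spectrum(freqs, trial_spacings):
    """Amplitude of the Fourier transform of a set of detected peak
    frequencies (treated as unit impulses), evaluated at 1/spacing.
    A comb of spacing s makes all phase terms align, giving a peak of
    height len(freqs) at trial spacing s."""
    freqs = np.asarray(freqs, dtype=float)
    phases = np.outer(1.0 / np.asarray(trial_spacings, dtype=float), freqs)
    return np.abs(np.exp(2j * np.pi * phases).sum(axis=1))

# A comb of six frequencies with spacing 2.25 d^-1 (illustrative values):
freqs = 5.0 + 2.25 * np.arange(6)
trial = np.linspace(1.5, 3.0, 301)
best = trial[int(np.argmax(spacing_spectrum(freqs, trial)))]
```

With real frequency lists the spectrum shows several competing peaks, which is consistent with the "more than one almost equally valid spacing" reported in the abstract.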
NASA Technical Reports Server (NTRS)
Burris, John; McGee, Thomas; Hoegy, Walt; Newman, Paul; Lait, Leslie; Twigg, Laurence; Sumnicht, Grant; Heaps, William; Hostetler, Chris; Neuber, Roland;
2001-01-01
NASA Goddard Space Flight Center's Airborne Raman Ozone, Temperature and Aerosol Lidar (AROTEL) measured extremely cold temperatures during all three deployments (December 1-16, 1999, January 14-29, 2000 and February 27-March 15, 2000) of the Sage III Ozone Loss and Validation Experiment (SOLVE). Temperatures were significantly below values observed in previous years with large regions regularly below 191 K and frequent temperature retrievals yielding values at or below 187 K. Temperatures well below the saturation point of type I polar stratospheric clouds (PSCs) were regularly encountered but their presence was not well correlated with PSCs observed by the NASA Langley Research Center's Aerosol Lidar co-located with AROTEL. Temperature measurements by meteorological sondes launched within areas traversed by the DC-8 showed minimum temperatures consistent in time and vertical extent with those derived from AROTEL data. Calculations to establish whether PSCs could exist at measured AROTEL temperatures and observed mixing ratios of nitric acid and water vapor showed large regions favorable to PSC formation. On several occasions measured AROTEL temperatures up to 10 K below the NAT saturation temperature were insufficient to produce PSCs even though measured values of nitric acid and water were sufficient for their formation.
Galaxy redshift surveys with sparse sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro
2013-12-01
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, the physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with the precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions), with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly spaced sampling yields an unbiased power spectrum with no window function effect, and that deviations from regularly spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metric of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859
Well-posedness of characteristic symmetric hyperbolic systems
NASA Astrophysics Data System (ADS)
Secchi, Paolo
1996-06-01
We consider the initial-boundary-value problem for quasi-linear symmetric hyperbolic systems with characteristic boundary of constant multiplicity. We show the well-posedness in Hadamard's sense (i.e., existence, uniqueness and continuous dependence of solutions on the data) of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary.
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Scholl, Brian J.; Chun, Marvin M.; Johnson, Marcia K.
2009-01-01
Our environment contains regularities distributed in space and time that can be detected by way of statistical learning. This unsupervised learning occurs without intent or awareness, but little is known about how it relates to other types of learning, how it affects perceptual processing, and how quickly it can occur. Here we use fMRI during…
Piezoelectric devices for vibration suppression: Modeling and application to a truss structure
NASA Technical Reports Server (NTRS)
Won, Chin C.; Sparks, Dean W., Jr.; Belvin, W. Keith; Sulla, Jeff L.
1993-01-01
For a space structure assembled from truss members, an effective way to control the structure may be to replace the regular truss elements with active members. The active members play the role of load-carrying elements as well as actuators. A piezo strut, made of a stack of piezoceramics, may be an ideal active member to be integrated into a truss space structure. An electrically driven piezo strut generates a pair of forces, and is considered a two-point actuator in contrast to a one-point actuator such as a thruster or a shaker. To achieve good structural vibration control, sensing signals compatible with the control actuators are desirable. A strain gage or a piezo film with proper signal conditioning, measuring member strain or strain rate, respectively, is an ideal control sensor for use with a piezo actuator. The Phase 0 CSI Evolutionary Model (CEM) at NASA Langley Research Center used cold air thrusters as actuators to control both rigid-body motions and flexible-body vibrations. For the Phase 1 and 2 CEM, it is proposed to use piezo struts to control the flexible modes and thrusters to control the rigid-body modes. A ten-bay truss structure with active piezo struts was built to study the modeling, controller design, and experimental issues. In this paper, the ten-bay structure with piezo active members is modeled using an energy method approach. Decentralized and centralized control schemes are designed and implemented, and preliminary analytical and experimental results are presented.
Making Carbon-Nanotube Arrays Using Block Copolymers: Part 2
NASA Technical Reports Server (NTRS)
Bronikowski, Michael
2004-01-01
Some changes have been incorporated into a proposed method of manufacturing regular arrays of precisely sized, shaped, positioned, and oriented carbon nanotubes. Such arrays could be useful as mechanical resonators for signal filters and oscillators, and as electrophoretic filters for use in biochemical assays. A prior version of the method was described in "Block Copolymers as Templates for Arrays of Carbon Nanotubes" (NPO-30240), NASA Tech Briefs, Vol. 27, No. 4 (April 2003), page 56. To recapitulate from that article: As in other previously reported methods, carbon nanotubes would be formed by decomposition of carbon-containing gases over nanometer-sized catalytic metal particles that had been deposited on suitable substrates. Unlike in other previously reported methods, the catalytic metal particles would not be so randomly and densely distributed as to give rise to thick, irregular mats of nanotubes with a variety of lengths, diameters, and orientations. Instead, in order to obtain regular arrays of spaced-apart carbon nanotubes as nearly identical as possible, the catalytic metal particles would be formed in predetermined regular patterns with precise spacings. The regularity of the arrays would be ensured by the use of nanostructured templates made of block copolymers.
Soap Films and GeoGebra in the Study of Fermat and Steiner Points
ERIC Educational Resources Information Center
Flores, Alfinio; Park, Jungeun
2018-01-01
We discuss how mathematics and secondary mathematics education majors developed an understanding of Fermat points for the triangle as well as Steiner points for the square and regular pentagon, and also of soap film configurations between parallel plates where forces are in equilibrium. The activities included the use of soap films and the…
7 CFR 1955.120 - Payment of points (housing).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 14 2010-01-01 2009-01-01 true Payment of points (housing). 1955.120 Section 1955.120... Property § 1955.120 Payment of points (housing). To effect regular sale of inventory SFH property to a purchaser who is financing the purchase of the property with a non-FmHA or its successor agency under Public...
DOT National Transportation Integrated Search
2007-08-30
Although the current crew rest and duty restrictions for commercial space transportation remain in place, the Federal Aviation Administration (FAA) continues to review the regulation on a regular basis for validity and efficacy based on input from sc...
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.
2016-12-01
Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. 
While mass recovery generally deteriorated with increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy. POD is also computationally superior, requiring only 2.5 minutes per inversion compared with 3 hours for CTL.
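The core of the POD-constrained approach described above, compressing physics-based training images into a few basis vectors and searching only over their coefficients, can be sketched as follows. Everything here is illustrative: the synthetic 1-D "plumes", the choice of 10 retained modes, and the least-squares projection standing in for the full geophysical inversion are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training images: 200 synthetic 1-D "plumes" (Gaussian bumps of varying
# position and width), flattened into columns of a snapshot matrix.
x = np.linspace(0.0, 1.0, 100)
snapshots = np.column_stack([
    np.exp(-((x - c) ** 2) / (2.0 * w ** 2))
    for c, w in zip(rng.uniform(0.2, 0.8, 200), rng.uniform(0.05, 0.15, 200))
])

# POD basis vectors = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 10                      # number of retained basis vectors
basis = U[:, :r]            # reduced space: 10 unknowns instead of 100

# A "true" plume consistent with the training set, reconstructed by
# least-squares projection onto the reduced space (a stand-in for
# conditioning the coefficients on geophysical measurements).
target = np.exp(-((x - 0.5) ** 2) / (2.0 * 0.1 ** 2))
coeffs = basis.T @ target              # optimal coefficients (basis is orthonormal)
recon = basis @ coeffs
rel_err = np.linalg.norm(recon - target) / np.linalg.norm(target)
```

The dimension reduction is what makes the inversion both stable and fast, which is consistent with the runtime advantage reported above; when the target departs from the training assumptions (as in the bimodal plume), the fixed basis limits what can be recovered.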
On a model of electromagnetic field propagation in ferroelectric media
NASA Astrophysics Data System (ADS)
Picard, Rainer
2007-04-01
The Maxwell system in an anisotropic, inhomogeneous medium with a non-linear memory effect produced by a Maxwell-type system for the polarization is investigated under low regularity assumptions on data and domain. The particular form of memory in the system is motivated by a model for electromagnetic wave propagation in ferroelectric materials suggested by Greenberg, MacCamy and Coffman [J.M. Greenberg, R.C. MacCamy, C.V. Coffman, On the long-time behavior of ferroelectric systems, Phys. D 134 (1999) 362-383]. To avoid unnecessary regularity requirements the problem is approached as a system of space-time operator equations in the framework of extrapolation spaces (Sobolev lattices), a theoretical framework developed in [R. Picard, Evolution equations as space-time operator equations, Math. Anal. Appl. 173 (2) (1993) 436-458; R. Picard, Evolution equations as operator equations in lattices of Hilbert spaces, Glasnik Mat. 35 (2000) 111-136]. A solution theory for a large class of ferroelectric materials confined to an arbitrary open set (with suitably generalized boundary conditions) is obtained.
Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei
2014-12-01
Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.
Astrophysical payload accommodation on the space station
NASA Technical Reports Server (NTRS)
Woods, B. P.
1985-01-01
Surveys of potential space station astrophysics payload requirements and existing point mount design concepts were performed to identify potential design approaches for accommodating astrophysics instruments from space station. Most existing instrument pointing systems were designed for operation from the space shuttle and it is unlikely that they will sustain their performance requirements when exposed to the space station disturbance environment. The technology exists or is becoming available so that precision pointing can be provided from the space station manned core. Development of a disturbance insensitive pointing mount is the key to providing a generic system for space station. It is recommended that the MSFC Suspended Experiment Mount concept be investigated for use as part of a generic pointing mount for space station. Availability of a shirtsleeve module for instrument change out, maintenance and repair is desirable from the user's point of view. Addition of a shirtsleeve module on space station would require a major program commitment.
ERIC Educational Resources Information Center
Jackson, Christa; Wilhelm, Jennifer Anne; Lamar, Mary; Cole, Merryn
2015-01-01
This study investigated sixth-grade middle-level students' geometric spatial development by gender and race within and between control and experimental groups at two middle schools as they participated in an Earth/Space unit. The control group utilized a regular Earth/Space curriculum and the experimental group used a National Aeronautics and…
Statistics of natural reverberation enable perceptual separation of sound and space
Traer, James; McDermott, Josh H.
2016-01-01
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
NASA Astrophysics Data System (ADS)
Schöpfer, Martin; Lehner, Florian; Grasemann, Bernhard; Kaserer, Klemens; Hinsch, Ralph
2017-04-01
John G. Ramsay's sketch of structures developed in a layer progressively folded and deformed by tangential longitudinal strain (Figure 7-65 in Folding and Fracturing of Rocks) and the associated strain pattern analysis have been reproduced in many monographs on Structural Geology and are referred to in numerous publications. Although the origin of outer-arc extension fractures is well-understood and documented in many natural examples, geomechanical factors controlling their (finite or saturation) spacing are hitherto unexplored. This study investigates the formation of bending-induced fractures during constant-curvature forced folding using Distinct Element Method (DEM) numerical modelling. The DEM model comprises a central brittle layer embedded within weaker (low modulus) elastic layers; the layer interfaces are frictionless (free slip). Folding of this three-layer system is enforced by a velocity boundary condition at the model base, while a constant overburden pressure is maintained at the model top. The models illustrate several key stages of fracture array development: (i) Prior to the onset of fracture, the neutral surface is located midway between the layer boundaries; (ii) A first set of regularly spaced fractures develops once the tensile stress in the outer-arc equals the tensile strength of the layer. Since the layer boundaries are frictionless, these bending-induced fractures propagate through the entire layer; (iii) After the appearance of the first fracture set, the rate of fracture formation decreases rapidly and so-called infill fractures develop approximately midway between two existing fractures (sequential infilling); (iv) Eventually no new fractures form, irrespective of any further increase in fold curvature (fracture saturation). Analysis of the interfacial normal stress distributions suggests that at saturation the fracture-bound blocks are subjected to a loading condition similar to three-point bending. 
Using classical beam theory an analytical solution is derived for the critical fracture spacing, i.e. the spacing below which the maximum tensile stress cannot reach the layer strength. The model results are consistent with an approximate analytical solution, and illustrate that the spacing of bending-induced fractures is proportional to layer thickness and a square root function of the ratio of layer tensile strength to confining pressure. Although highly idealised, models and analysis presented in this study offer an explanation for fracture saturation during folding and point towards certain key factors that may control fracture spacing in natural systems.
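The scaling stated above can be recovered from a simple beam-theory estimate. The following sketch assumes an idealised three-point-bending load with total load per unit breadth P ∼ p s, and absorbs all geometric factors into a constant of order unity, so it reproduces only the functional form, not the paper's exact coefficient:

```latex
% Three-point bending of a fracture-bound block of thickness h and span s,
% loaded by the confining pressure p (load per unit breadth P \sim p\,s):
\sigma_{\max} \;\sim\; \frac{3 P s}{2 h^{2}} \;\sim\; \frac{3\, p\, s^{2}}{2 h^{2}} .
% Infill fracturing ceases once \sigma_{\max} can no longer reach the
% tensile strength T_{0}; the critical (saturation) spacing thus scales as
s_{\mathrm{cr}} \;\sim\; h \sqrt{\frac{2\, T_{0}}{3\, p}} \;\propto\; h\,\sqrt{T_{0}/p} .
```

This reproduces the dependence reported above: spacing proportional to layer thickness and to the square root of the strength-to-confining-pressure ratio.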
Manned Space Flight Experiments Symposium: Gemini Missions III and IV
NASA Technical Reports Server (NTRS)
1965-01-01
This is a compilation of papers on in-flight experiments presented at the first symposium of a series, the Manned Space Flight Experiments Symposium, sponsored by the National Aeronautics and Space Administration. The results of experiments conducted during Gemini Missions III and IV are covered. These symposia are to be conducted for the scientific community at regular intervals on the results of experiments carried out in conjunction with manned space flights.
NASA Astrophysics Data System (ADS)
Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF exhibiting the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.
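Morozov's discrepancy principle invoked above can be illustrated on a generic Tikhonov-regularized toy problem; the kernel, noise level, and λ grid below are invented for the illustration and have nothing to do with the WDF inversion itself. The rule: among regularized solutions, pick the strongest regularization whose data misfit still matches the expected noise norm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned linear inverse problem  d = G m + noise  (toy example)
n = 50
G = np.array([[np.exp(-0.1 * abs(i - j)) for j in range(n)] for i in range(n)])
m_true = np.sin(np.linspace(0.0, np.pi, n))
noise_level = 0.05
d = G @ m_true + noise_level * rng.standard_normal(n)

# Morozov's discrepancy principle: among Tikhonov solutions
#   m_lam = argmin ||G m - d||^2 + lam ||m||^2,
# pick the largest lam whose residual norm does not exceed the
# expected noise norm  delta = noise_level * sqrt(n).
delta = noise_level * np.sqrt(n)
chosen = None
for lam in np.logspace(2, -8, 60):          # sweep from strong to weak damping
    m_lam = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)
    if np.linalg.norm(G @ m_lam - d) <= delta:
        chosen = (lam, m_lam)
        break
lam_star, m_star = chosen
```

The abstract's observation that GCV and the L-curve can fail where the discrepancy principle works reflects a practical point: the discrepancy principle needs a noise estimate (delta above), whereas the "fully data-driven" criteria trade that requirement for less reliable parameter selection.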
Mencière, Maxime L; Epinette, Jean-Alain; Gabrion, Antoine; Arnalsteen, Damien; Mertl, Patrice
2014-10-01
A full range of motion after total knee arthroplasty has become increasingly requested by our patients, leading to novel designs of knee implants, the so-called "hyperflex" knees. The aim of the present study was to determine whether or not hyperflexion of operated knees really improves the patients' quality of life. A retrospective comparative case-control study was carried out to compare the clinical results of two types of knee prosthesis, using two homogeneous paired groups of patients: 45 cases of a "hyperflex" model (RP-F) and a control group of 43 cases of a "regular design" model (Triathlon), in terms of expected postoperative flexion. The hyperflex group demonstrated significantly higher mean passive flexion, at 119.9° in the RP-F group versus 111.1° in the Triathlon group. However, global results in the "regular" control group were significantly better than in the "hyperflex" study group, in both IKS knee and functional scores: 84.4 points (RP-F) vs. 89.8 points (Triathlon), and 84.6 points (RP-F) vs. 89.5 points (Triathlon), respectively. Moreover, the self-administered KOOS questionnaire was significantly in favor of the control group, with global postoperative KOOS scores of 73.5 points for RP-F knees versus 86.0 points for Triathlon knees. The quality of life of operated patients after TKA should obviously be considered the main priority, and it was better achieved by a "regular design" in our study. Hence "high flexion" cannot be considered an absolute target when choosing a model for total knee arthroplasty.
NASA Astrophysics Data System (ADS)
Ori, Amos
2016-01-01
Almheiri, Marolf, Polchinski, and Sully pointed out that for a sufficiently old black hole (BH), the set of assumptions known as the complementarity postulates appears to be inconsistent with the assumption of local regularity at the horizon. They concluded that the horizon of an old BH is likely to be the locus of local irregularity, a "firewall". Here I point out that if one adopts a different assumption, namely that semiclassical physics holds throughout its anticipated domain of validity, then the inconsistency is avoided, and the horizon retains its regularity. In this alternative viewpoint, the vast portion of the original BH information remains trapped inside the BH throughout the semiclassical domain of evaporation, and possibly leaks out later on. This appears to be an inevitable outcome of semiclassical gravity (if assumed to apply throughout its anticipated domain of validity).
Suggestions for Accommodating the Crippled in Regular Buildings.
ERIC Educational Resources Information Center
Michigan State Board of Education, Lansing.
Architectural guideline specifications are given for--(1) doors, (2) floors, (3) toilet rooms, and (4) water fountains. Suggestions for area locations and capabilities are given for--(1) classrooms, (2) playgrounds, (3) auditoriums, (4) physical and/or occupational therapy, (5) storage space, and (6) resting space. (MH)
NASA Astrophysics Data System (ADS)
Norsk, P.; Simonsen, L. C.; Alwood, J.
2018-02-01
Investigations of mammalian cell cultures as well as organs-on-chips will be done from the Deep Space Gateway by telemetry. Cells will be monitored regularly for metabolic activity, growth, and viability, and results compared to ground control data.
34 CFR 643.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2012 CFR
2012-07-01
... school enrollment of participants. (3) (3 points) Secondary school graduation (regular secondary school diploma). Whether the applicant met or exceeded its objective regarding the graduation of participants... standard number of years. (4) (1.5 points) Secondary school graduation (rigorous secondary school program...
34 CFR 643.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2013 CFR
2013-07-01
... school enrollment of participants. (3) (3 points) Secondary school graduation (regular secondary school diploma). Whether the applicant met or exceeded its objective regarding the graduation of participants... standard number of years. (4) (1.5 points) Secondary school graduation (rigorous secondary school program...
34 CFR 643.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2011 CFR
2011-07-01
... school enrollment of participants. (3) (3 points) Secondary school graduation (regular secondary school diploma). Whether the applicant met or exceeded its objective regarding the graduation of participants... standard number of years. (4) (1.5 points) Secondary school graduation (rigorous secondary school program...
34 CFR 643.22 - How does the Secretary evaluate prior experience?
Code of Federal Regulations, 2014 CFR
2014-07-01
... school enrollment of participants. (3) (3 points) Secondary school graduation (regular secondary school diploma). Whether the applicant met or exceeded its objective regarding the graduation of participants... standard number of years. (4) (1.5 points) Secondary school graduation (rigorous secondary school program...
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.
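The basic ingredient of such operators, a least-squares fit of a truncated Taylor expansion over a scattered neighbourhood from which the Laplacian is read off as f_xx + f_yy, can be sketched as follows. This minimal version omits the paper's renormalisation correction, and the neighbour count is an assumed parameter.

```python
import numpy as np

def meshless_laplacian(points, values, i, k=12):
    """Estimate the Laplacian at point i from its k nearest neighbours by
    least-squares fitting a second-order Taylor expansion (a sketch of the
    idea only, not the paper's renormalised operators)."""
    d = points - points[i]
    idx = np.argsort(np.sum(d ** 2, axis=1))[1:k + 1]   # skip the point itself
    dx, dy = d[idx, 0], d[idx, 1]
    # unknown coefficients: [f_x, f_y, f_xx, f_yy, f_xy]
    A = np.column_stack([dx, dy, 0.5 * dx ** 2, 0.5 * dy ** 2, dx * dy])
    rhs = values[idx] - values[i]
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef[2] + coef[3]          # f_xx + f_yy

# scattered points in the unit square; test field f = x^2 + y^2 (Laplacian = 4)
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(400, 2))
vals = pts[:, 0] ** 2 + pts[:, 1] ** 2
i0 = int(np.argmin(np.sum((pts - 0.5) ** 2, axis=1)))   # an interior point
lap = meshless_laplacian(pts, vals, i0)
```

Because the test field is exactly quadratic, the least-squares fit reproduces the Laplacian to machine precision regardless of how irregular the point arrangement is; for general fields the accuracy depends on the neighbourhood geometry, which is what the renormalisation in the paper addresses.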
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree decomposition than a regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
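The k-d tree balancing step can be sketched as a serial recursive median split (an illustrative sketch, not the authors' distributed implementation; the leaf size and axis-cycling rule are assumptions):

```python
import numpy as np

def kdtree_partition(points, depth=0, leaf_size=64):
    """Recursively split a point set at the median along cycling axes,
    producing nearly equal-sized blocks for distribution among processes."""
    n = len(points)
    if n <= leaf_size:
        return [points]
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis])
    mid = n // 2  # median split keeps the two halves balanced
    return (kdtree_partition(points[order[:mid]], depth + 1, leaf_size)
            + kdtree_partition(points[order[mid:]], depth + 1, leaf_size))

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))  # a spatially clustered point cloud
blocks = kdtree_partition(pts)
sizes = [len(b) for b in blocks]
```

Unlike a regular grid decomposition, the block sizes here differ by at most one point no matter how the input is clustered, which is the load-balancing property the abstract exploits.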
Curvature of Super Diff(S¹)/S¹
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, P.; Ramond, P.
Motivated by the work of Bowick and Rajeev, we calculate the curvature of the infinite-dimensional flag manifolds Diff(S¹)/S¹ and Super Diff(S¹)/S¹ using standard finite-dimensional coset space techniques. We regularize the infinity by zeta-function regularization and recover the conformal and superconformal anomalies respectively for a specific choice of the torsion.
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate convergence and reduce the computational load. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method rapidly attains a low mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-plus-noise ratio performance than other algorithms.
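The splitting-variable scheme described above follows the standard ADMM pattern for ℓ1-regularized least squares; a generic sketch of that pattern (not the STAP filter itself — the operator `A`, `rho`, `lam`, and iteration count are illustrative choices):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via the splitting x = z:
    x-update is a ridge solve, z-update a soft threshold, u the scaled dual."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)   # factor once; reused every iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u += x - z                    # dual ascent enforcing x = z
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[:5] = [3.0, -2.0, 4.0, 1.5, -3.0]   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = admm_lasso(A, b, lam=0.5)
```

The alternating recursion converges without any line search, which is what makes the ADMM variant attractive for fast filter-weight estimation.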
Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem
NASA Astrophysics Data System (ADS)
Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.
2017-05-01
In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
... for Four Decimal Point Pricing for Block and Exchange for Physical (``EFPs'') Trades August 8, 2011... block trades and the futures component of EFP trades to be traded/priced in four decimal points. Regular trades (non-block or non-EFP) will continue to trade in only two decimal points. The text of the...
"PowerPoint[R] Engagement" Techniques to Foster Deep Learning
ERIC Educational Resources Information Center
Berk, Ronald A.
2011-01-01
The purpose of this article is to describe a set of strategies with which teachers may already be familiar and, perhaps, use regularly, but not always in the context of a formal PowerPoint[R] presentation. Here are the author's top 10 engagement techniques that fit neatly within any version of PowerPoint[R]. Some of these may also be used with…
Extremal Correlators in the Ads/cft Correspondence
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Freedman, Daniel Z.; Mathur, Samir D.; Matusis, Alec; Rastelli, Leonardo
The non-renormalization of the 3-point functions
SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space
Lustig, Michael; Pauly, John M.
2010-01-01
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
On Volterra quadratic stochastic operators with continual state space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganikhodjaev, Nasir; Hamzah, Nur Zatul Akmar
2015-05-15
Let (X,F) be a measurable space and S(X,F) the set of all probability measures on (X,F), where X is a state space and F is a σ-algebra on X. We consider a nonlinear transformation (quadratic stochastic operator) defined by (Vλ)(A) = ∫_X ∫_X P(x,y,A) dλ(x) dλ(y), where P(x,y,A) is regarded as a function of two variables x and y with fixed A ∈ F. A quadratic stochastic operator V is called regular if, for any initial measure, the strong limit lim_{n→∞} V^n(λ) exists. In this paper, we construct a family of quadratic stochastic operators defined on the segment X = [0,1] with Borel σ-algebra F on X, prove their regularity, and show that the limit measure is a Dirac measure.
Benitez, P; Losada, J C; Benito, R M; Borondo, F
2015-10-01
A study of the dynamical characteristics of the phase space corresponding to the vibrations of the LiNC-LiCN molecule using an analysis based on the small alignment index (SALI) is presented. SALI is a good indicator of chaos that can easily determine whether a given trajectory is regular or chaotic regardless of the dimensionality of the system, and can also provide a wealth of dynamical information when conveniently implemented. In two-dimensional (2D) systems SALI maps are computed as 2D phase space representations, where the SALI asymptotic values are represented in color scale. We show here how these maps provide full information on the dynamical phase space structure of the LiNC-LiCN system, even quantifying numerically the volume of the different zones of chaos and regularity as a function of the molecule excitation energy.
Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines
Julio L. Guardado; William T. Sommers
1977-01-01
The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...
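The distance-weighting first guess can be sketched as plain inverse-distance weighting onto a regular grid (only the initial-guess step; the parabolic leapfrog correction pass is omitted, and the power-2 weighting is an assumed choice):

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0, eps=1e-12):
    """Interpolate scattered samples (xy, values) onto a regular grid by
    inverse-distance weighting: each grid node gets a convex combination
    of all samples, weighted by 1/distance**power."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.empty(gx.shape)
    for idx in np.ndindex(*gx.shape):
        d2 = (xy[:, 0] - gx[idx])**2 + (xy[:, 1] - gy[idx])**2
        w = 1.0 / (d2**(power / 2) + eps)   # eps guards exact coincidence
        out[idx] = np.sum(w * values) / np.sum(w)
    return out

rng = np.random.default_rng(3)
pts = rng.random((50, 2))                   # unevenly spaced sites in [0,1]^2
vals = np.sin(3 * pts[:, 0]) + pts[:, 1]
grid = idw_grid(pts, vals, np.linspace(0, 1, 8), np.linspace(0, 1, 8))
```

Because each node is a convex combination of the samples, the interpolated field never overshoots the observed range — a property the subsequent correction step then refines.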
Big Geo Data Services: From More Bytes to More Barrels
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Baumann, Peter
2016-04-01
The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The Holy Grail envisioned is that these approaches can be merged into a single service interface at some time. We present both grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.
Predictions of the Space Environment Services Center
NASA Technical Reports Server (NTRS)
Heckman, G. R.
1979-01-01
The types of users of the Space Environment Services Center are identified. All the data collected by the Center are listed and a short description of each primary index or activity summary is given. Each type of regularly produced forecast is described, along with the methods used to produce each prediction.
DOT National Transportation Integrated Search
2008-11-01
Although the current crew rest and duty restrictions for commercial space transportation remain in place, the Federal Aviation Administration (FAA) continues to review the regulation on a regular basis for validity and efficacy based on input from sc...
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
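The two baseline regularizers this comparison mentions can be sketched on a generic ill-conditioned linear inverse problem (an illustrative smoothing operator and parameters — not the torso-heart geometry or the STRE model itself):

```python
import numpy as np

def tikhonov(A, b, lam, order=0):
    """Solve min ||Ax - b||^2 + lam*||Lx||^2.
    order=0: L = I (zero-order Tikhonov, penalizes amplitude);
    order=1: L = first-difference operator (penalizes roughness)."""
    n = A.shape[1]
    L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

rng = np.random.default_rng(4)
n = 40
t = np.arange(n)
# Severely ill-conditioned forward operator: a Gaussian blurring kernel.
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
x_true = np.sin(2 * np.pi * t / n)
b = A @ x_true + 0.01 * rng.normal(size=n)

x0 = tikhonov(A, b, lam=1e-2, order=0)
x1 = tikhonov(A, b, lam=1e-2, order=1)
```

Without the penalty term, directly inverting A amplifies the noise by many orders of magnitude; both regularized solutions stay close to the true signal, which is the baseline behavior the STRE model is benchmarked against.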
Implicit Contractive Mappings in Modular Metric and Fuzzy Metric Spaces
Hussain, N.; Salimi, P.
2014-01-01
The notion of modular metric spaces, a natural generalization of classical modulars over linear spaces such as Lebesgue, Orlicz, Musielak-Orlicz, Lorentz, Orlicz-Lorentz, and Calderon-Lozanovskii spaces, was recently introduced. In this paper we investigate the existence of fixed points of generalized α-admissible modular contractive mappings in modular metric spaces. As applications, we derive some new fixed point theorems in partially ordered modular metric spaces, Suzuki-type fixed point theorems in modular metric spaces, and new fixed point theorems for integral contractions. In the last section, we develop an important relation between fuzzy metrics and modular metrics and deduce certain new fixed point results in triangular fuzzy metric spaces. Moreover, some examples are provided to illustrate the usability of the obtained results. PMID:25003157
NASA's Space Launch Initiative Targets Toxic Propellants
NASA Technical Reports Server (NTRS)
Hurlbert, Eric; McNeal, Curtis; Davis, Daniel J. (Technical Monitor)
2001-01-01
When manned and unmanned space flight first began, the clear and overriding design consideration was performance. Consequently, propellant combinations of all kinds were considered, tested, and, when they lifted the payload a kilometer higher, or an extra kilogram to the same altitude, they became part of our operational inventory. Cost was not considered. And with virtually all of the early work being performed by the military, safety was hardly a consideration. After all, fighting wars has always been dangerous. Those days are past now. With space flight, and the products of space flight, a regular part of our lives today, safety and cost are being reexamined. NASA's focus turns naturally to its Shuttle Space Transportation System. Designed, built, and flown for the first time in the 1970s, this system remains today America's workhorse for manned space flight. Without its tremendous lift capability and mission flexibility, the International Space Station would not exist. And the Hubble telescope would be a monument to shortsighted management, rather than the clear penetrating eye on the stars it is today. But the Shuttle system fully represents the design philosophy of its period: it is too costly to operate, and not safe enough for regular long term access to space. And one of the key reasons is the utilization of toxic propellants. This paper will present an overview of the utilization of toxic propellants on the current Shuttle system.
Suhrawardi's Epistemological Point of View and Its Educational Outcomes
ERIC Educational Resources Information Center
Nowrozi, Reza Ali; Ardakani, Seyed Hassan Hashemi; Shiri, Ali Shiravani
2012-01-01
This study investigates Suhrawardi's epistemological and philosophical point of view in order to analyze and elicit its educational outcomes. His philosophy, which can be called eclectic philosophy (involving intellect and intuition), regularly proposes a different philosophical system with intuitionist outlook. It is the combination of two…
Suzuki, Satoshi N; Kachi, Naoki; Suzuki, Jun-Ichirou
2008-09-01
During the development of an even-aged plant population, the spatial distribution of individuals often changes from a clumped pattern to a random or regular one. The development of local size hierarchies in an Abies forest was analysed for a period of 47 years following a large disturbance in 1959. In 1980 all trees in an 8 x 8 m plot were mapped and their height growth after the disturbance was estimated. Their mortality and growth were then recorded at 1- to 4-year intervals between 1980 and 2006. Spatial distribution patterns of trees were analysed by the pair correlation function. Spatial correlations between tree heights were analysed with a spatial autocorrelation function and the mark correlation function. The mark correlation function was able to detect a local size hierarchy that could not be detected by the spatial autocorrelation function alone. The small-scale spatial distribution pattern of trees changed from clumped to slightly regular during the 47 years. Mortality occurred in a density-dependent manner, which resulted in regular spacing between trees after 1980. The spatial autocorrelation and mark correlation functions revealed the existence of tree patches consisting of large trees at the initial stage. Development of a local size hierarchy was detected within the first decade after the disturbance, although the spatial autocorrelation was not negative. Local size hierarchies that developed persisted until 2006, and the spatial autocorrelation became negative at later stages (after about 40 years). This is the first study to detect local size hierarchies as a prelude to regular spacing using the mark correlation function. The results confirm that use of the mark correlation function together with the spatial autocorrelation function is an effective tool to analyse the development of a local size hierarchy of trees in a forest.
Zeta Function Regularization in Casimir Effect Calculations and J. S. DOWKER's Contribution
NASA Astrophysics Data System (ADS)
Elizalde, Emilio
2012-06-01
A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.
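The prototypical computation behind the method can be stated compactly (a textbook illustration, not a result of the paper): a divergent vacuum-mode sum, e.g. with ω_n = μn for a field on a circle, is assigned the analytically continued value of the zeta function,

```latex
E_0 \;=\; \frac{\mu}{2}\sum_{n=1}^{\infty} n
\;\longrightarrow\;
\frac{\mu}{2}\sum_{n=1}^{\infty} n^{-s}\Big|_{s=-1}
\;=\; \frac{\mu}{2}\,\zeta(-1)
\;=\; -\frac{\mu}{24},
\qquad
\zeta(s)=\sum_{n=1}^{\infty} n^{-s}\quad(\operatorname{Re} s > 1).
```

The finite value obtained from the continuation is what enters Casimir-energy calculations.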
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
Quantum field theory in spaces with closed timelike curves
NASA Astrophysics Data System (ADS)
Boulware, David G.
1992-11-01
Gott spacetime has closed timelike curves, but no locally anomalous stress energy. A complete orthonormal set of eigenfunctions of the wave operator is found in the special case of a spacetime in which the total deficit angle is 2π. A scalar quantum field theory is constructed using these eigenfunctions. The resultant interacting quantum field theory is not unitary because the field operators can create real, on-shell, particles in the noncausal region. These particles propagate for finite proper time accumulating an arbitrary phase before being annihilated at the same spacetime point as that at which they were created. As a result, the effective potential within the noncausal region is complex, and probability is not conserved. The stress tensor of the scalar field is evaluated in the neighborhood of the Cauchy horizon; in the case of a sufficiently small Compton wavelength of the field, the stress tensor is regular and cannot prevent the formation of the Cauchy horizon.
Wavelength selection in the crown splash
NASA Astrophysics Data System (ADS)
Zhang, Li V.; Brunet, Philippe; Eggers, Jens; Deegan, Robert D.
2010-12-01
The impact of a drop onto a liquid layer produces a splash that results from the ejection and dissolution of one or more liquid sheets, which expand radially from the point of impact. In the crown splash parameter regime, secondary droplets appear at fairly regularly spaced intervals along the rim of the sheet. By performing many experiments for the same parameter values, we measure the spectrum of small-amplitude perturbations growing on the rim. We show that for a range of parameters in the crown splash regime, the generation of secondary droplets results from a Rayleigh-Plateau instability of the rim, whose shape is almost cylindrical. In our theoretical calculation, we include the time dependence of the base state. The remaining irregularity of the pattern is explained by the finite width of the Rayleigh-Plateau dispersion relation. Alternative mechanisms, such as the Rayleigh-Taylor instability, can be excluded for the experimental parameters of our study.
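For a static inviscid cylinder, the Rayleigh-Plateau wavelength selection invoked above can be reproduced numerically (the paper's full calculation additionally includes the time dependence of the base state; the power-series Bessel evaluation and grid search below are implementation choices):

```python
import math

def bessel_i(n, x, terms=30):
    """Modified Bessel function of the first kind, via its power series
    (adequate here since we only evaluate it for 0 < x < 1)."""
    return sum((x / 2) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def growth2(x):
    """Dimensionless Rayleigh-Plateau growth rate squared for a liquid
    cylinder of radius R: omega^2 * rho R^3 / sigma = x(1 - x^2) I1(x)/I0(x),
    with x = k R the dimensionless wavenumber."""
    return x * (1.0 - x * x) * bessel_i(1, x) / bessel_i(0, x)

# Locate the fastest-growing mode on a fine grid of 0 < kR < 1.
xs = [i / 1000 for i in range(1, 1000)]
x_star = max(xs, key=growth2)
wavelength_over_R = 2 * math.pi / x_star
```

The maximum falls near kR ≈ 0.70, i.e. a preferred spacing of roughly nine rim radii between secondary droplets, consistent with the fairly regular spacing described in the abstract.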
A Thermodynamically Consistent Damage Model for Advanced Composites
NASA Technical Reports Server (NTRS)
Maimi, Pere; Camanho, Pedro P.; Mayugo, Joan-Andreu; Davila, Carlos G.
2006-01-01
A continuum damage model for the prediction of damage onset and structural collapse of structures manufactured in fiber-reinforced plastic laminates is proposed. The principal damage mechanisms occurring in the longitudinal and transverse directions of a ply are represented by a damage tensor that is fixed in space. Crack closure effects under load reversal are taken into account using damage variables established as a function of the sign of the components of the stress tensor. Damage activation functions based on the LaRC04 failure criteria are used to predict the different damage mechanisms occurring at the ply level. The constitutive damage model is implemented in a finite element code. The objectivity of the numerical model is assured by regularizing the dissipated energy at a material point using Bazant's Crack Band Model. To verify the accuracy of the approach, analyses of coupon specimens were performed, and the numerical predictions were compared with experimental data.
L² stability for weak solutions of the Navier-Stokes equations in R³
NASA Astrophysics Data System (ADS)
Secchi, P.
1985-11-01
We consider the motion of a viscous fluid filling the whole space R³, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. Up to now, the only available global existence theorem (other than for sufficiently small initial data) is that of weak (turbulent) solutions. From both the mathematical and the physical point of view, an interesting property is the stability of such weak solutions. We assume that v(t,x) is a solution with initial datum v₀(x). We suppose that the initial datum is perturbed and consider a weak solution u corresponding to the new initial velocity. Then we prove that, due to viscosity, the perturbed weak solution u approaches in a suitable norm the unperturbed one as time goes to +∞, without smallness assumptions on the initial perturbation.
Low rank factorization of the Coulomb integrals for periodic coupled cluster theory.
Hummel, Felix; Tsatsoulis, Theodoros; Grüneis, Andreas
2017-03-28
We study a tensor hypercontraction decomposition of the Coulomb integrals of periodic systems where the integrals are factorized into a contraction of six matrices of which only two are distinct. We find that the Coulomb integrals can be well approximated in this form already with small matrices compared to the number of real space grid points. The cost of computing the matrices scales as O(N^4) using a regularized form of the alternating least squares algorithm. The studied factorization of the Coulomb integrals can be exploited to reduce the scaling of the computational cost of expensive tensor contractions appearing in the amplitude equations of coupled cluster methods with respect to system size. We apply the developed methodologies to calculate the adsorption energy of a single water molecule on a hexagonal boron nitride monolayer in a plane wave basis set and periodic boundary conditions.
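The regularized alternating least squares idea can be sketched on the generic two-factor low-rank problem M ≈ U Vᵀ (the paper factorizes the Coulomb tensor into six matrices; this simplified analogue, with illustrative `lam` and sweep count, only shows the ALS pattern):

```python
import numpy as np

def als_lowrank(M, rank, lam=1e-6, sweeps=30, seed=0):
    """Regularized ALS: alternately solve the two ridge-regression
    subproblems min ||M - U V^T||_F^2 + lam(||U||^2 + ||V||^2),
    each of which is a small linear solve."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    reg = lam * np.eye(rank)
    for _ in range(sweeps):
        U = np.linalg.solve(V.T @ V + reg, V.T @ M.T).T  # fix V, solve for U
        V = np.linalg.solve(U.T @ U + reg, U.T @ M).T    # fix U, solve for V
    return U, V

rng = np.random.default_rng(6)
M = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 25))  # exact rank 4
U, V = als_lowrank(M, rank=4)
err = np.linalg.norm(M - U @ V.T) / np.linalg.norm(M)
```

The small regularizer keeps the normal equations well conditioned when a factor becomes nearly rank-deficient during the sweeps, which is the role of the "regularized form" mentioned in the abstract.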
NASA Astrophysics Data System (ADS)
Camacho, A. G.; Fernández, J.; Cannavò, F.
2018-02-01
We present a software package to carry out inversions of surface deformation data (any combination of InSAR, GPS, and terrestrial data, e.g., EDM, levelling) as produced by 3D free-geometry extended bodies with anomalous pressure changes. The anomalous structures are described as an aggregation of elementary cells (whose effects are estimated as coming from point sources) in an elastic half space. The linear inverse problem (considering some simple regularization conditions) is solved by means of an exploratory approach. This software represents the open implementation of a previously published methodology (Camacho et al., 2011). It can be freely used with large data sets (e.g. InSAR data sets) or with data coming from small control networks (e.g. GPS monitoring data), mainly in volcanic areas, to estimate the expected pressure bodies representing magmatic intrusions. Here, the software is applied to some real test cases.
Regularization of instabilities in gravity theories
NASA Astrophysics Data System (ADS)
Ramazanoǧlu, Fethi M.
2018-01-01
We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomena of scalar-tensor theories, which feature a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafata, K; Ren, L; Wu, Q
Purpose: To develop a data-mining methodology based on quantum clustering and machine learning to predict expected dosimetric endpoints for lung SBRT applications based on patient-specific anatomic features. Methods: Ninety-three patients who received lung SBRT at our clinic from 2011–2013 were retrospectively identified. Planning information was acquired for each patient, from which various features were extracted using in-house semi-automatic software. Anatomic features included tumor-to-OAR distances, tumor location, total-lung-volume, GTV and ITV. Dosimetric endpoints were adopted from RTOG-0195 recommendations, and consisted of various OAR-specific partial-volume doses and maximum point-doses. First, PCA analysis and unsupervised quantum-clustering were used to explore the feature-space to identify potentially strong classifiers. Secondly, a multi-class logistic regression algorithm was developed and trained to predict dose-volume endpoints based on patient-specific anatomic features. Classes were defined by discretizing the dose-volume data, and the feature-space was zero-mean normalized. Fitting parameters were determined by minimizing a regularized cost function, and optimization was performed via gradient descent. As a pilot study, the model was tested on two esophageal dosimetric planning endpoints (maximum point-dose, dose-to-5cc), and its generalizability was evaluated with leave-one-out cross-validation. Results: Quantum-clustering demonstrated a strong separation of feature-space at 15Gy across the first and second principal components of the data when the dosimetric endpoints were retrospectively identified. Maximum point-dose prediction to the esophagus demonstrated a cross-validation accuracy of 87%, and the maximum dose to 5cc demonstrated a respective value of 79%.
The largest optimized weighting factor was placed on GTV-to-esophagus distance (a factor of 10 greater than the second largest weighting factor), indicating an intuitively strong correlation between this feature and both endpoints. Conclusion: This pilot study shows that it is feasible to predict dose-volume endpoints based on patient-specific anatomic features. The developed methodology can potentially help to identify patients at risk for higher OAR doses, thus improving the efficiency of treatment planning. R01-184173.
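The training step described in the abstract (zero-mean normalization, discretized dose classes, a regularized cross-entropy cost minimized by gradient descent) can be sketched roughly as follows. The feature matrix and labels below are synthetic stand-ins, since the patient data are not public; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the anatomic feature matrix (tumor-to-OAR
# distances, GTV, ITV, ...) and discretized dose-volume classes.
X = rng.normal(size=(93, 5))
y = rng.integers(0, 3, size=93)            # 3 discretized dose bins

# Zero-mean normalization of the feature space (unit variance added too).
X = (X - X.mean(axis=0)) / X.std(axis=0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n, d, k = X.shape[0], X.shape[1], 3
lam, lr = 0.1, 0.5
W = np.zeros((d, k))
Y = np.eye(k)[y]                           # one-hot targets

# Gradient descent on the L2-regularized cross-entropy cost.
for _ in range(500):
    P = softmax(X @ W)
    W -= lr * (X.T @ (P - Y) / n + lam * W)

P = softmax(X @ W)
cost = -np.log(P[np.arange(n), y]).mean() + 0.5 * lam * (W ** 2).sum()
print(f"final regularized cost: {cost:.3f}")
```

The learned weight magnitudes play the role of the "optimized weighting factors" mentioned above: a dominant column indicates a strongly predictive feature.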
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and comparable performance in others, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
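A minimal sketch of the idea, assuming a Gaussian kernel, a subgradient step on the pinball (check) loss of quantile regression, and soft-thresholding to enforce a sparsity constraint on the kernel coefficients. This is a generic illustration, not the authors' exact estimator, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-d data; the point is the method, not the data.
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=60)
tau = 0.5                                   # quantile level

# Gaussian kernel matrix: the RKHS representation is f = K @ alpha.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))

alpha = np.zeros(60)
lam, lr = 0.01, 0.01
for _ in range(2000):
    r = y - K @ alpha
    # Subgradient of the pinball (check) loss of quantile regression.
    alpha -= lr * (-K.T @ np.where(r > 0, tau, tau - 1) / len(y))
    # Soft-thresholding: the sparsity constraint zeroes small kernel
    # coefficients, so only a few training points define the fit.
    alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lr * lam, 0.0)

n_active = int((np.abs(alpha) > 1e-8).sum())
print(f"{n_active} of {len(alpha)} training points retained")
```

The retained points act like the automatically selected "knots" the abstract alludes to.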
Strong liquid-crystalline polymeric compositions
Dowell, F.
1993-12-07
Strong liquid-crystalline polymeric (LCP) compositions of matter are described. LCP backbones are combined with liquid crystalline (LC) side chains in a manner which maximizes molecular ordering through interdigitation of the side chains, thereby yielding materials which are predicted to have superior mechanical properties over existing LCPs. The theoretical design of LCPs having such characteristics includes consideration of the spacing distance between side chains along the backbone, the need for rigid sections in the backbone and in the side chains, the degree of polymerization, the length of the side chains, the regularity of the spacing of the side chains along the backbone, the interdigitation of side chains in sub-molecular strips, the packing of the side chains on one or two sides of the backbone to which they are attached, the symmetry of the side chains, the points of attachment of the side chains to the backbone, the flexibility and size of the chemical group connecting each side chain to the backbone, the effect of semiflexible sections in the backbone and the side chains, and the choice of types of dipolar and/or hydrogen bonding forces in the backbones and the side chains for easy alignment. 27 figures.
Test Report: Low-Cost Access to TDRS Using TOPEX to Emulate Small Satellite Performance
NASA Technical Reports Server (NTRS)
Horan, Stephen
1997-01-01
This report lists the objectives and conclusions of a series of experimental contacts between the TOPEX and TDRS satellites. These experiments are designed to verify the theoretical prediction that a spin-stabilized satellite with a broad-beam, zenith-pointing antenna can have regular, significant contacts with the TDRS and use those contacts for data services. This series of experiments is a joint project between experimenters at New Mexico State University (NMSU), the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), and the Jet Propulsion Laboratory (JPL). In these experiments, we show that: (1) the satellite contacts during the experiment begin and end as predicted prior to the experiment; (2) the data contact is held for the desired contact duration; (3) the data quality through the contact is high and similar to that required by actual project needs; and (4) the receiving hardware at the White Sands Complex (WSC) is able to track the signals better than analysis of the antenna pattern effects alone would predict. We believe that these experiments successfully demonstrate the basic concept and its validity with actual spacecraft systems.
Experimental evidence for non-Abelian gauge potentials in twisted graphene bilayers
NASA Astrophysics Data System (ADS)
Yin, Long-Jing; Qiao, Jia-Bin; Zuo, Wei-Jie; Li, Wen-Tian; He, Lin
2015-08-01
Non-Abelian gauge potentials are quite relevant in subatomic physics, but they are relatively rare in a condensed matter context. Here we report experimental evidence for non-Abelian gauge potentials in twisted graphene bilayers obtained by scanning tunneling microscopy and spectroscopy. At a magic twist angle, θ ≈ (1.11 ± 0.05)°, a pronounced sharp peak, which arises from the nondispersive flat bands at the charge neutrality point, is observed in the tunneling density of states due to the action of the non-Abelian gauge fields. Moreover, we observe confined electronic states in the twisted bilayer, manifested by regularly spaced tunneling peaks with energy spacing δE ≈ vF/D ≈ 70 meV (here vF is the Fermi velocity of graphene and D is the period of the moiré patterns). This indicates that the non-Abelian gauge potentials in twisted graphene bilayers confine low-energy electrons into a triangular array of quantum dots following the modulation of the moiré patterns. Our results also directly demonstrate that the Fermi velocity in twisted bilayers can be tuned from about 10^6 m/s to zero simply by reducing the twist angle by about 2°.
Simulation of water vapor condensation on LOX droplet surface using liquid nitrogen
NASA Technical Reports Server (NTRS)
Powell, Eugene A.
1988-01-01
The formation of ice or water layers on liquid oxygen (LOX) droplets in the Space Shuttle Main Engine (SSME) environment was investigated. Formation of such ice/water layers is indicated by phase-equilibrium considerations under the conditions of high partial pressure of water vapor (steam) and low LOX droplet temperature prevailing in the SSME preburner or main chamber. An experimental investigation was begun using liquid nitrogen as a LOX simulant. A monodisperse liquid nitrogen droplet generator was developed which uses an acoustic driver to force the stream of liquid emerging from a capillary tube to break up into a stream of regularly spaced, uniformly sized spherical droplets. The atmospheric-pressure liquid nitrogen in the droplet generator reservoir was cooled below its boiling point to prevent two-phase flow from occurring in the capillary tube. An existing steam chamber was modified for injection of liquid nitrogen droplets into atmospheric-pressure superheated steam. The droplets were imaged using a stroboscopic video system and a laser shadowgraphy system. Several tests were conducted in which liquid nitrogen droplets were injected into the steam chamber. Under conditions of periodic droplet formation, images of 600 micron diameter liquid nitrogen droplets were obtained with the stroboscopic video system.
Regularity results for the minimum time function with Hörmander vector fields
NASA Astrophysics Data System (ADS)
Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa
2018-03-01
In a bounded domain of Rn with boundary given by a smooth (n - 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1 , … ,XN } subject to Hörmander's bracket generating condition. We investigate the regularity of the viscosity solution T of such problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1 , … ,XN } is a symplectic manifold. We apply our results to several examples.
Local regularity for time-dependent tug-of-war games with varying probabilities
NASA Astrophysics Data System (ADS)
Parviainen, Mikko; Ruosteenoja, Eero
2016-07-01
We study local regularity properties of value functions of time-dependent tug-of-war games. For games with constant probabilities we get local Lipschitz continuity. For more general games with probabilities depending on space and time we obtain Hölder and Harnack estimates. The games have a connection to the normalized p(x,t)-parabolic equation u_t = Δu + (p(x,t) − 2) Δ_∞^N u.
Invariant functionals in higher-spin theory
NASA Astrophysics Data System (ADS)
Vasiliev, M. A.
2017-03-01
A new construction for gauge invariant functionals in the nonlinear higher-spin theory is proposed. Being supported by differential forms closed by virtue of the higher-spin equations, invariant functionals are associated with central elements of the higher-spin algebra. In the on-shell AdS4 higher-spin theory we identify a four-form conjectured to represent the generating functional for 3d boundary correlators and a two-form argued to support charges for black hole solutions. Two actions for 3d boundary conformal higher-spin theory are associated with the two parity-invariant higher-spin models in AdS4. The peculiarity of the spinorial formulation of the on-shell AdS3 higher-spin theory, where the invariant functional is supported by a two-form, is conjectured to be related to the holomorphic factorization at the boundary. The nonlinear part of the star-product function F*(B(x)) in the higher-spin equations is argued to lead to divergences in the boundary limit, representing singularities at coinciding boundary space-time points of the factors of B(x), which can be regularized by point splitting. An interpretation of the RG flow in terms of the proposed construction is briefly discussed.
Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran
2012-07-01
1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points, and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistically homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
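The Brownian bridge at the heart of the BBMM is easy to sketch: between two fixes the position is Gaussian around the straight-line interpolation, with variance largest midway. The values below are illustrative assumptions; the published model additionally integrates over GPS error and, in the change-point extension above, lets the motion variance differ between behavioural segments.

```python
import numpy as np

# Two consecutive GPS fixes (x, y) at times t0 and t1, and a Brownian
# motion variance sigma2; all values are illustrative assumptions.
a, b = np.array([0.0, 0.0]), np.array([10.0, 5.0])
t0, t1, sigma2 = 0.0, 1.0, 4.0

def bridge_density(z, t):
    """Gaussian density of the bridge position at time t in (t0, t1)."""
    s = (t - t0) / (t1 - t0)
    mu = (1 - s) * a + s * b                 # straight-line interpolation
    var = (t1 - t0) * s * (1 - s) * sigma2   # largest midway between fixes
    d = z - mu
    return float(np.exp(-d @ d / (2 * var)) / (2 * np.pi * var))

# Averaging this density over t (and over all segments) gives the
# utilization distribution.
print(bridge_density(np.array([5.0, 2.5]), 0.5))  # peak midway: 1/(2*pi)
```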
NASA Technical Reports Server (NTRS)
2008-01-01
This is a 3D representation of the pits seen in the first Atomic Force Microscope, or AFM, images sent back from NASA's Phoenix Mars Lander. Red represents the highest point and purple represents the lowest point. The particle in the upper left corner, shown at the highest magnification ever seen from another world, is a rounded particle about one micrometer, or one millionth of a meter, across. It is a particle of the dust that cloaks Mars. Such dust particles color the Martian sky pink, feed storms that regularly envelop the planet and produce Mars' distinctive red soil. The particle was part of a sample informally called 'Sorceress' delivered to the AFM on the 38th Martian day, or sol, of the mission (July 2, 2008). The AFM is part of Phoenix's microscopic station called MECA, or the Microscopy, Electrochemistry, and Conductivity Analyzer. The AFM was developed by a Swiss-led consortium, with Imperial College London producing the silicon substrate that holds sampled particles. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Astrophysics Data System (ADS)
Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy
2017-07-01
As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. Existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.
Neutron star Interior Composition Explorer (NICER)
2017-12-08
NICER Optics Lead Takashi Okajima makes a fine adjustment to the orientation of one X-ray “concentrator” optic. The 56 optics must point in the same direction in order for NICER to achieve its science goals. The payload’s 56 mirror assemblies concentrate X-rays onto silicon detectors to gather data that will probe the interior makeup of neutron stars, including those that appear to flash regularly, called pulsars. The Neutron star Interior Composition Explorer (NICER) is a NASA Explorer Mission of Opportunity dedicated to studying the extraordinary environments — strong gravity, ultra-dense matter, and the most powerful magnetic fields in the universe — embodied by neutron stars. An attached payload aboard the International Space Station, NICER will deploy an instrument with unique capabilities for timing and spectroscopy of fast X-ray brightness fluctuations. The embedded Station Explorer for X-ray Timing and Navigation Technology demonstration (SEXTANT) will use NICER data to validate, for the first time in space, technology that exploits pulsars as natural navigation beacons. Credit: NASA/Goddard/Keith Gendreau
Research in Stochastic Processes
1988-08-31
T. Hsing, J. Hüsler and M.R. Leadbetter, On the exceedance point process for a stationary sequence, Stochastic Proc. Appl. 29, 1988, 155-169.
… Nandagopalan, On exceedance point processes for "regular" sample functions, Proc. Volume, Oberwolfach Conf. on Extreme Value Theory, J. Hüsler and R. Reiss, eds.
… exceedance point processes for stationary sequences under mild oscillation restrictions, Apr. 88, Oberwolfach Conf. on Extreme Value Theory, ed. J. Hüsler.
Mach stem formation in outdoor measurements of acoustic shocks.
Leete, Kevin M; Gee, Kent L; Neilsen, Tracianne B; Truscott, Tadd T
2015-12-01
Mach stem formation during outdoor acoustic shock propagation is investigated using spherical oxyacetylene balloons exploded above pavement. The location of the transition point from regular to irregular reflection and the path of the triple point are experimentally resolved using microphone arrays and a high-speed camera. The transition point falls between recent analytical work for weak irregular reflections and an empirical relationship derived from large explosions.
Clusters in irregular areas and lattices.
Wieczorek, William F; Delmerico, Alan M; Rogerson, Peter A; Wong, David W S
2012-01-01
Geographic areas of different sizes and shapes of polygons that represent counts or rate data are often encountered in social, economic, health, and other information. Often political or census boundaries are used to define these areas because the information is available only for those geographies. Therefore, these types of boundaries are frequently used to define neighborhoods in spatial analyses using geographic information systems and related approaches such as multilevel models. When point data can be geocoded, it is possible to examine the impact of polygon shape on spatial statistical properties, such as clustering. We utilized point data (alcohol outlets) to examine the issue of polygon shape and size on visualization and statistical properties. The point data were allocated to regular lattices (hexagons and squares) and to census areas, namely zip-code tabulation areas and tracts. The number of units in the lattices was set to be similar to the number of tract and zip-code areas. A spatial clustering statistic and visualization were used to assess the impact of polygon shape for zip- and tract-sized units. Results showed substantial similarities and notable differences across shape and size. The specific circumstances of a spatial analysis that aggregates points to polygons will determine the size and shape of the areal units to be used. The irregular polygons of census units may reflect underlying characteristics that could be missed by large regular lattices. Future research to examine the potential for using a combination of irregular polygons and regular lattices would be useful.
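As a toy illustration of how lattice resolution interacts with a clustering statistic, the sketch below aggregates simulated points (stand-ins for the geocoded outlets, which are not public) to square lattices of two sizes and computes a variance-to-mean ratio of the cell counts, a simple proxy for the clustering statistic used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated point locations in a unit square.
pts = rng.uniform(0, 1, size=(500, 2))

def vmr(points, n_cells):
    """Variance-to-mean ratio of counts on an n_cells x n_cells square
    lattice: ~1 under complete spatial randomness, >1 suggests clustering."""
    ix = np.minimum((points * n_cells).astype(int), n_cells - 1)
    counts = np.zeros((n_cells, n_cells))
    np.add.at(counts, (ix[:, 0], ix[:, 1]), 1)
    return counts.var() / counts.mean()

# The same points aggregated to coarse vs fine lattices can give different
# impressions of clustering, illustrating the size/shape effect at issue.
v_coarse, v_fine = vmr(pts, 5), vmr(pts, 20)
print(v_coarse, v_fine)
```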
Critical spaces for quasilinear parabolic evolution equations and applications
NASA Astrophysics Data System (ADS)
Prüss, Jan; Simonett, Gieri; Wilke, Mathias
2018-02-01
We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.
Solovjev, V A
1987-09-01
Today, more than 20 years after the world's first spacewalk, Soviet cosmonautics has gained extensive experience of extravehicular activity (EVA). Space suits of high reliability, onboard facilities for passing through the airlock, sets of special tools and technological rigging, as well as procedures for carrying out various EVAs, have been developed. In the course of the Salyut-7 space station's orbital operation, EVAs became regular. The author, as a participant in these EVAs, considers the main stages of human activity in space and analyzes specific problems that arose in performing such activities.
Convergence of quantum electrodynamics in a curved modification of Minkowski space.
Segal, I E; Zhou, Z
1994-01-01
The interaction and total Hamiltonians for quantum electrodynamics, in the interaction representation, are entirely regular self-adjoint operators in Hilbert space, in the universal covering manifold M of the conformal compactification of Minkowski space M0. (M is conformally equivalent to the Einstein universe E, in which M0 may be canonically imbedded.) In a fixed Lorentz frame this may be expressed as convergence in a spherical space with suitable periodic boundary conditions in time. The traditional relativistic theory is the formal limit of the present variant as the space curvature vanishes. PMID:11607455
High Order Numerical Simulation of Waves Using Regular Grids and Non-conforming Interfaces
2013-10-06
We study the propagation of waves over large regions of space with smooth, but not necessarily constant, material characteristics, separated into sub-domains by interfaces of arbitrary shape.
On the theory of drainage area for regular and non-regular points.
Bonetti, S; Bragg, A D; Porporato, A
2018-03-01
The drainage area is an important, non-local property of a landscape, which controls surface and subsurface hydrological fluxes. Its role in numerous ecohydrological and geomorphological applications has given rise to several numerical methods for its computation. However, its theoretical analysis has lagged behind. Only recently, an analytical definition for the specific catchment area was proposed (Gallant & Hutchinson 2011, Water Resour. Res. 47, W05535, doi:10.1029/2009WR008540), with the derivation of a differential equation whose validity is limited to regular points of the watershed. Here, we show that such a differential equation can be derived from a continuity equation (Chen et al. 2014, Geomorphology 219, 68-86, doi:10.1016/j.geomorph.2014.04.037) and extend the theory to critical and singular points both by applying Gauss's theorem and by means of a dynamical systems approach to define basins of attraction of local surface minima. Simple analytical examples as well as applications to more complex topographic surfaces are examined. The theoretical description of topographic features and properties, such as the drainage area, channel lines and watershed divides, can be broadly adopted to develop and test the numerical algorithms currently used in digital terrain analysis for the computation of the drainage area, as well as for the theoretical analysis of landscape evolution and stability.
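The numerical algorithms this theory is meant to inform typically compute drainage area by routing each cell's accumulated area to its steepest-descent (D8) neighbor. A minimal sketch on a toy elevation grid follows; this is the standard numerical recipe with illustrative values, not the paper's analytical formulation:

```python
import numpy as np

# Toy elevation grid: a bowl-like valley draining to the bottom-center cell.
z = np.array([[9, 8, 7, 8, 9],
              [8, 6, 5, 6, 8],
              [7, 5, 3, 5, 7],
              [6, 4, 2, 4, 6],
              [5, 3, 1, 3, 5]], dtype=float)

rows, cols = z.shape
area = np.ones_like(z)           # each cell contributes its own unit area

# Visit cells from highest to lowest so a cell's accumulated area is final
# before it is passed downslope (the usual D8 accumulation order).
order = np.dstack(np.unravel_index(np.argsort(-z, axis=None), z.shape))[0]
for r, c in order:
    best, target = 0.0, None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                drop = (z[r, c] - z[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
    if target is not None:           # route everything to the steepest-
        area[target] += area[r, c]   # descent neighbor; minima keep theirs
print(area)                          # the outlet accumulates all 25 cells
```

The singular points discussed in the paper correspond to cells where this discrete routing is ambiguous or breaks down (flats, pits, divides).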
Jones, Malia; Pebley, Anne R.
2014-01-01
Research on neighborhood effects has focused largely on residential neighborhoods, but people are exposed to many other places in the course of their daily lives—at school, at work, when shopping, and so on. Thus, studies of residential neighborhoods consider only a subset of the social-spatial environment affecting individuals. In this article, we examine the characteristics of adults’ “activity spaces”—spaces defined by locations that individuals visit regularly, in Los Angeles County, California. Using geographic information system (GIS) methods, we define activity spaces in two ways and estimate their socioeconomic characteristics. Our research has two goals. First, we determine whether residential neighborhoods represent the social conditions to which adults are exposed in the course of their regular activities. Second, we evaluate whether particular groups are exposed to a broader or narrower range of social contexts in the course of their daily activities. We find that activity spaces are substantially more heterogeneous in terms of key social characteristics, compared to residential neighborhoods. However, the characteristics of both home neighborhoods and activity spaces are closely associated with individual characteristics. Our results suggest that most people experience substantial segregation across the range of spaces in their daily lives, not just at home. PMID:24719273
Dual keel Space Station payload pointing system design and analysis feasibility study
NASA Technical Reports Server (NTRS)
Smagala, Tom; Class, Brian F.; Bauer, Frank H.; Lebair, Deborah A.
1988-01-01
A Space Station attached Payload Pointing System (PPS) has been designed and analyzed. The PPS is responsible for maintaining fixed payload pointing in the presence of disturbances applied to the Space Station. The payload considered in this analysis is the Solar Optical Telescope. System performance is evaluated via digital time simulations by applying various disturbance forces to the Space Station. The PPS meets the Space Station articulated pointing requirement for all disturbances except Shuttle docking and some centrifuge cases.
Olivas uses a laser ranging device on STS-117 Space Shuttle Atlantis
2007-06-10
S117-E-06953 (10 June 2007) --- Astronaut John "Danny" Olivas, STS-117 mission specialist, aims a laser range finder through one of the overhead windows on the aft flight deck of the Space Shuttle Atlantis as it approaches the International Space Station. This instrument is a regularly called-on tool during rendezvous operations with the station. The subsequent docking will allow the STS-117 astronauts and the Expedition 15 crew to team up for several days of key tasks in space.
On the existence of touch points for first-order state inequality constraints
NASA Technical Reports Server (NTRS)
Seywald, Hans; Cliff, Eugene M.
1993-01-01
The appearance of touch points in state constrained optimal control problems with general vector-valued control is studied. Under the assumption that the Hamiltonian is regular, touch points for first-order state inequalities are shown to exist only under very special conditions. In many cases of practical importance these conditions can be used to exclude touch points a priori without solving an optimal control problem. The results are demonstrated on a simple example.
ERIC Educational Resources Information Center
Taylor, James A.; Farace, Richard V.
This paper argues that people who interact regularly and repetitively among themselves create a conjoint information space wherein common values, attitudes, and beliefs arise through the process of information transmission among the members in the space. Three major hypotheses concerning informal communication groups in organizations were tested…
Spin squeezing as an indicator of quantum chaos in the Dicke model.
Song, Lijun; Yan, Dong; Ma, Jian; Wang, Xiaoguang
2009-04-01
We study spin squeezing, an intrinsic quantum property, in the Dicke model without the rotating-wave approximation. We show that the spin squeezing can reveal the underlying chaotic and regular structures in phase space given by a Poincaré section, namely, it acts as an indicator of quantum chaos. Spin squeezing vanishes after a very short time for an initial coherent state centered in a chaotic region, whereas it persists over a longer time for the coherent state centered in a regular region of the phase space. We also study the distribution of the mean spin directions when quantum dynamics takes place. Finally, we discuss relations among spin squeezing, bosonic quadrature squeezing, and two-qubit entanglement in the dynamical processes.
Ionospheric-thermospheric UV tomography: 1. Image space reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dymond, K. F.; Budzien, S. A.; Hei, M. A.
2017-03-01
We present and discuss two algorithms of the class known as Image Space Reconstruction Algorithms (ISRAs) that we are applying to the solution of large-scale ionospheric tomography problems. ISRAs have several desirable features that make them useful for ionospheric tomography. In addition to producing nonnegative solutions, ISRAs are amenable to sparse-matrix formulations and are fast, stable, and robust. We present the results of our studies of two types of ISRA: the Least Squares Positive Definite and the Richardson-Lucy algorithms. We compare their performance to the Multiplicative Algebraic Reconstruction and Conjugate Gradient Least Squares algorithms. We then discuss the use of regularization in these algorithms and present our new approach based on regularization to a partial differential equation.
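Of the ISRAs discussed, the Richardson-Lucy iteration is the simplest to sketch: a multiplicative update that keeps the estimate nonnegative at every step, one of the desirable features noted above. The sketch below applies it to a small synthetic linear system rather than a real UV tomography geometry; the matrix and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small synthetic linear problem y = A @ x with a nonnegative solution,
# standing in for the projection matrix and the density profile.
A = rng.uniform(0, 1, size=(40, 20))
x_true = rng.uniform(0, 1, size=20)
y = A @ x_true

# Richardson-Lucy: a multiplicative (ISRA-type) update; starting from a
# positive guess, every iterate stays nonnegative by construction.
x = np.ones(20)
col = A.sum(axis=0)
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / col

res = np.abs(A @ x - y).max()
print(f"max residual after 500 iterations: {res:.4f}")
```

Regularization, as discussed in the closing sentence, amounts to modifying this update (or adding a penalty) so noise is not amplified in later iterations.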
Zhu, Chengcheng; Tian, Bing; Chen, Luguang; Eisenmenger, Laura; Raithel, Esther; Forman, Christoph; Ahn, Sinyeob; Laub, Gerhard; Liu, Qi; Lu, Jianping; Liu, Jing; Hess, Christopher; Saloner, David
2018-06-01
To develop and optimize an accelerated, high-resolution (0.5 mm isotropic) 3D black blood MRI technique to reduce scan time for whole-brain intracranial vessel wall imaging. A 3D accelerated T1-weighted fast-spin-echo prototype sequence using compressed sensing (CS-SPACE) was developed at 3T. Both the acquisition parameters [echo train length (ETL), under-sampling factor] and the reconstruction parameters (regularization parameter, number of iterations) were first optimized in 5 healthy volunteers. Ten patients with a variety of intracranial vascular disease presentations (aneurysm, atherosclerosis, dissection, vasculitis) were imaged with SPACE and the optimized CS-SPACE, pre- and post-Gd contrast. Lumen/wall area, wall-to-lumen contrast ratio (CR), enhancement ratio (ER), sharpness, and qualitative scores (1-4) by two radiologists were recorded. The optimized CS-SPACE protocol has ETL 60, 20% k-space under-sampling, and a 0.002 regularization factor with 20 iterations. In patient studies, CS-SPACE and conventional SPACE had comparable image scores both pre- (3.35 ± 0.85 vs. 3.54 ± 0.65, p = 0.13) and post-contrast (3.72 ± 0.58 vs. 3.53 ± 0.57, p = 0.15), but the CS-SPACE acquisition was 37% faster (6:48 vs. 10:50). CS-SPACE agreed with SPACE for lumen/wall area, ER measurements and sharpness, but marginally reduced the CR. In the evaluation of intracranial vascular disease, CS-SPACE provides a substantial reduction in scan time compared to conventional T1-weighted SPACE while maintaining good image quality.
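The reconstruction side of such a compressed-sensing acquisition can be illustrated, in heavily simplified form, by ISTA on a synthetic sparse signal. The vendor CS-SPACE reconstruction is proprietary and works on Fourier-sampled k-space, so this is only a generic sketch: `lam` plays the role of the regularization factor and the loop the role of the iteration count; all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse signal and a random measurement matrix standing in for the
# under-sampled k-space acquisition (real MRI uses Fourier sampling).
n, m = 200, 80
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: a gradient step on ||Ax - y||^2 followed by soft-thresholding.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):
    x = x - (A.T @ (A @ x - y)) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

err = np.linalg.norm(x - x_true)
print(f"reconstruction error: {err:.3f}")
```

The 37% scan-time saving comes from acquiring fewer measurements (here m < n) and letting the sparsity-regularized reconstruction fill in the rest.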
Mathematical Modeling of the Geometric Regularity in Proteus Mirabilis Colonies
NASA Astrophysics Data System (ADS)
Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration
The Proteus mirabilis colony exhibits striking spatiotemporal regularity: concentric ring patterns with alternating high and low bacterial density in space, and periodic repetition of growth and swarming phases in time. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreate the periodic growth and spread patterns, suggesting that nutrient constraints and cell-density regulation may be sufficient to explain the spatiotemporal periodicity of P. mirabilis colonies. We further verify this result using a cell-based model.
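A minimal numerical sketch of such a threshold-gated reaction-diffusion model is given below. All parameter values, thresholds, and the flux-form gating are my own illustrative choices, not those of the paper; the point is only that density- and nutrient-thresholded motility plus nutrient-limited growth yields an expanding colony.

```python
import numpy as np

def step(u, n, D=0.5, g=0.2, k=0.1, u_th=0.3, n_th=0.2, dt=0.1):
    """One explicit Euler step: cells diffuse only across edges touching a
    'mobile' site (density and nutrient above threshold), grow at rate g
    while nutrient is above threshold, and consume nutrient at rate k."""
    mobile = (u > u_th) & (n > n_th)
    D_edge = D * (mobile | np.roll(mobile, -1))        # edge (i, i+1) active?
    flux = -D_edge * (np.roll(u, -1) - u)              # J_{i+1/2}, periodic grid
    u_new = u - dt * (flux - np.roll(flux, 1)) + dt * g * u * (n > n_th)
    n_new = n - dt * k * u * (n > n_th)
    return np.maximum(u_new, 0.0), np.maximum(n_new, 0.0)

L = 200
u = np.zeros(L)
u[L // 2 - 2 : L // 2 + 3] = 1.0                       # small central inoculum
n = np.ones(L)                                          # uniform initial nutrient
for _ in range(500):
    u, n = step(u, n)
print(u.sum(), (u > 0.01).sum())
```

The flux form keeps the diffusion step conservative even with the motility gate, so all growth in total density comes from the reaction term.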
NASA Technical Reports Server (NTRS)
Weger, R. C.; Lee, J.; Zhu, Tianri; Welch, R. M.
1992-01-01
The current controversy over regularity vs. clustering in cloud fields is examined by means of analysis and simulation studies based upon nearest-neighbor cumulative distribution statistics. It is shown that the Poisson representation of random point processes is superior to pseudorandom-number-generated models, and that pseudorandom-number-generated models bias the observed nearest-neighbor statistics towards regularity. Interpretation of these nearest-neighbor statistics is discussed for many cases of superpositions of clustering, randomness, and regularity. A detailed analysis is carried out of cumulus cloud field spatial distributions based upon Landsat, AVHRR, and Skylab data, showing that, when both large and small clouds are included in the cloud field distributions, the cloud field always has a strong clustering signal.
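As a concrete illustration of the statistic involved (my own sketch, not the authors' code): for a homogeneous Poisson process of intensity λ, the nearest-neighbor cumulative distribution is G(r) = 1 − exp(−λπr²), and a simulated random field can be checked against it.

```python
import numpy as np

def nn_distances(pts):
    """Distance from each point to its nearest neighbor (brute force)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

rng = np.random.default_rng(1)
n = 2000
pts = rng.random((n, 2))                       # ~Poisson on the unit square
lam = float(n)                                 # intensity (unit area)

r = np.linspace(0.0, 0.05, 50)
G_emp = (nn_distances(pts)[:, None] <= r[None, :]).mean(axis=0)
G_csr = 1.0 - np.exp(-lam * np.pi * r**2)      # complete spatial randomness
print(np.abs(G_emp - G_csr).max())             # small, up to edge effects
```

Clustered fields push the empirical G above the Poisson curve at short range; regular fields push it below, which is the signature the abstract's analysis exploits.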
Sudden emergence of q-regular subgraphs in random graphs
NASA Astrophysics Data System (ADS)
Pretti, M.; Weigt, M.
2006-07-01
We investigate the computationally hard problem of whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_{3-reg} ≈ 3.3546 and immediately contain about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_{3-core} ≈ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.
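The q-core mentioned here, unlike the q-regular subgraph problem, is computable by simple greedy peeling, and any q-regular subgraph must lie inside it. A small sketch (illustrative only, not the paper's cavity-method computation):

```python
from collections import defaultdict

def q_core(edges, q):
    """Vertex set of the q-core: repeatedly delete vertices of degree < q.
    Any q-regular subgraph is contained in the q-core, which is why the
    two percolation thresholds can be compared at all."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    queue = [v for v in adj if len(adj[v]) < q]
    while queue:
        v = queue.pop()
        for w in adj.pop(v, ()):
            if w in adj:                 # skip neighbors already deleted
                adj[w].discard(v)
                if len(adj[w]) < q:
                    queue.append(w)
    return set(adj)

# K4 (vertices 0-3) is 3-regular; the pendant path 3-4-5 is peeled away.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(sorted(q_core(edges, 3)))          # [0, 1, 2, 3]
```

The peeling order does not matter: the q-core is the unique maximal subgraph of minimum degree at least q.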
Kayano, Mitsunori; Matsui, Hidetoshi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2016-04-01
High-throughput time course expression profiles have become available over the last decade due to developments in measurement techniques and devices. Functional data analysis, which treats smoothed curves instead of originally observed discrete data, is effective for time course expression profiles in terms of dimension reduction, robustness, and applicability to data measured at few and irregularly spaced time points. However, statistical methods of differential analysis for time course expression profiles have not been well established. We propose a functional logistic model based on elastic net regularization (F-Logistic) in order to identify genes with dynamic alterations in case/control studies. We employ a mixed model as a smoothing method to obtain functional data; F-Logistic is then applied to time course profiles measured at few and irregularly spaced time points. We evaluate the performance of F-Logistic in comparison with another functional data approach, the functional ANOVA test (F-ANOVA), by applying both methods to real and synthetic time course data sets. The real data sets consist of time course gene expression profiles for long-term effects of recombinant interferon β on disease progression in multiple sclerosis. F-Logistic distinguishes dynamic alterations, which cannot be found by competitive approaches such as F-ANOVA, in case/control studies based on time course expression profiles. F-Logistic is effective for time-dependent biomarker detection, diagnosis, and therapy. © The Author 2015. Published by Oxford University Press. All rights reserved.
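For scalar (non-functional) features, the elastic-net-penalized logistic model underlying this approach can be sketched with a proximal-gradient loop. This is a generic stand-in written for illustration, with made-up data and parameters; the paper's F-Logistic operates on basis-expansion coefficients of smoothed curves, not raw scalars.

```python
import numpy as np

def elastic_net_logistic(X, y, lam=0.1, alpha=0.5, lr=0.1, n_iter=500):
    """Logistic regression with the elastic-net penalty
    lam * (alpha * ||w||_1 + (1 - alpha)/2 * ||w||_2^2),
    fitted by ISTA-style proximal gradient descent: a smooth gradient step
    on the log-loss plus L2 term, then soft-thresholding for the L1 term."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        z = X @ w
        grad = X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / n + lam * (1 - alpha) * w
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam * alpha, 0.0)
    return w

# Synthetic case/control data: only the first two features carry signal.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))
w_true = np.array([2.0, -2.0] + [0.0] * 8)
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)
w_hat = elastic_net_logistic(X, y)
print(np.round(w_hat, 2))
```

The L1 part of the penalty zeroes out uninformative coefficients, which is the sparsity mechanism F-Logistic relies on for gene selection.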
Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.
Peng, Yong; Lu, Bao-Liang; Wang, Suhang
2015-05-01
Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among existing graph-based learning models, low-rank representation (LRR) is a very competitive one, which has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated based on the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR explicitly takes the local manifold structure of the data into consideration, which can be identified by the geometric sparsity idea; specifically, the local tangent space of each data point is sought by solving a sparse representation objective. The graph depicting the relationships among data points can then be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines both the global information emphasized by the low-rank property and the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hui, F.; Qing, Z.; Gengen, Q.; Fagen, P.; Dawei, B.; Baotun, G.; Jingqi, L.; Changwang, L.; Xiaochang, L.; Meixing, H.; Bingrui, D.
2012-12-01
The tectonics of Northeast China, comprising the Siberian plate, the North China fossil plate, and the Pacific Plate, are very complicated. To study the electrical structure of this region, project SinoProbe-01-04, "Experimental study of a 'standard monitoring network' of continental EM parameters in Northeast China", established a 4°×4° regional MT array covering the whole of Northeast China (Fig. 1). To ensure that the MT data observed at each standard point are representative, a cross profile was laid out in the field, with the standard point at its center and eight auxiliary measuring points around it; measuring points along the same direction were spaced 20 km apart, and the observation time was more than 120 hours at the standard point and more than 24 hours at each auxiliary station. Broadband MT equipment (V5-2000) and long-period MT equipment (LEMI-417M) were used together at each standard point, so that ultra-wideband electromagnetic signals from 320 Hz to 1/10000 Hz could be acquired by combining the field data observed by the two instruments. Eleven MT standard-network control points, with 99 physical measuring points in total, were completed in 2010, and the observations were repeated in 2011 to confirm their reliability. Based on the observed results, this article gives a preliminary analysis of the electrical structure of each major tectonic element in Northeast China, including the regularity of the distribution of the regional electrical spindle, the distribution characteristics of vertical conductivity, the development of the low-resistivity layer in the crust, and the depth of the high-conductivity layer in the upper mantle. It was found that the electrical features of the major tectonic elements in Northeast China differ from one another and are electrically heterogeneous in the cross direction. Fig. 1: MT array observation sites.
Pituitary tumor-transforming gene 1 regulates the patterning of retinal mosaics
Keeley, Patrick W.; Zhou, Cuiqi; Lu, Lu; Williams, Robert W.; Melmed, Shlomo; Reese, Benjamin E.
2014-01-01
Neurons are commonly organized as regular arrays within a structure, and their patterning is achieved by minimizing the proximity between like-type cells, but the molecular mechanisms regulating this process have, until recently, been unexplored. We performed a forward genetic screen using recombinant inbred (RI) strains derived from two parental A/J and C57BL/6J mouse strains to identify genomic loci controlling the spacing of cholinergic amacrine cells, a subclass of retinal interneurons. We found conspicuous variation in mosaic regularity across these strains and mapped a sizeable proportion of that variation to a locus on chromosome 11 that was subsequently validated with a chromosome substitution strain. Using a bioinformatics approach to narrow the list of potential candidate genes, we identified pituitary tumor-transforming gene 1 (Pttg1) as the most promising. Expression of Pttg1 was significantly different between the two parental strains and correlated with mosaic regularity across the RI strains. We identified a seven-nucleotide deletion in the Pttg1 promoter in the C57BL/6J mouse strain and confirmed a direct role for this motif in modulating Pttg1 expression. Analysis of Pttg1 KO mice revealed a reduction in the mosaic regularity of cholinergic amacrine cells, as well as horizontal cells, but not of two other retinal cell types. Together, these results implicate Pttg1 in the regulation of homotypic spacing between specific types of retinal neurons. The genetic variant identified creates a binding motif for the transcriptional activator protein 1 complex, which may be instrumental in driving differential expression of downstream processes that participate in neuronal spacing. PMID:24927528
Semilocal momentum-space regularized chiral two-nucleon potentials up to fifth order
NASA Astrophysics Data System (ADS)
Reinert, P.; Krebs, H.; Epelbaum, E.
2018-05-01
We introduce new semilocal two-nucleon potentials up to fifth order in the chiral expansion. We employ a simple regularization approach for the pion exchange contributions which (i) maintains the long-range part of the interaction, (ii) is implemented in momentum space, and (iii) can be straightforwardly applied to regularize many-body forces and current operators. We discuss in detail the two-nucleon contact interactions at fourth order and demonstrate that three terms out of fifteen used in previous calculations can be eliminated via suitably chosen unitary transformations. The removal of the redundant contact terms results in a drastic simplification of the fits to scattering data and leads to interactions which are much softer (i.e., more perturbative) than our recent semilocal coordinate-space regularized potentials. Using the pion-nucleon low-energy constants from matching pion-nucleon Roy-Steiner equations to chiral perturbation theory, we perform a comprehensive analysis of nucleon-nucleon scattering and the deuteron properties up to fifth chiral order and study the impact of the leading F-wave two-nucleon contact interactions which appear at sixth order. The resulting chiral potentials at fifth order lead to an outstanding description of the proton-proton and neutron-proton scattering data from the self-consistent Granada-2013 database below the pion production threshold, which is significantly better than for any other chiral potential. For the first time, the chiral potentials match in precision and even outperform the available high-precision phenomenological potentials, while the number of adjustable parameters is, at the same time, reduced by about 40%. Last but not least, we perform a detailed error analysis and, in particular, quantify for the first time the statistical uncertainties of the fourth- and the considered sixth-order contact interactions.
Kavak, Sermin Tukel; Bumin, Gonca
2009-01-01
The aim of this study was to investigate the effect of different ergonomic desk designs and pencil grip patterns on handwriting performance in children with hemiplegic cerebral palsy and healthy children. Twenty-six children with left hemiplegic cerebral palsy and 32 typically developing children were included. The Minnesota Handwriting Assessment was used to evaluate handwriting abilities. Pencil grip posture was assessed with a 5-point rating system. Specifically designed adjustable desks and chairs were used. Four different desk types were used in this study: 1) regular desk; 2) regular desk with a 20 degrees inclination; 3) cutout desk; and 4) cutout desk with a 20 degrees inclination. Statistically significant differences were found between both groups in terms of handwriting ability (p < 0.001). There was no significant difference regarding grip scores between children with cerebral palsy and healthy children (p > 0.05). We found that children with cerebral palsy had better performance using cutout desks in relation to rate and spacing parameters of handwriting (p < 0.05). The results of our study demonstrated that the pencil grip patterns have no effect on the handwriting parameters in both children with cerebral palsy and healthy children. It is recommended that a cutout table be used to provide more upper extremity support in handwriting activities for students with cerebral palsy.
UV-IR mixing in nonassociative Snyder ϕ4 theory
NASA Astrophysics Data System (ADS)
Meljanac, Stjepan; Mignemi, Salvatore; Trampetic, Josip; You, Jiangyang
2018-03-01
Using a quantization of the nonassociative and noncommutative Snyder ϕ4 scalar field theory in a Hermitian realization, we present in this article analytical formulas for the momentum-conserving part of the one-loop two-point function of this theory in D-, 4-, and 3-dimensional Euclidean spaces, which are exact with respect to the noncommutative deformation parameter β. We prove that these integrals are regularized by the Snyder deformation. These results indicate that the Snyder deformation does partially regularize the UV divergences of the undeformed theory, as was proposed decades ago. Furthermore, it is observed that different nonassociative ϕ4 products can generate different momentum-conserving integrals. Finally, and most importantly, a logarithmic infrared divergence emerges in one of these interaction terms. We then qualitatively analyze a sample momentum-nonconserving integral and show that it could exhibit an IR divergence too. Therefore, infrared divergences should exist, in general, in the Snyder ϕ4 theory. We consider infrared divergences in the limit p → 0 as UV/IR mixings induced by nonassociativity, since they are associated with the matching UV divergence in the zero-momentum limit and appear in specific types of nonassociative ϕ4 products. We also discuss the extrapolation of the Snyder deformation parameter β to negative values, as well as certain general properties of one-loop quantum corrections in the Snyder ϕ4 theory at the zero-momentum limit.
NASA Astrophysics Data System (ADS)
King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe
2018-03-01
Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.
On regularizing the MCTDH equations of motion
NASA Astrophysics Data System (ADS)
Meyer, Hans-Dieter; Wang, Haobin
2018-03-01
The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
NASA Astrophysics Data System (ADS)
Gulamsarwar, Syazwani; Salleh, Zabidin
2017-08-01
The purpose of this paper is to generalize the notion of Adler's topological entropy along with several of its fundamental properties. A function f : X → Y is said to be an R-map if f^{-1}(V) is regular open in X for every regular open set V in Y. Thus, we initiate a notion of topological nearly entropy for topological R-dynamical systems, based on nearly compactness relative to the space, by using R-maps.
Supersymmetric black holes with lens-space topology.
Kunduri, Hari K; Lucietti, James
2014-11-21
We present a new supersymmetric, asymptotically flat, black hole solution to five-dimensional supergravity. It is regular on and outside an event horizon of lens-space topology L(2,1). It is the first example of an asymptotically flat black hole with lens-space topology. The solution is characterized by a charge, two angular momenta, and a magnetic flux through a noncontractible disk region ending on the horizon, with one constraint relating these.
Frequency guided methods for demodulation of a single fringe pattern.
Wang, Haixia; Kemao, Qian
2009-08-17
Phase demodulation from a single fringe pattern is a challenging but important task. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. A demodulation path guided by the local frequency, from the highest to the lowest, is applied in both methods. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods are demonstrated. (c) 2009 Optical Society of America
Terminal attractors in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
Loss surface of XOR artificial neural networks
NASA Astrophysics Data System (ADS)
Mehta, Dhagash; Zhao, Xiaojun; Bernal, Edgar A.; Wales, David J.
2018-05-01
Training an artificial neural network involves an optimization process over the landscape defined by the cost (loss) as a function of the network parameters. We explore these landscapes using optimization tools developed for potential energy landscapes in molecular science. The number of local minima and transition states (saddle points of index one), as well as the ratio of transition states to minima, grow rapidly with the number of nodes in the network. There is also a strong dependence on the regularization parameter, with the landscape becoming more convex (fewer minima) as the regularization term increases. We demonstrate that in our formulation, stationary points for networks with Nh hidden nodes, including the minimal network required to fit the XOR data, are also stationary points for networks with Nh+1 hidden nodes when all the weights involving the additional node are zero. Hence, smaller networks trained on XOR data are embedded in the landscapes of larger networks. Our results clarify certain aspects of the classification and sensitivity (to perturbations in the input data) of minima and saddle points for this system, and may provide insight into dropout and network compression.
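The embedding property described, that a stationary point for Nh hidden nodes persists for Nh+1 nodes when all weights of the extra node are zero, rests on the simpler fact that such a padded network computes exactly the same function. A tiny sketch (tanh units; all names and data are illustrative, not the authors' setup):

```python
import numpy as np

def mlp(x, W1, b1, w2, b2):
    """Single-hidden-layer network with tanh units."""
    return np.tanh(x @ W1 + b1) @ w2 + b2

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
W1, b1 = rng.standard_normal((2, 2)), rng.standard_normal(2)
w2, b2 = rng.standard_normal(2), rng.standard_normal()

# Embed: append a hidden node whose incoming and outgoing weights are zero.
W1_big = np.hstack([W1, np.zeros((2, 1))])
b1_big = np.append(b1, 0.0)
w2_big = np.append(w2, 0.0)

print(np.allclose(mlp(X, W1, b1, w2, b2), mlp(X, W1_big, b1_big, w2_big, b2)))
# prints True
```

Since the loss is unchanged along this embedding, gradients with respect to the shared weights coincide, which is why smaller networks' stationary points reappear in the larger landscape.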
NASA Astrophysics Data System (ADS)
Lukose, Rajan Mathew
The World Wide Web and the Internet are rapidly expanding spaces, of great economic and social significance, which offer an opportunity to study many phenomena, often previously inaccessible, on an unprecedented scale and resolution with relative ease. These phenomena are measurable on the scale of tens of millions of users and hundreds of millions of pages. By virtue of nearly complete electronic mediation, it is possible in principle to observe the time and ``spatial'' evolution of nearly all choices and interactions. This cyber-space therefore provides a view into a number of traditional research questions (from many academic disciplines) and creates its own new phenomena accessible for study. Despite its largely self-organized and dynamic nature, a number of robust quantitative regularities are found in the aggregate statistics of interesting and useful quantities. These regularities can be understood with the help of models that draw on ideas from statistical physics as well as other fields such as economics, psychology and decision theory. This thesis develops models that can account for regularities found in the statistics of Internet congestion and user surfing patterns and discusses some practical consequences.
Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K
2010-06-01
In this brief we have proposed multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, being an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from the feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machines (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also compare the performance of VVRKFA with stratified random sampling and sub-sampling.
We'll Meet Again: Revealing Distributional and Temporal Patterns of Social Contact
Pachur, Thorsten; Schooler, Lael J.; Stevens, Jeffrey R.
2014-01-01
What are the dynamics and regularities underlying social contact, and how can contact with the people in one's social network be predicted? In order to characterize distributional and temporal patterns underlying contact probability, we asked 40 participants to keep a diary of their social contacts for 100 consecutive days. Using a memory framework previously used to study environmental regularities, we predicted that the probability of future contact would follow in systematic ways from the frequency, recency, and spacing of previous contact. The distribution of contact probability across the members of a person's social network was highly skewed, following an exponential function. As predicted, it emerged that future contact scaled linearly with frequency of past contact, proportionally to a power function with recency of past contact, and differentially according to the spacing of past contact. These relations emerged across different contact media and irrespective of whether the participant initiated or received contact. We discuss how the identification of these regularities might inspire more realistic analyses of behavior in social networks (e.g., attitude formation, cooperation). PMID:24475073
Curved noncommutative tori as Leibniz quantum compact metric spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latrémolière, Frédéric, E-mail: frederic@math.du.edu
We prove that curved noncommutative tori are Leibniz quantum compact metric spaces and that they form a continuous family over the group of invertible matrices with entries in the image of the quantum tori for the conjugation by modular conjugation operator in the regular representation, when this group is endowed with a natural length function.
ERIC Educational Resources Information Center
Whitehead, Linda C.; Ginsberg, Stacey I.
1999-01-01
Presents suggestions for creating family-like programs in large child-care centers in three areas: (1) physical environment, incorporating cozy spaces, beauty, and space for family interaction; (2) caregiving climate, such as sharing home photographs, and serving meals family style; and (3) family involvement, including regular conversations with…
Thomas uses laser range finder during rendezvous ops
2001-03-10
STS102-E-5064 (10 March 2001) --- Astronaut Andrew S.W. Thomas, STS-102 mission specialist, uses a laser ranging device on aft flight deck of the Space Shuttle Discovery. This instrument is a regularly called-on tool during rendezvous operations with the International Space Station (ISS). The photograph was recorded with a digital still camera.
Visualization of Sound Waves Using Regularly Spaced Soap Films
ERIC Educational Resources Information Center
Elias, F.; Hutzler, S.; Ferreira, M. S.
2007-01-01
We describe a novel demonstration experiment for the visualization and measurement of standing sound waves in a tube. The tube is filled with equally spaced soap films whose thickness varies in response to the amplitude of the sound wave. The thickness variations are made visible based on optical interference. The distance between two antinodes is…
Control of functional differential equations to target sets in function space
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kent, G. A.
1971-01-01
Optimal control of systems governed by functional differential equations of retarded and neutral type is considered. Problems with function space initial and terminal manifolds are investigated. Existence of optimal controls, regularity, and bang-bang properties are discussed. Necessary and sufficient conditions are derived, and several solved examples which illustrate the theory are presented.
Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping
2014-03-10
Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley's K function, and Monte Carlo simulation were used to investigate human settlement point patterns. Remote sensing tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-clustered patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to characteristics of human settlements such as area, density and shape, should be taken into consideration in the future, and that land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be key to human settlement studies and planning.
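For reference, a naive (edge-uncorrected) Ripley's K estimator is only a few lines; under complete spatial randomness K(r) ≈ πr², with clustering pushing it above and regularity below. This sketch and its data are illustrative only, not the study's GIS workflow:

```python
import numpy as np

def ripley_k(pts, r, area):
    """Naive Ripley's K estimator (no edge correction): expected number of
    further points within distance r of a typical point, divided by the
    intensity. Under complete spatial randomness, K(r) is close to pi*r^2."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    lam = n / area
    return np.array([(d <= ri).sum() / (n * lam) for ri in r])

rng = np.random.default_rng(4)
pts = rng.random((500, 2))                 # CSR sample on the unit square
r = np.array([0.02, 0.05, 0.1])
K = ripley_k(pts, r, area=1.0)
print(K / (np.pi * r**2))                  # near 1 under CSR; edge effects shrink it
```

Monte Carlo envelopes, as used in the study, are obtained by recomputing K over many simulated CSR point sets and comparing the observed curve against the simulated range.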
NASA Astrophysics Data System (ADS)
Tan, Zhi-Zhong
2017-03-01
We study the problem of two-point resistance in a non-regular m × n cylindrical network with a zero-resistor axis and two arbitrary boundaries by means of the Recursion-Transform method. This is a new problem, never solved before; the Green's function technique and the Laplacian matrix approach are invalid in this case. A disordered network with arbitrary boundaries is a basic model in many physical or real-world systems; however, the exact calculation of the resistance of a binary resistor network is important but difficult in the case of arbitrary boundaries, since a boundary acts like a wall or trap that affects the behavior of a finite network. In this paper we obtain a general resistance formula for a non-regular m × n cylindrical network, which is composed of a single summation. Further, the current distribution is given explicitly as a byproduct of the method. As applications, several interesting results are derived by making special cases of the general formula. Supported by the Natural Science Foundation of Jiangsu Province under Grant No. BK20161278
The Regularity of Optimal Irrigation Patterns
NASA Astrophysics Data System (ADS)
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. The aim of this paper is to prove the regularity of paths (rivers, branches, ...) when the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. As a consequence, all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
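The concave cost s^α is what makes branching optimal: because (s1 + s2)^α < s1^α + s2^α, merging two flows onto one shared wide route is cheaper than running them separately. A toy numerical sketch (the edge list and α are illustrative):

```python
def gilbert_energy(edges, alpha=0.5):
    """Total cost of a branched network: sum of s**alpha * length per edge.

    edges: iterable of (flow, length).  With 0 < alpha < 1 the cost is
    concave in the flow, so shared wide routes are favored over parallel
    narrow ones, which is what produces tree-like branching.
    """
    return sum((s ** alpha) * ell for s, ell in edges)
```

A Y-shaped network carrying two unit flows over a shared trunk of length 1 plus two branches of length 1 costs 2^0.5 + 2 ≈ 3.41, cheaper than two separate paths of length 2 each, which cost 4.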
An overview of unconstrained free boundary problems
Figalli, Alessio; Shahgholian, Henrik
2015-01-01
In this paper, we present a survey concerning unconstrained free boundary problems of the type [display equation not reproduced], where B1 is the unit ball, Ω is an unknown open set, F1 and F2 are elliptic operators (admitting regular solutions), and the underlying function space is to be specified in each case. Our main objective is to discuss a unifying approach to the optimal regularity of solutions to the above matching problems, and to list several open problems in this direction. PMID:26261367
Core surface magnetic field evolution 2000-2010
NASA Astrophysics Data System (ADS)
Finlay, C. C.; Jackson, A.; Gillet, N.; Olsen, N.
2012-05-01
We present new dedicated core surface field models spanning the decade from 2000.0 to 2010.0. These models, called gufm-sat, are based on CHAMP, Ørsted and SAC-C satellite observations along with annual differences of processed observatory monthly means. A spatial parametrization of spherical harmonics up to degree and order 24 and a temporal parametrization of sixth-order B-splines with 0.25 yr knot spacing are employed. Models were constructed by minimizing an absolute deviation measure of misfit along with measures of spatial and temporal complexity at the core surface. We investigate traditional quadratic or maximum entropy regularization in space, and second or third time derivative regularization in time. Entropy regularization allows the construction of models with approximately constant spectral slope at the core surface, avoiding both the divergence characteristic of the crustal field and the unrealistic rapid decay typical of quadratic regularization at degrees above 12. We describe in detail aspects of the models that are relevant to core dynamics. Secular variation and secular acceleration are found to be of lower amplitude under the Pacific hemisphere where the core field is weaker. Rapid field evolution is observed under the eastern Indian Ocean associated with the growth and drift of an intense low latitude flux patch. We also find that the present axial dipole decay arises from a combination of subtle changes in the southern hemisphere field morphology.
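The temporal parametrization by sixth-order (degree-5) B-splines can be sketched with the Cox-de Boor recursion. This is a generic illustration with a clamped knot vector at the 0.25 yr spacing quoted above, not the gufm-sat modeling code:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """All B-spline basis values at x via the Cox-de Boor recursion."""
    # degree 0: indicator of each half-open knot span
    B = np.array([1.0 if knots[i] <= x < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    for k in range(1, degree + 1):
        Bk = np.zeros(len(B) - 1)
        for i in range(len(Bk)):
            if knots[i + k] > knots[i]:          # guard repeated knots
                Bk[i] += (x - knots[i]) / (knots[i + k] - knots[i]) * B[i]
            if knots[i + k + 1] > knots[i + 1]:
                Bk[i] += ((knots[i + k + 1] - x)
                          / (knots[i + k + 1] - knots[i + 1]) * B[i + 1])
        B = Bk
    return B  # length = len(knots) - degree - 1
```

On a clamped knot vector the basis functions sum to one at every interior epoch, so a field coefficient time series is just a weighted sum of these local, smooth basis functions.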
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference between the proved theorem and its classical analogue of the same name is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, a time-optimal control problem.
NASA Technical Reports Server (NTRS)
1995-01-01
The crew patch of STS-73, the second flight of the United States Microgravity Laboratory (USML-2), depicts the Space Shuttle Columbia in the vastness of space. In the foreground are the classic regular polyhedrons that were investigated by Plato and later Euclid. The Pythagoreans were also fascinated by these symmetrical three-dimensional objects, whose faces are congruent regular polygons. The tetrahedron, the cube, the octahedron and the icosahedron were each associated with the Natural Elements of that time: fire (represented on this mission by combustion science), Earth (crystallography), and air and water (fluid physics). An additional icon, shown as the infinity symbol, was added to further convey the discipline of fluid mechanics. The shape of the emblem represents a fifth polyhedron, the dodecahedron, which the Pythagoreans thought corresponded to a fifth element representing the cosmos.
Phase space localization for anti-de Sitter quantum mechanics and its zero curvature limit
NASA Technical Reports Server (NTRS)
Elgradechi, Amine M.
1993-01-01
Using techniques of geometric quantization and SO₀(3,2)-coherent states, a notion of optimal localization on phase space is defined for the quantum theory of a massive and spinning particle in anti-de Sitter space-time. It is shown that this notion disappears in the zero curvature limit, providing one with a concrete example of the regularizing character of the constant (nonzero) curvature of the anti-de Sitter space-time. As a byproduct, a geometric characterization of masslessness is obtained.
Space - A unique environment for process modeling R&D
NASA Technical Reports Server (NTRS)
Overfelt, Tony
1991-01-01
Process modeling, the application of advanced computational techniques to simulate real processes as they occur in regular use, e.g., welding, casting and semiconductor crystal growth, is discussed. Using the low-gravity environment of space will accelerate the technical validation of the procedures and enable extremely accurate determinations of the many necessary thermophysical properties. Attention is given to NASA's centers for the commercial development of space: joint ventures of universities, industries, and government agencies to study the unique attributes of space that offer potential for applied R&D and eventual commercial exploitation.
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs the preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
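The nesting idea can be sketched in miniature. The sketch below assumes a pseudo-Huber regularizer as the smooth non-quadratic term and omits the paper's FFT preconditioner and convergence safeguards: the outer loop is a preconditioned nonlinear CG (PR+), and the preconditioner is an inner linear CG that approximately solves a fixed quadratic model.

```python
import numpy as np

def pnl_cg(A, b, lam=0.1, delta=1.0, outer=100, inner=10, tol=1e-8):
    """Nonlinear CG preconditioned by an inner linear CG solve.

    Objective: 0.5*||Ax - b||^2 + lam * sum(pseudo_huber(x_i)),
    i.e. a quadratic data term with a smooth non-quadratic regularizer.
    """
    def grad(x):
        return A.T @ (A @ x - b) + lam * x / np.sqrt(1 + (x / delta) ** 2)

    def f(x):
        return (0.5 * np.sum((A @ x - b) ** 2)
                + lam * delta ** 2 * np.sum(np.sqrt(1 + (x / delta) ** 2) - 1))

    def precond(g):
        # inner linear CG: approximately solve (A^T A + lam I) z = g
        z = np.zeros_like(g); r = g.copy(); p = r.copy(); rs = r @ r
        for _ in range(inner):
            Hp = A.T @ (A @ p) + lam * p
            a = rs / (p @ Hp)
            z += a * p; r -= a * Hp
            rs_new = r @ r
            if rs_new < 1e-16:
                break
            p = r + (rs_new / rs) * p; rs = rs_new
        return z

    x = np.zeros(A.shape[1])
    g = grad(x); z = precond(g); d = -z
    for _ in range(outer):
        t, fx, gd = 1.0, f(x), g @ d          # backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * gd and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        z_new = precond(g_new)
        beta = max(0.0, (g_new - g) @ z_new / (g @ z))   # preconditioned PR+
        d = -z_new + beta * d
        g, z = g_new, z_new
    return x
```

The inner solve plays the role of the preconditioner exactly as described above; swapping in a circulant (FFT-diagonalizable) approximation of AᵀA would accelerate the inner iteration in the spirit of the paper.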
NASA Astrophysics Data System (ADS)
Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred
During the HRS (High Resolution Stereo) Scientific Assessment Program the French space agency CNES delivered data sets from the HRS camera system with high precision ancillary data. Two test data sets from this program were evaluated: one is located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from the positioning and attitude systems, were extracted. A dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are subsequently interpolated to a DSM on a regular grid. DEM filtering methods were also applied and evaluations carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias on the order of 5-20 m with respect to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently.
This method improved the absolute accuracy of the resulting orthoimages and DSM significantly.
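The forward intersection step mentioned above can be sketched as a small least-squares problem: given the camera centers and the matched viewing directions, solve for the object point that minimizes the summed squared perpendicular distances to all rays. This is a generic triangulation sketch with illustrative inputs, not DLR's production stereo software.

```python
import numpy as np

def forward_intersection(centers, directions):
    """Least-squares intersection of viewing rays from several images.

    Each ray is c_i + t * d_i.  Summing the projectors orthogonal to each
    ray gives a 3x3 normal system whose solution is the closest 3-D point.
    """
    A = np.zeros((3, 3)); rhs = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P; rhs += P @ c
    return np.linalg.solve(A, rhs)
```

With exact, non-parallel rays the recovered point matches the true object point; with noisy matched parallaxes it returns the least-squares compromise that is then interpolated into the DSM grid.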
NASA Astrophysics Data System (ADS)
Antoniadou, Kyriaki I.; Libert, Anne-Sophie
2018-06-01
We consider a planetary system consisting of two primaries, namely a star and a giant planet, and a massless secondary, say a terrestrial planet or an asteroid, which moves under their gravitational attraction. We study the dynamics of this system in the framework of the circular and elliptic restricted three-body problem, when the motion of the giant planet describes circular and elliptic orbits, respectively. Originating from the circular family, families of symmetric periodic orbits in the 3/2, 5/2, 3/1, 4/1 and 5/1 mean-motion resonances are continued in the circular and the elliptic problems. New bifurcation points from the circular to the elliptic problem are found for each of the above resonances, and thus new families continued from these points are herein presented. Stable segments of periodic orbits were found at high eccentricity values of already known families that were previously considered wholly unstable. Moreover, new isolated families (not continued from bifurcation points) are computed in the elliptic restricted problem. The majority of the new families mainly consists of stable periodic orbits at high eccentricities. The families of the 5/1 resonance are investigated for the first time in the restricted three-body problems. We highlight the effect of stable periodic orbits on the formation of stable regions in their vicinity and unveil the boundaries of such domains in phase space by computing maps of dynamical stability. The long-term stable evolution of terrestrial planets or asteroids depends on the existence of regular domains in their dynamical neighbourhood in phase space, which could host them for long time spans. This study, besides other celestial architectures that can be efficiently modelled by the circular and elliptic restricted problems, is particularly appropriate for the discovery of terrestrial companions among the single-giant planet systems discovered so far.
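The circular restricted problem underlying these computations can be written down compactly. Below is a minimal planar CR3BP integrator in the rotating frame (the mass ratio and initial state are illustrative, and a fixed-step RK4 stands in for the specialized integrators used in such studies), with the Jacobi constant as a conservation check:

```python
import numpy as np

MU = 0.001  # mass ratio of the giant planet (illustrative value)

def crtbp_rhs(s):
    """Planar circular restricted three-body problem, rotating frame."""
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y)            # distance to the star
    r2 = np.hypot(x - 1 + MU, y)        # distance to the planet
    ax = x + 2 * vy - (1 - MU) * (x + MU) / r1**3 - MU * (x - 1 + MU) / r2**3
    ay = y - 2 * vx - (1 - MU) * y / r1**3 - MU * y / r2**3
    return np.array([vx, vy, ax, ay])

def jacobi(s):
    """Jacobi constant C = 2*Omega - v^2, conserved along trajectories."""
    x, y, vx, vy = s
    r1 = np.hypot(x + MU, y); r2 = np.hypot(x - 1 + MU, y)
    return x**2 + y**2 + 2 * (1 - MU) / r1 + 2 * MU / r2 - vx**2 - vy**2

def rk4(s, dt, steps):
    """Fixed-step fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = crtbp_rhs(s)
        k2 = crtbp_rhs(s + 0.5 * dt * k1)
        k3 = crtbp_rhs(s + 0.5 * dt * k2)
        k4 = crtbp_rhs(s + dt * k3)
        s = s + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s
```

Maps of dynamical stability of the kind described above are built by integrating grids of such initial conditions and recording a chaos indicator for each; conservation of the Jacobi constant is the basic sanity check on the integration.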
Near Sun Free-Space Optical Communications from Space
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Khatri, F.; Boroson, D.
2006-01-01
Free-space optical communications offers expanded data return capacity from probes distributed throughout the solar system and beyond. Space-borne and Earth-based optical transceivers used for communicating optically will periodically encounter near-Sun pointing. This results in an increase in the scattered background light flux, often contributing to degraded link performance. The varying duration of near-Sun pointing link operations relative to the location of space probes is discussed in this paper. The impact of near-Sun pointing on link performance for a direct-detection photon-counting communications system is analyzed for both ground- and space-based Earth receivers. Finally, the impact of near-Sun pointing on spaceborne optical transceivers is discussed.
Ozkan, Selda; Cha, Gihoon; Mazare, Anca; Schmuki, Patrik
2018-05-11
In the present work, we report on the use of organized TiO2 nanotube (NT) layers with a regular intertube spacing for the growth of highly defined α-Fe2O3 nano-needles in the interspace. These α-Fe2O3-decorated TiO2 NTs are then explored for Li-ion battery applications and compared to classic close-packed (CP) NTs that are decorated with various amounts of nanoscale α-Fe2O3. We show that NTs with tube-to-tube spacing allow uniform decoration of individual NTs with regular arrangements of hematite nano-needles. The tube spacing also facilitates electrolyte penetration as well as yielding better ion diffusion. While bare CP NTs show a higher areal capacity of 71 μAh cm-2 compared to bare spaced NTs with 54 μAh cm-2, the hierarchical decoration with a secondary metal oxide, α-Fe2O3, remarkably enhances the Li-ion battery performance. Namely, spaced NTs with α-Fe2O3 decoration have an areal capacity of 477 μAh cm-2, i.e. nearly ∼8 times higher. However, the areal capacity of CP NTs with α-Fe2O3 decoration saturates at 208 μAh cm-2, i.e. is limited to an ∼3-fold increase.
Connecting Archimedean and Non-Archimedean AdS/CFT
NASA Astrophysics Data System (ADS)
Parikh, Sarthak
This thesis develops a non-Archimedean analog of the usual Archimedean anti-de Sitter (AdS)/conformal field theory (CFT) correspondence. AdS space gets replaced by a Bruhat-Tits tree, which is a regular graph with no cycles. The boundary of the Bruhat-Tits tree is described by an unramified extension of the p-adic numbers, which replaces the real valued Euclidean vector space on which the CFT lives. Conformal transformations on the boundary act as linear fractional transformations. In the first part of the thesis, correlation functions are computed in the simple case of massive, interacting scalars in the bulk. They are found to be surprisingly similar to standard holographic correlation functions down to precise numerical coefficients, when expressed in terms of local zeta functions. Along the way, we show that like in the Archimedean case, CFT conformal blocks are dual to geodesic bulk diagrams, which are bulk exchange diagrams with the bulk points of integration restricted to certain geodesics. Other than these intriguing similarities, significant simplifications also arise. Notably, all derivatives disappear from the operator product expansion, and the conformal block decomposition of the four-point function. Finally, a minimal bulk action is constructed on the Bruhat-Tits tree for a single scalar field with nearest neighbor interactions, which reproduces the two-, three-, and four-point functions of the free O(N) model. In the second part, the p-adic O(N) model is studied at the interacting fixed point. Leading order results for the anomalous dimensions of low dimension operators are obtained in two separate regimes: the epsilon-expansion and the large N limit. Remarkably, formulae for anomalous dimensions in the large N limit are valid equally for Archimedean and non-Archimedean field theories, when expressed in terms of local zeta functions. 
Finally, higher derivative versions of the O(N) model in the Archimedean case are considered, where the general formula for anomalous dimensions obtained earlier is still valid. Analogies with two-derivative theories hint at the existence of some interesting new field theories in four real Euclidean dimensions.
Automatic Georeferencing of Astronaut Auroral Photography: Providing a New Dataset for Space Physics
NASA Astrophysics Data System (ADS)
Riechert, Maik; Walsh, Andrew P.; Taylor, Matt
2014-05-01
Astronauts aboard the International Space Station (ISS) have taken tens of thousands of photographs showing the aurora in high temporal and spatial resolution. The use of these images in research, though, is limited, as they often lack accurate pointing and scale information. In this work we develop techniques and software libraries to automatically georeference such images, and provide a time- and location-searchable database and website of those images. Aurora photographs very often include a visible starfield due to the necessarily long camera exposure times. We extend the proof of concept of Walsh et al. (2012), who used starfield recognition software, Astrometry.net, to reconstruct the pointing and scale information. Previously a manual pre-processing step, the starfield can now in most cases be separated from Earth and spacecraft structures successfully using image recognition. Once the pointing and scale of an image are known, latitudes and longitudes can be calculated for each pixel corner for an assumed auroral emission height. As part of this work, an open-source Python library is developed which automates the georeferencing process and aids in visualization tasks. The library facilitates the resampling of the resulting data from an irregular to a regular coordinate grid at a given pixel-per-degree density, supports the export of data in CDF and NetCDF formats, and generates polygons for drawing graphs and stereographic maps. In addition, the THEMIS all-sky imager web archive has been included as a first transparently accessible imaging source, which in this case is useful when drawing maps of ISS passes over North America. The database and website are in development and will use the Python library as their base. Through this work, georeferenced auroral ISS photography is made available as a continuously extended and easily accessible dataset.
This provides potential not only for new studies on the aurora australis, as there are few all-sky imagers in the southern hemisphere, but also for multi-point observations of the aurora borealis by combining with THEMIS and other imager arrays.
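The irregular-to-regular resampling step can be sketched with simple binning. Assuming the per-pixel latitudes and longitudes have already been computed from the pointing solution, the sketch below averages samples onto a regular grid at a chosen pixel-per-degree density (a simplification; the library's actual resampling is not reproduced here):

```python
import numpy as np

def bin_to_grid(lats, lons, values, ppd=4):
    """Average irregular samples onto a regular grid (ppd pixels per degree)."""
    i = np.floor((lats - lats.min()) * ppd).astype(int)
    j = np.floor((lons - lons.min()) * ppd).astype(int)
    nlat, nlon = i.max() + 1, j.max() + 1
    acc = np.zeros((nlat, nlon)); cnt = np.zeros((nlat, nlon))
    np.add.at(acc, (i, j), values)      # unbuffered accumulation per cell
    np.add.at(cnt, (i, j), 1)
    # cells with no samples are marked NaN
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
```

The resulting regular array is what export formats such as CDF and NetCDF expect, and empty cells stay NaN so downstream plots can mask them.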
From Discrete Space-Time to Minkowski Space: Basic Mechanisms, Methods and Perspectives
NASA Astrophysics Data System (ADS)
Finster, Felix
This survey article reviews recent results on fermion systems in discrete space-time and corresponding systems in Minkowski space. After a basic introduction to the discrete setting, we explain a mechanism of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure. As methods to study the transition between discrete space-time and Minkowski space, we describe a lattice model for a static and isotropic space-time, outline the analysis of regularization tails of vacuum Dirac sea configurations, and introduce a Lorentz invariant action for the masses of the Dirac seas. We mention the method of the continuum limit, which makes it possible to analyze interacting systems. Open problems are discussed.
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method as an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules such as the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing the regularization into the overall optimization problem. Based on this novel approach, estimates of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
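The idea of folding the unknown source coordinates and the regularization into a single optimization problem can be illustrated with a toy GA. Everything below is illustrative (a 1-D stand-in forward model, truncation selection, uniform crossover, Gaussian mutation, elitism), not the GRACE point-mass inversion code:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(pop, t, y, lam=1e-3):
    """Negative regularized misfit for a toy 1-D point-mass model.

    Each individual is [magnitude m, position p, depth d]; the toy forward
    model predicts y(t) = m / sqrt((t - p)^2 + d^2).  The regularization
    term is folded directly into the objective instead of being set by a
    separate heuristic such as the L-curve.
    """
    m, p, d = pop[:, 0:1], pop[:, 1:2], np.abs(pop[:, 2:3]) + 0.1
    pred = m / np.sqrt((t[None, :] - p) ** 2 + d ** 2)
    return -np.mean((pred - y[None, :]) ** 2, axis=1) - lam * m[:, 0] ** 2

def evolve(t, y, pop_size=60, gens=80, sigma=0.2):
    pop = rng.normal(0, 1, (pop_size, 3))
    for _ in range(gens):
        fit = fitness(pop, t, y)
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]       # truncation selection
        a = parents[rng.integers(0, len(parents), pop_size)]
        b = parents[rng.integers(0, len(parents), pop_size)]
        mask = rng.random((pop_size, 3)) < 0.5      # uniform crossover
        children = np.where(mask, a, b) + rng.normal(0, sigma, (pop_size, 3))
        children[0] = pop[order[0]]                 # elitism: keep the best
        pop = children
    return pop[np.argmax(fitness(pop, t, y))]
```

Because the depth parameter is part of each individual, the GA searches positions and depths jointly with the magnitudes, mirroring the motivation stated in the abstract.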
NASA Astrophysics Data System (ADS)
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering is difficult because of the complex configuration of terrain features. Classical filtering algorithms rely on the careful tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset, and to overcome the sensitivity to parameters, in this study we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and it gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Through the progressive densification strategy, regularization and self-adaptation, both improved performance and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by the ISPRS, the ASF performs best among the compared filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
2012-12-07
Acquired by NASA's Terra spacecraft, this image shows Heilongjiang, a province of China located in the northeastern part of the country. Farms are small, long, narrow rectangles surrounding regularly spaced villages.
A Report on Women West Point Graduates Assuming Nontraditional Roles.
ERIC Educational Resources Information Center
Yoder, Janice D.; Adams, Jerome
In 1980 the first women graduated from the military and college training program at West Point. To investigate the progress of both male and female graduates as they assume leadership roles in the regular Army, 35 women and 113 men responded to a survey assessing career involvement and planning, commitment and adjustment, and satisfaction.…
orbit around L2, the second Lagrange point of the Earth-Sun system, which is about 1.5 million kilometres from Earth. As the spacecraft orbits L2, it makes one rotation about the Sun per year. The spacecraft spin axis has to be rotated at the same rate in order to remain Sun pointed. This is achieved by making regular manoeuvres that will
NASA Technical Reports Server (NTRS)
Lin, Richard Y.; Mann, Kenneth E.; Laskin, Robert A.; Sirlin, Samuel W.
1987-01-01
Technology assessment is performed for pointing systems that accommodate payloads of large mass and large dimensions. Related technology areas are also examined; these include active thermal lines or power cables across gimbals, new materials for increased passive damping, tethered pointing, and inertially reacting pointing systems. Conclusions, issues and concerns, and recommendations regarding the status and development of large pointing systems for space applications are made based on these assessments.
Twistor interpretation of slice regular functions
NASA Astrophysics Data System (ADS)
Altavilla, Amedeo
2018-01-01
Given a slice regular function f : Ω ⊂ H → H, with Ω ∩ R ≠ ∅, it is possible to lift it to surfaces in the twistor space CP3 of S4 ≃ H ∪ {∞} (see Gentili et al., 2014). In this paper we show that the same result is true if one removes the hypothesis Ω ∩ R ≠ ∅ on the domain of the function f. Moreover, we find that if a surface S ⊂ CP3 contains the image of the twistor lift of a slice regular function, then S has to be ruled by lines. Starting from these results we find all the projective classes of algebraic surfaces up to degree 3 in CP3 that contain the lift of a slice regular function. In addition, we extend and further explore the so-called twistor transform, that is, a curve in Gr2(C4) which, given a slice regular function, returns the arrangement of lines on which its lift lies. With the explicit expressions of the twistor lift and of the twistor transform of a slice regular function, we exhibit the set of slice regular functions whose twistor transform describes a rational line inside Gr2(C4), showing the role of slice regular functions not defined on R. At the end we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.
Well-posedness of the Prandtl equation with monotonicity in Sobolev spaces
NASA Astrophysics Data System (ADS)
Chen, Dongxiang; Wang, Yuxi; Zhang, Zhifei
2018-05-01
By using the paralinearization technique, we prove the well-posedness of the Prandtl equation for monotonic data in an anisotropic Sobolev space with exponential weight and low regularity. The proof is elementary and is thus expected to provide a new possible approach to the zero-viscosity limit problem of the Navier-Stokes equations with the no-slip boundary condition.
Neural Networks Based Approach to Enhance Space Hardware Reliability
NASA Technical Reports Server (NTRS)
Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.
2011-01-01
This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.
Encoding Dissimilarity Data for Statistical Model Building.
Wahba, Grace
2010-12-01
We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005), A framework for kernel regularization with application to protein clustering, Proceedings of the National Academy of Sciences 102, 12332-1233; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009), Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models, Proceedings of the National Academy of Sciences 106, 8128-8133; F. Lu, Y. Lin and G. Wahba, Robust manifold unfolding with kernel regularization, TR 1008, Department of Statistics, University of Wisconsin-Madison.
Taniguchi, Hideki; Akiyama, Masako; Gomi, Ikuko; Kimura, Mamiko
2015-01-01
In the present study, we defined the state of pre-dehydration (PD) as the suspected loss of body fluids, not accompanied by subjective symptoms, where the serum osmotic pressure ranges from 292 to 300 mOsm/kg・H2O. The goal of this study was to develop a PD assessment sheet based on the results of sensitivity and specificity testing among elderly individuals. We evaluated the serum osmotic pressure in 70 subjects >65 years of age who regularly visited an elderly-care institution. We then determined the associations between the serum osmotic pressure and various dehydration-related diagnostic factors identified in our previous study. Risk factors for dehydration were evaluated using a logistic regression analysis and allotted points according to the odds ratio. PD was confirmed in 15 subjects (21.4%) using measurements of the serum osmotic pressure. We developed a PD assessment sheet that consisted of six items: (1) Female gender (4 points), (2) BMI≥25 kg/m2 (5 points), (3) Diuretics (6 points), (4) Laxatives (2 points), (5) Dry skin (2 points) and (6) A desire to consume cold drinks or foods (2 points). The cutoff value at which the risk of PD was high was set at 9 points (total of 21 points) (sensitivity 0.73, specificity 0.82; P<0.001). In this study, we found that 21.4% of the elderly subjects had PD. Using these data, we developed an effective noninvasive tool for detecting PD among elderly individuals.
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops at the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results than the traditional pixel-based inversion. [Figure caption: Principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. (A) Spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10; (B) 'soil trajectories' for five soil brightness values and three leaf angles; (C) the ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point; (D) object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding 3×3 window. The black dots (plus the rectangle = central pixel) represent the hypothetical positions of nine pixels within a 3×3 gliding window.] Assuming that over short distances (~1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil such that the resulting 'soil trajectory' best fits the nine measured pixels. [Figure caption: Ground-measured vs. retrieved LAI values for three crops. Left: proposed object-based approach; right: pixel-based inversion.]
Hydrograph Shape Controls Channel Morphology and Organization in a Sand-Gravel Flume
NASA Astrophysics Data System (ADS)
Hempel, L. A.; Grant, G.; Hassan, M. A.; Eaton, B. C.
2016-12-01
A fundamental research question in fluvial geomorphology is to understand what flows shape river channels. Historically, the prevailing view has been that channel dimensions adjust to a so-termed "dominant discharge", which is often approximated as the bankfull flow. But using a single flow to reference the geomorphic effectiveness of an entire flow regime discounts many observations showing that different flows control different channel processes. Some flows entrain fine sediment, some entrain the full size distribution of bed sediment; some destabilize or build bars, some erode the banks, and so forth. To explore the relation between the full flow regime and channel morphology, we conducted a series of flume experiments to examine how hydrographs with different shapes, durations, and magnitudes result in different degrees of channel organization, which we define in terms of the regularity, spacing and architecture of self-formed channel features, such as bed patches, geometry and spacing of bedforms, and channel planform. Our experiments were run in a 12 m long adjustable-width flume that developed a self-formed meandering, pool-riffle pattern. We found that hydrograph shape does control channel organization. In particular, channels formed by hydrographs with slower rising limbs and broader peaks were more organized than those formed by flashier hydrographs. To become organized, hydrographs needed to exceed a minimum flow threshold, defined by the intensity of sediment transport, below which the channel lacked bedforms and a regular meander pattern. Above an upper flow threshold, bars became disorganized and the channel planform transitioned towards braiding. Field studies of channels with different flow regimes but located in a similar physiographic setting support our experimental findings.
Taken together, this work points to the importance of the hydrograph as a fundamental control on channel morphology, and offers the prospect of better understanding how changing hydrologic regimes, either through climate, land use, or dams, translates into geomorphic changes.
78 FR 34557 - Establishment of Class E Airspace; Sanibel, FL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-10
... the FAA found that the heliport coordinates were incorrectly listed as point in space coordinates; and point in space coordinates were inadvertently omitted. This action makes the correction. Except for.... Controlled airspace within a 6-mile radius of the point in space coordinates of the heliport is necessary for...
78 FR 33967 - Establishment of Class E Airspace; Captiva, FL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... the FAA found that the heliport coordinates were incorrectly listed as point in space coordinates; and point in space coordinates were inadvertently omitted. This action makes the correction. Except for... Heliport. Controlled airspace within a 6-mile radius of the point in space coordinates of the heliport is...
[Analysis of palpation laws of muscle meridian focus on knee osteoarthritis].
Zhang, Shu-Jian; Zhang, Xiao-Qing; Han, Yu; Li, Chun-Ri; Dong, Bao-Qiang
2012-03-01
To explore the distribution patterns of the proximal and distal foci of muscle meridian regions in knee osteoarthritis patients. Seven hundred and sixty-five knees were selected in 516 cases of knee osteoarthritis. Under the guidance of muscle meridian theory, and with the anatomical features of muscle meridian foci, the frequency and the locations where the proximal and distal foci of muscle meridian regions appeared were determined by palpation. In all, 11 835 proximal focus points of muscle meridian regions and 9455 distal focus points were found by palpation. The percentages of the frequency with which the foci of the muscle meridians of Foot-Yangming, Foot-Taiyang, Foot-Shaoyang and the three foot Yin meridians appeared at points proximal to the knee were 37.1% (4388/11 835), 34.9% (4127/11 835), 9.5% (1129/11 835) and 18.5% (2191/11 835), respectively; the corresponding percentages at points distal to the knee were 24.7% (2333/9455), 25.2% (2380/9455), 28.5% (2700/9455) and 21.6% (2042/9455). The proximal and distal foci of muscle meridians in knee osteoarthritis patients are closely related to anatomical structure and biomechanical characteristics; the study of their distribution patterns provides evidence for the selection of effective treatment points in different clinical acupuncture therapies.
[Evaluating public health in some zoos in Colombia. Phase 1: designing and validating instruments].
Agudelo-Suárez, Angela N; Villamil-Jiménez, Luis C
2009-10-01
Designing and validating instruments for identifying public health problems in some zoological parks in Colombia, thereby allowing them to be evaluated. Four instruments were designed and validated with the participation of five zoos. The instruments were validated regarding appearance, content, sensitivity to change and reliability, and their usefulness was determined. An evaluation scale was created which assigned a maximum of 400 points, with the following evaluation intervals: 350-400 points meant good public health management, 100-349 points regular management and 0-99 points deficient management. The instruments were applied to the five zoos as part of the validation, forming a baseline for future evaluation of public health in them. Four valid and useful instruments were obtained for evaluating public health in zoos in Colombia. The five zoos presented regular public health management. The baseline obtained when validating the instruments led to identifying strengths and weaknesses regarding public health management in the zoos. The instruments obtained evaluated public health management both generally and specifically; they led to diagnosing, identifying, quantifying and scoring zoos in Colombia in terms of public health. The baseline provided a starting point for making comparisons and enabling future follow-up of public health in Colombian zoos.
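The 400-point scale and its three intervals map directly to a classification rule; a minimal sketch (the function name is ours):

```python
def classify_management(score):
    """Map a zoo's total score (0-400) to the article's three
    public-health management categories."""
    if not 0 <= score <= 400:
        raise ValueError("score must be between 0 and 400")
    if score >= 350:
        return "good"
    if score >= 100:
        return "regular"
    return "deficient"
```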
Observational Model for Precision Astrometry with the Space Interferometry Mission
NASA Technical Reports Server (NTRS)
Turyshev, Slava G.; Milman, Mark H.
2000-01-01
The Space Interferometry Mission (SIM) is a space-based 10-m baseline Michelson optical interferometer operating in the visible waveband that is designed to achieve astrometric accuracy in the single digits of the microarcsecond domain. Over a narrow field of view SIM is expected to achieve a mission accuracy of 1 microarcsecond. In this mode SIM will search for planetary companions to nearby stars by detecting the astrometric "wobble" relative to a nearby reference star. In its wide-angle mode, SIM will provide 4 microarcsecond precision absolute position measurements of stars, with parallaxes to comparable accuracy, at the end of its 5-year mission. The expected proper motion accuracy is around 3 microarcsecond/year, corresponding to a transverse velocity of 10 m/s at a distance of 1 kpc. The basic astrometric observable of the SIM instrument is the pathlength delay. This measurement is made by a combination of internal metrology measurements that determine the distance the starlight travels through the two arms of the interferometer, and a measurement of the white light stellar fringe to find the point of equal pathlength. Because this operation requires a non-negligible integration time, the interferometer baseline vector is not stationary over this time period, as its absolute length and orientation are time varying. This paper addresses how the time varying baseline can be "regularized" so that it may act as a single baseline vector for multiple stars, as required for the solution of the astrometric equations.
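The quoted proper-motion accuracy converts to a transverse velocity via the standard astrometric relation v[km/s] = 4.74 · μ["/yr] · d[pc], where 4.74 is 1 au/yr expressed in km/s; for 3 μas/yr at 1 kpc this gives roughly 14 m/s, the same order as the ~10 m/s quoted above.

```python
def transverse_velocity_m_per_s(mu_arcsec_per_yr, distance_pc):
    """Standard astrometric conversion v[km/s] = 4.74 * mu["/yr] * d[pc];
    the factor 4.74 is 1 au/yr expressed in km/s."""
    v_km_s = 4.74 * mu_arcsec_per_yr * distance_pc
    return v_km_s * 1000.0

# 3 microarcseconds/yr at 1 kpc: about 14 m/s, consistent in order of
# magnitude with the value quoted in the abstract.
v = transverse_velocity_m_per_s(3e-6, 1000.0)
```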
NASA Astrophysics Data System (ADS)
Whitchurch, Brandon; Kevrekidis, Panayotis G.; Koukouloyannis, Vassilis
2018-01-01
In this work we study the dynamical behavior of two interacting vortex pairs, each one of them consisting of two point vortices with opposite circulation in the two-dimensional plane. The vortices are considered as effective particles and their interaction can be described in classical mechanics terms. We first construct a Poincaré section, for a typical value of the energy, in order to acquire a picture of the structure of the phase space of the system. We divide the phase space into different regions which correspond to qualitatively distinct motions and we demonstrate their different temporal evolutions in the "real" vortex space. Our main emphasis is on the leapfrogging periodic orbit, around which we identify a region that we term the "leapfrogging envelope" which involves mostly regular motions, such as higher order periodic and quasiperiodic solutions. We also identify the chaotic region of the phase plane surrounding the leapfrogging envelope as well as the so-called walkabout and braiding motions. Varying the energy as our control parameter, we construct a bifurcation tree of the main leapfrogging solution and its instabilities, as well as the instabilities of its daughter branches. We identify the symmetry-breaking instability of the leapfrogging solution (in line with earlier works), and also obtain the corresponding asymmetric branches of periodic solutions. We then characterize their own instabilities (including period doubling ones) and bifurcations in an effort to provide a more systematic perspective towards the types of motions available to this dynamical system.
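The effective-particle description is the classical point-vortex system. A minimal sketch of two coaxial opposite-circulation pairs follows; the initial geometry, circulations, and integration span are illustrative choices of ours, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Circulations of four point vortices: two co-propagating pairs,
# +Gamma above the symmetry axis and -Gamma below it.
GAMMAS = np.array([1.0, -1.0, 1.0, -1.0])

def vortex_rhs(t, state):
    """Classical point-vortex equations of motion: the velocity of
    vortex i is the sum of the velocities induced by the others."""
    x, y = state[:4], state[4:]
    dx, dy = np.zeros(4), np.zeros(4)
    for i in range(4):
        for j in range(4):
            if i == j:
                continue
            rx, ry = x[i] - x[j], y[i] - y[j]
            r2 = rx * rx + ry * ry
            dx[i] -= GAMMAS[j] * ry / (2 * np.pi * r2)
            dy[i] += GAMMAS[j] * rx / (2 * np.pi * r2)
    return np.concatenate([dx, dy])

def vortex_hamiltonian(state):
    """Conserved interaction energy H = -(1/4pi) sum_ij Gi Gj ln r_ij^2."""
    x, y = state[:4], state[4:]
    H = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2
            H -= GAMMAS[i] * GAMMAS[j] * np.log(r2) / (4 * np.pi)
    return H

# Two coaxial pairs of different widths, a classic leapfrogging setup.
state0 = np.array([0.0, 0.0, 0.0, 0.0, 0.5, -0.5, 1.0, -1.0])
sol = solve_ivp(vortex_rhs, (0.0, 10.0), state0, rtol=1e-10, atol=1e-12)
```

The linear impulses (sum of Γ-weighted coordinates) and the Hamiltonian are exact invariants of this system, which makes them convenient sanity checks on any integration.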
On the Five-Moment Hamburger Maximum Entropy Reconstruction
NASA Astrophysics Data System (ADS)
Summy, D. P.; Pullin, D. I.
2018-05-01
We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ_3, μ_4) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ_3, μ_4) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.
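A density of the stated form, the exponential of a quartic polynomial, can be explored directly by quadrature; the coefficients below are illustrative and are not solutions of the five-moment matching equations discussed in the paper.

```python
import numpy as np
from scipy.integrate import quad

def mer_moments(lams, orders=range(5), lim=30.0):
    """Moments of a maximum-entropy density of the five-moment form
    p(x) ~ exp(-(l0 + l1 x + l2 x^2 + l3 x^3 + l4 x^4)), l4 > 0,
    computed by numerical quadrature and normalized by Z."""
    l0, l1, l2, l3, l4 = lams
    p = lambda x: np.exp(-(l0 + l1 * x + l2 * x**2 + l3 * x**3 + l4 * x**4))
    Z = quad(p, -lim, lim)[0]
    return [quad(lambda x: x**k * p(x), -lim, lim)[0] / Z for k in orders]
```

Choosing l2 < 0 with l4 > 0 produces the bimodal structure the abstract mentions (two wells of the quartic exponent), while a tiny l4 perturbing a Gaussian reproduces near-equilibrium behavior.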
Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction
Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.
2016-01-01
X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902
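The key idea above, restricting the nonlocal search to temporal neighbours instead of a full spatial window, can be sketched with nonlocal-means-style weights compared patchwise across frames. This is a generic illustration of the principle; the paper's actual penalty differs in detail, and the parameter names are ours.

```python
import numpy as np

def temporal_nonlocal_weights(frames, t, pos, patch=1, h=0.1):
    """Nonlocal weights for one pixel, with the search restricted to the
    same spatial location in the other time frames: similar frames get
    large weights, dissimilar ones are suppressed."""
    T, H, W = frames.shape
    i, j = pos
    ref = frames[t, i - patch:i + patch + 1, j - patch:j + patch + 1]
    ws = {}
    for s in range(T):
        if s == t:
            continue
        cand = frames[s, i - patch:i + patch + 1, j - patch:j + patch + 1]
        d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
        ws[s] = np.exp(-d2 / (h * h))
    total = sum(ws.values())
    return {s: w / total for s, w in ws.items()}
```

Because the search space is the T-1 other frames rather than an O(W·H) spatial window, the per-pixel cost drops by roughly the ratio of the two, which is the order-of-magnitude saving the abstract claims.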
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial to the physics of image formation and its detection in the presence of noises. The rest of the chapter focuses on the resolution of the inverse problem
A Space Based Solar Power Satellite System
NASA Astrophysics Data System (ADS)
Engel, J. M.; Polling, D.; Ustamujic, F.; Yaldiz, R.; et al.
2002-01-01
(SPoTS) supplying other satellites with energy. SPoTS is due to be commercially viable and operative in 2020. of Technology designed the SPoTS during a full-time design period of six weeks as a third-year final project. The team, organized according to the principles of systems engineering, first conducted a literature study on space wireless energy transfer to select the most suitable candidates for use on the SPoTS. After that, several different system concepts were generated and evaluated, with the most promising concept worked out in greater detail. km altitude. Each SPoTS satellite has a 50 m diameter inflatable solar collector that focuses all received sunlight. The received sunlight is then further redirected by means of four pointing mirrors toward four individual customer satellites. A market-analysis study showed that providing power to geostationary communication satellites during their eclipse would be most beneficial. On arrival at geostationary orbit, the focused beam has expanded to such an extent that its density equals one solar flux. This means that customer satellites can continue to use their regular solar arrays during their eclipse for power generation, resulting in a satellite battery mass reduction. To reach the customer satellites in geostationary orbit, the transmitted energy beams need to be pointed with very high accuracy. Computations showed that for this degree of accuracy, sensors are needed which are not mainstream nowadays. Therefore, further research must be conducted in this area in order to make these high-accuracy pointing systems commercially attractive for use on the SPoTS satellites around 2020. The total 20-year system lifetime cost for 18 SPoTS satellites is estimated at approximately USD 6 billion [FY2001]. In order to compete with traditional battery-based satellite power systems or possible ground-based wireless power transfer systems, the price per kWh for the customer must be significantly lower than the present one.
Based on the expected revenues from about 300 customers, SPoTS needs a significant contribution from public funding to be commercially viable. However, even though the system might seem a huge investment at first, it provides a unique stepping stone for future space-based wireless transfer of energy to the Earth. The public funding is considered an interest-free loan that is due to be paid back over the lifetime of SPoTS. These features make SPoTS very attractive in comparison with other space projects in the same field.
NASA Astrophysics Data System (ADS)
Zhang, Jian-dong; Chen, Bin
2017-01-01
The kinematic space could play a key role in constructing the bulk geometry from the dual CFT. In this paper, we study the kinematic space from geometric points of view, without resorting to differential entropy. We find that the kinematic space could be intrinsically defined in the embedding space. For each oriented geodesic in the Poincaré disk, there is a corresponding point in the kinematic space. This point is the tip of the causal diamond of the disk whose intersection with the Poincaré disk determines the geodesic. In this geometric construction, the causal structure in the kinematic space can be seen clearly. Moreover, we find that every transformation in SL(2,R) leads to a geodesic in the kinematic space. In particular, for a hyperbolic transformation defining a BTZ black hole, it is a timelike geodesic in the kinematic space. We show that the horizon length of the static BTZ black hole can be computed from the geodesic length of corresponding points in the kinematic space. Furthermore, we discuss the fundamental regions in the kinematic space for the BTZ black hole and multi-boundary wormholes.
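The geodesic-to-point correspondence is conventionally written in centre-angle/half-opening-angle coordinates: a boundary-anchored geodesic with ideal endpoints e^{iθ1}, e^{iθ2} maps to the kinematic-space point (θ, α) = ((θ1+θ2)/2, (θ2−θ1)/2). A minimal sketch, using standard textbook normalizations (not necessarily the paper's):

```python
import numpy as np

def poincare_distance(z1, z2):
    """Hyperbolic distance between two points of the Poincare disk
    (unit-curvature normalization), d = 2 artanh|(z1-z2)/(1-conj(z1) z2)|."""
    return 2 * np.arctanh(abs((z1 - z2) / (1 - np.conj(z1) * z2)))

def kinematic_point(theta1, theta2):
    """Map an oriented geodesic with boundary endpoints exp(i*theta1),
    exp(i*theta2) to kinematic-space coordinates (centre angle theta,
    half-opening angle alpha)."""
    return (theta1 + theta2) / 2.0, (theta2 - theta1) / 2.0
```

For example, a diameter of the disk (endpoints θ1 = 0, θ2 = π) sits at half-opening angle α = π/2, the "waist" of the kinematic space.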
Partial regularity of weak solutions to a PDE system with cubic nonlinearity
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Xu, Xiangsheng
2018-04-01
In this paper we investigate regularity properties of weak solutions to a PDE system that arises in the study of biological transport networks. The system consists of a possibly singular elliptic equation for the scalar pressure of the underlying biological network coupled to a diffusion equation for the conductance vector of the network. There are several different types of nonlinearities in the system. Of particular mathematical interest is a term that is a polynomial function of solutions and their partial derivatives and this polynomial function has degree three. That is, the system contains a cubic nonlinearity. Only weak solutions to the system have been shown to exist. The regularity theory for the system remains fundamentally incomplete. In particular, it is not known whether or not weak solutions develop singularities. In this paper we obtain a partial regularity theorem, which gives an estimate for the parabolic Hausdorff dimension of the set of possible singular points.
A local chaotic quasi-attractor in a kicked rotator
NASA Astrophysics Data System (ADS)
Jiang, Yu-Mei; Lu, Yun-Qing; Zhao, Jin-Gang; Wang, Xu-Ming; Chen, He-Sheng; He, Da-Ren
2002-03-01
Recently, Hu et al. reported a diffusion in a special kind of stochastic web observed in a kicked rotator described by a discontinuous but invertible two-dimensional area-preserving map^1. We modified the functional form of the system so that the period of the kicking force becomes different in two parts of the space, and the conservative map becomes both discontinuous and noninvertible. It is found that when the ratio between the two periods becomes smaller or larger than (but near to) 1, the chaotic diffusion in the web transfers to chaotic transients, which are attracted to the elliptic islands that existed inside the holes of the web when the ratio equaled 1. As soon as the islands are reached, the iteration follows the conservative laws exactly. We therefore refer to these elliptic islands as "regular quasi-attractors"^2. When the ratio increases further and becomes far from 1, all the elliptic islands disappear and a local chaotic quasi-attractor appears instead. It attracts the iterations starting from most initial points in the phase space. This behavior may be considered a kind of "confinement" of chaotic motion of a particle. ^1B. Hu et al., Phys.Rev.Lett.,82(1999)4224. ^2J. Wang et al., Phys.Rev.E, 64(2001)026202.
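The paper's discontinuous map is not reproduced in the abstract; as a generic stand-in, the classic Chirikov standard map illustrates what an invertible, area-preserving kicked-rotator map looks like, together with its explicit inverse (the invertibility that the paper's modification destroys).

```python
import numpy as np

def standard_map(theta, p, K=0.9):
    """One step of the Chirikov standard map, the textbook kicked
    rotator on the 2*pi x 2*pi torus. The paper studies a modified,
    discontinuous variant not reproduced here."""
    p_new = (p + K * np.sin(theta)) % (2 * np.pi)
    theta_new = (theta + p_new) % (2 * np.pi)
    return theta_new, p_new

def standard_map_inverse(theta, p, K=0.9):
    """Explicit inverse of the standard map: solve for the previous
    state, demonstrating invertibility."""
    theta_old = (theta - p) % (2 * np.pi)
    p_old = (p - K * np.sin(theta_old)) % (2 * np.pi)
    return theta_old, p_old

def orbit(theta0, p0, n, K=0.9):
    """Iterate the map n times from (theta0, p0); the returned array of
    phase-space points can be scatter-plotted as a Poincare-style portrait."""
    pts = np.empty((n, 2))
    th, p = theta0, p0
    for i in range(n):
        th, p = standard_map(th, p, K)
        pts[i] = th, p
    return pts
```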
Fixed points of contractive mappings in b-metric-like spaces.
Hussain, Nawab; Roshan, Jamal Rezaei; Parvaneh, Vahid; Kadelburg, Zoran
2014-01-01
We discuss topological structure of b-metric-like spaces and demonstrate a fundamental lemma for the convergence of sequences. As an application we prove certain fixed point results in the setup of such spaces for different types of contractive mappings. Finally, some periodic point results in b-metric-like spaces are obtained. Two examples are presented in order to verify the effectiveness and applicability of our main results.
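The fixed-point theorems above generalize the underlying space (b-metric-like rather than metric), not the iteration itself; the computational content is the familiar Picard scheme, sketched here for an ordinary contraction on the real line.

```python
def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Picard iteration x_{n+1} = f(x_n). For a contractive mapping on a
    complete space this converges to the unique fixed point; the paper's
    results extend such convergence to b-metric-like spaces."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")
```

For instance, iterating cos from x0 = 1 converges to the Dottie number, the unique solution of cos x = x.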
Sloan Digital Sky Survey IV: Mapping the Milky Way, nearby galaxies, and the distant universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela
Here, we describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median $z\sim 0.03$). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between $z\sim 0.6$ and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.
Spectral action models of gravity on packed swiss cheese cosmology
NASA Astrophysics Data System (ADS)
Ball, Adam; Marcolli, Matilde
2016-06-01
We present a model of (modified) gravity on spacetimes with fractal structure based on packing of spheres, which are (Euclidean) variants of the packed swiss cheese cosmology models. As the action functional for gravity we consider the spectral action of noncommutative geometry, and we compute its expansion on a space obtained as an Apollonian packing of three-dimensional spheres inside a four-dimensional ball. Using information from the zeta function of the Dirac operator of the spectral triple, we compute the leading terms in the asymptotic expansion of the spectral action. They consist of a zeta regularization of the divergent sum of the leading terms of the spectral actions of the individual spheres in the packing. This accounts for the contribution of points 1 and 3 in the dimension spectrum (as in the case of a 3-sphere). There is an additional term coming from the residue at the additional point in the real dimension spectrum that corresponds to the packing constant, as well as a series of fluctuations coming from log-periodic oscillations, created by the points of the dimension spectrum that are off the real line. These terms detect the fractality of the residue set of the sphere packing. We show that the presence of fractality influences the shape of the slow-roll potential for inflation, obtained from the spectral action. We also discuss the effect of truncating the fractal structure at a certain scale related to the energy scale in the spectral action.
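The sphere packing is generated by the higher-dimensional analogue of Descartes' circle theorem; a two-dimensional illustration (mutually tangent circles rather than the paper's 3-spheres) of the curvature recursion and of partial sums of the packing zeta function ζ(s) = Σ_n r_n^s:

```python
import math

def descartes_fourth(k1, k2, k3):
    """Descartes' circle theorem: the two possible curvatures of a circle
    tangent to three mutually tangent circles of curvatures k1, k2, k3
    (2-d analogue of the sphere-packing construction)."""
    s = k1 + k2 + k3
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

def packing_zeta_partial(curvatures, s, terms):
    """Partial sum of the packing zeta function sum_n r_n^s = sum_n k_n^(-s)
    over the given interior curvatures, largest circles first."""
    return sum(k ** (-s) for k in sorted(curvatures)[:terms])
```

For the classic configuration of curvatures (2, 2, 3), the two Descartes solutions are 15 (a small inscribed circle) and -1 (the enclosing circle), reproducing the well-known (-1, 2, 2, 3) Apollonian gasket. The full sum converges only for s above the packing exponent, which is the extra real point in the dimension spectrum mentioned in the abstract.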
Sloan Digital Sky Survey IV: Mapping the Milky Way, Nearby Galaxies, and the Distant Universe
NASA Astrophysics Data System (ADS)
Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela; Albareti, Franco D.; Allende Prieto, Carlos; Almeida, Andres; Alonso-García, Javier; Anders, Friedrich; Anderson, Scott F.; Andrews, Brett; Aquino-Ortíz, Erik; Aragón-Salamanca, Alfonso; Argudo-Fernández, Maria; Armengaud, Eric; Aubourg, Eric; Avila-Reese, Vladimir; Badenes, Carles; Bailey, Stephen; Barger, Kathleen A.; Barrera-Ballesteros, Jorge; Bartosz, Curtis; Bates, Dominic; Baumgarten, Falk; Bautista, Julian; Beaton, Rachael; Beers, Timothy C.; Belfiore, Francesco; Bender, Chad F.; Berlind, Andreas A.; Bernardi, Mariangela; Beutler, Florian; Bird, Jonathan C.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blomqvist, Michael; Bolton, Adam S.; Boquien, Médéric; Borissova, Jura; van den Bosch, Remco; Bovy, Jo; Brandt, William N.; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Burgasser, Adam J.; Burtin, Etienne; Busca, Nicolás G.; Cappellari, Michele; Delgado Carigi, Maria Leticia; Carlberg, Joleen K.; Carnero Rosell, Aurelio; Carrera, Ricardo; Chanover, Nancy J.; Cherinka, Brian; Cheung, Edmond; Gómez Maqueo Chew, Yilen; Chiappini, Cristina; Doohyun Choi, Peter; Chojnowski, Drew; Chuang, Chia-Hsun; Chung, Haeun; Cirolini, Rafael Fernando; Clerc, Nicolas; Cohen, Roger E.; Comparat, Johan; da Costa, Luiz; Cousinou, Marie-Claude; Covey, Kevin; Crane, Jeffrey D.; Croft, Rupert A. C.; Cruz-Gonzalez, Irene; Garrido Cuadra, Daniel; Cunha, Katia; Damke, Guillermo J.; Darling, Jeremy; Davies, Roger; Dawson, Kyle; de la Macorra, Axel; Dell'Agli, Flavia; De Lee, Nathan; Delubac, Timothée; Di Mille, Francesco; Diamond-Stanic, Aleks; Cano-Díaz, Mariana; Donor, John; Downes, Juan José; Drory, Niv; du Mas des Bourboux, Hélion; Duckworth, Christopher J.; Dwelly, Tom; Dyer, Jamie; Ebelke, Garrett; Eigenbrot, Arthur D.; Eisenstein, Daniel J.; Emsellem, Eric; Eracleous, Mike; Escoffier, Stephanie; Evans, Michael L.; Fan, Xiaohui; Fernández-Alvar, Emma; Fernandez-Trincado, J. 
G.; Feuillet, Diane K.; Finoguenov, Alexis; Fleming, Scott W.; Font-Ribera, Andreu; Fredrickson, Alexander; Freischlad, Gordon; Frinchaboy, Peter M.; Fuentes, Carla E.; Galbany, Lluís; Garcia-Dias, R.; García-Hernández, D. A.; Gaulme, Patrick; Geisler, Doug; Gelfand, Joseph D.; Gil-Marín, Héctor; Gillespie, Bruce A.; Goddard, Daniel; Gonzalez-Perez, Violeta; Grabowski, Kathleen; Green, Paul J.; Grier, Catherine J.; Gunn, James E.; Guo, Hong; Guy, Julien; Hagen, Alex; Hahn, ChangHoon; Hall, Matthew; Harding, Paul; Hasselquist, Sten; Hawley, Suzanne L.; Hearty, Fred; Gonzalez Hernández, Jonay I.; Ho, Shirley; Hogg, David W.; Holley-Bockelmann, Kelly; Holtzman, Jon A.; Holzer, Parker H.; Huehnerhoff, Joseph; Hutchinson, Timothy A.; Hwang, Ho Seong; Ibarra-Medel, Héctor J.; da Silva Ilha, Gabriele; Ivans, Inese I.; Ivory, KeShawn; Jackson, Kelly; Jensen, Trey W.; Johnson, Jennifer A.; Jones, Amy; Jönsson, Henrik; Jullo, Eric; Kamble, Vikrant; Kinemuchi, Karen; Kirkby, David; Kitaura, Francisco-Shu; Klaene, Mark; Knapp, Gillian R.; Kneib, Jean-Paul; Kollmeier, Juna A.; Lacerna, Ivan; Lane, Richard R.; Lang, Dustin; Law, David R.; Lazarz, Daniel; Lee, Youngbae; Le Goff, Jean-Marc; Liang, Fu-Heng; Li, Cheng; Li, Hongyu; Lian, Jianhui; Lima, Marcos; Lin, Lihwai; Lin, Yen-Ting; Bertran de Lis, Sara; Liu, Chao; de Icaza Lizaola, Miguel Angel C.; Long, Dan; Lucatello, Sara; Lundgren, Britt; MacDonald, Nicholas K.; Deconto Machado, Alice; MacLeod, Chelsea L.; Mahadevan, Suvrath; Geimba Maia, Marcio Antonio; Maiolino, Roberto; Majewski, Steven R.; Malanushenko, Elena; Malanushenko, Viktor; Manchado, Arturo; Mao, Shude; Maraston, Claudia; Marques-Chaves, Rui; Masseron, Thomas; Masters, Karen L.; McBride, Cameron K.; McDermid, Richard M.; McGrath, Brianne; McGreer, Ian D.; Medina Peña, Nicolás; Melendez, Matthew; Merloni, Andrea; Merrifield, Michael R.; Meszaros, Szabolcs; Meza, Andres; Minchev, Ivan; Minniti, Dante; Miyaji, Takamitsu; More, Surhud; Mulchaey, John; 
Müller-Sánchez, Francisco; Muna, Demitri; Munoz, Ricardo R.; Myers, Adam D.; Nair, Preethi; Nandra, Kirpal; Correa do Nascimento, Janaina; Negrete, Alenka; Ness, Melissa; Newman, Jeffrey A.; Nichol, Robert C.; Nidever, David L.; Nitschelm, Christian; Ntelis, Pierros; O'Connell, Julia E.; Oelkers, Ryan J.; Oravetz, Audrey; Oravetz, Daniel; Pace, Zach; Padilla, Nelson; Palanque-Delabrouille, Nathalie; Alonso Palicio, Pedro; Pan, Kaike; Parejko, John K.; Parikh, Taniya; Pâris, Isabelle; Park, Changbom; Patten, Alim Y.; Peirani, Sebastien; Pellejero-Ibanez, Marcos; Penny, Samantha; Percival, Will J.; Perez-Fournon, Ismael; Petitjean, Patrick; Pieri, Matthew M.; Pinsonneault, Marc; Pisani, Alice; Poleski, Radosław; Prada, Francisco; Prakash, Abhishek; Queiroz, Anna Bárbara de Andrade; Raddick, M. Jordan; Raichoor, Anand; Barboza Rembold, Sandro; Richstein, Hannah; Riffel, Rogemar A.; Riffel, Rogério; Rix, Hans-Walter; Robin, Annie C.; Rockosi, Constance M.; Rodríguez-Torres, Sergio; Roman-Lopes, A.; Román-Zúñiga, Carlos; Rosado, Margarita; Ross, Ashley J.; Rossi, Graziano; Ruan, John; Ruggeri, Rossana; Rykoff, Eli S.; Salazar-Albornoz, Salvador; Salvato, Mara; Sánchez, Ariel G.; Aguado, D. S.; Sánchez-Gallego, José R.; Santana, Felipe A.; Santiago, Basílio Xavier; Sayres, Conor; Schiavon, Ricardo P.; da Silva Schimoia, Jaderson; Schlafly, Edward F.; Schlegel, David J.; Schneider, Donald P.; Schultheis, Mathias; Schuster, William J.; Schwope, Axel; Seo, Hee-Jong; Shao, Zhengyi; Shen, Shiyin; Shetrone, Matthew; Shull, Michael; Simon, Joshua D.; Skinner, Danielle; Skrutskie, M. 
F.; Slosar, Anže; Smith, Verne V.; Sobeck, Jennifer S.; Sobreira, Flavia; Somers, Garrett; Souto, Diogo; Stark, David V.; Stassun, Keivan; Stauffer, Fritz; Steinmetz, Matthias; Storchi-Bergmann, Thaisa; Streblyanska, Alina; Stringfellow, Guy S.; Suárez, Genaro; Sun, Jing; Suzuki, Nao; Szigeti, Laszlo; Taghizadeh-Popp, Manuchehr; Tang, Baitian; Tao, Charling; Tayar, Jamie; Tembe, Mita; Teske, Johanna; Thakar, Aniruddha R.; Thomas, Daniel; Thompson, Benjamin A.; Tinker, Jeremy L.; Tissera, Patricia; Tojeiro, Rita; Hernandez Toledo, Hector; de la Torre, Sylvain; Tremonti, Christy; Troup, Nicholas W.; Valenzuela, Octavio; Martinez Valpuesta, Inma; Vargas-González, Jaime; Vargas-Magaña, Mariana; Vazquez, Jose Alberto; Villanova, Sandro; Vivek, M.; Vogt, Nicole; Wake, David; Walterbos, Rene; Wang, Yuting; Weaver, Benjamin Alan; Weijmans, Anne-Marie; Weinberg, David H.; Westfall, Kyle B.; Whelan, David G.; Wild, Vivienne; Wilson, John; Wood-Vasey, W. M.; Wylezalek, Dominika; Xiao, Ting; Yan, Renbin; Yang, Meng; Ybarra, Jason E.; Yèche, Christophe; Zakamska, Nadia; Zamora, Olga; Zarrouk, Pauline; Zasowski, Gail; Zhang, Kai; Zhao, Gong-Bo; Zheng, Zheng; Zheng, Zheng; Zhou, Xu; Zhou, Zhi-Min; Zhu, Guangtun B.; Zoccali, Manuela; Zou, Hu
2017-07-01
We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ~ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ~ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.
Motor Imagery and Tennis Serve Performance: The External Focus Efficacy
Guillot, Aymeric; Desliens, Simon; Rouyer, Christelle; Rogowski, Isabelle
2013-01-01
There is now ample evidence that motor imagery (MI) contributes to enhanced motor performance. Previous research also demonstrated that directing athletes' attention to the effects of their movements on the environment is more effective than focusing on the action per se. The present study therefore aimed to evaluate whether adopting an external focus during MI enhances tennis serve performance. Twelve high-level young tennis players were included in a test-retest procedure. The effects of regular training were first evaluated. Then, players underwent an MI intervention during which they mentally focused on ball trajectory and specifically visualized the space above the net where the serve can be successfully hit. Serve performance was evaluated during both a validated serve test and a real match. The main results showed a significant increase in accuracy and velocity during the ecological serve test after MI practice, as well as a significant improvement in successful first serves and points won during the match. The present data therefore confirm the efficacy of MI in combination with physical practice for improving tennis serve performance, and further provide evidence that it is feasible to adopt an external attentional focus during MI. Practical applications are discussed. Key Points Motor imagery contributes to enhanced tennis serve performance. Data provided evidence of the benefits of adopting an external focus of attention during imagery. Results showed significant improvement in successful first serves and points won during a real match. PMID:24149813
Gibbon travel paths are goal oriented.
Asensio, Norberto; Brockelman, Warren Y; Malaivijitnond, Suchinda; Reichard, Ulrich H
2011-05-01
Remembering locations of food resources is critical for animal survival. Gibbons are territorial primates which regularly travel through small and stable home ranges in search of preferred, limited and patchily distributed resources (primarily ripe fruit). They are predicted to profit from an ability to memorize the spatial characteristics of their home range and may increase their foraging efficiency by using a 'cognitive map' either with Euclidean or with topological properties. We collected ranging and feeding data from 11 gibbon groups (Hylobates lar) to test their navigation skills and to better understand gibbons' 'spatial intelligence'. We calculated the locations at which significant travel direction changes occurred using the change-point direction test and found that these locations primarily coincided with preferred fruit sources. Within the limits of biologically realistic visibility distances observed, gibbon travel paths were more efficient in detecting known preferred food sources than a heuristic travel model based on straight travel paths in random directions. Because consecutive travel change-points were far from the gibbons' sight, planned movement between preferred food sources was the most parsimonious explanation for the observed travel patterns. Gibbon travel appears to connect preferred food sources as expected under the assumption of a good mental representation of the most relevant sources in a large-scale space.
NASA Astrophysics Data System (ADS)
Chidananda, H.; Reddy, T. Hanumantha
2017-06-01
This paper presents a natural representation of numerical digits using hand-activity analysis, based on the number of outstretched fingers for each digit in a sequence extracted from a video. The analysis is based on determining a set of six features from a hand image. The most important features used from each frame of the video are the first fingertip from the top, the palm-line, the palm-center, and the valley points between the fingers that lie above the palm-line. Using this work, a user can naturally convey any number of numerical digits in a video with the right hand, the left hand, or both. Each numerical digit ranges from 0 to 9. The hand(s) (right/left/both) used to convey digits can be recognized accurately using the valley points, and with this recognition it can be determined whether the user is right- or left-handed in practice. In this work, the hand(s) and face are first detected using the YCbCr color space, and the face is removed using an ellipse-based method. Then, the hand(s) are analyzed to recognize the activity representing a series of numerical digits in a video. This work uses a pixel-continuity algorithm based on 2D coordinate geometry and does not rely on the usual calculus, contours, convex hulls, or datasets.
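The YCbCr-based skin detection step mentioned above is commonly implemented as fixed chrominance thresholds; a minimal sketch follows (the BT.601 conversion and the Cb/Cr ranges are a widely used heuristic, not values taken from this paper):

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Rough skin-pixel mask via fixed Cb/Cr thresholds in YCbCr space
    (BT.601 conversion). The ranges Cb in [77, 127] and Cr in [133, 173]
    are a common heuristic, not the paper's exact values."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# One skin-like pixel and one saturated blue pixel for illustration.
img = np.array([[[200, 150, 120], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask_ycbcr(img)
```

In a full pipeline the resulting mask would then feed the ellipse-based face removal and the valley-point analysis described in the abstract.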
On the Hardy Space Theory of Compensated Compactness Quantities
NASA Astrophysics Data System (ADS)
Lindberg, Sauli
2017-05-01
We make progress on a problem of Coifman et al. (J Math Pures Appl (9) 72(3):247-286, 1993) by showing that the Jacobian operator J does not map W^{1,n}(R^n, R^n) onto the Hardy space H^1(R^n) for any n ≥ 2. The related question about the surjectivity of J : Ẇ^{1,n}(R^n, R^n) → H^1(R^n) is still open. The second main result and its variants reduce the proof of H^1 regularity of a large class of compensated compactness quantities to an integration by parts or easy arithmetic, and applications are presented. Furthermore, we exhibit a class of nonlinear partial differential operators in which weak sequential continuity is a strictly stronger condition than H^1 regularity, shedding light on another question of Coifman, Lions, Meyer and Semmes.
SIC-POVMS and MUBS: Geometrical Relationships in Prime Dimension
NASA Astrophysics Data System (ADS)
Appleby, D. M.
2009-03-01
The paper concerns Weyl-Heisenberg covariant SIC-POVMs (symmetric informationally complete positive operator valued measures) and full sets of MUBs (mutually unbiased bases) in prime dimension. When represented as vectors in generalized Bloch space, a SIC-POVM forms a (d^2 - 1)-dimensional regular simplex (d being the Hilbert space dimension). By contrast, the generalized Bloch vectors representing a full set of MUBs form d + 1 mutually orthogonal (d - 1)-dimensional regular simplices. In this paper we show that, in the Weyl-Heisenberg case, there are some simple geometrical relationships between the single SIC-POVM simplex and the d + 1 MUB simplices. We go on to give geometrical interpretations of the minimum uncertainty states introduced by Wootters and Sussman, and by Appleby, Dang and Fuchs, and of the fiduciality condition given by Appleby, Dang and Fuchs.
Regularized quasinormal modes for plasmonic resonators and open cavities
NASA Astrophysics Data System (ADS)
Kamandar Dezfouli, Mohsen; Hughes, Stephen
2018-03-01
Optical mode theory and analysis of open cavities and plasmonic particles is an essential component of optical resonator physics, offering considerable insight and efficiency for connecting to classical and quantum optical properties such as the Purcell effect. However, obtaining the dissipative modes in normalized form for arbitrarily shaped open-cavity systems is notoriously difficult, often involving complex spatial integrations, even after performing the necessary full space solutions to Maxwell's equations. The formal solutions are termed quasinormal modes, which are known to diverge in space, and additional techniques are frequently required to obtain more accurate field representations in the far field. In this work, we introduce a finite-difference time-domain technique that can be used to obtain normalized quasinormal modes using a simple dipole-excitation source, and an inverse Green function technique, in real frequency space, without having to perform any spatial integrations. Moreover, we show how these modes are naturally regularized to ensure the correct field decay behavior in the far field, and thus can be used at any position within and outside the resonator. We term these modes "regularized quasinormal modes" and show the reliability and generality of the theory by studying the generalized Purcell factor of dipole emitters near metallic nanoresonators, hybrid devices with metal nanoparticles coupled to dielectric waveguides, as well as coupled cavity-waveguides in photonic crystal slabs. We also directly compare our results with full-dipole simulations of Maxwell's equations without any approximations, and show excellent agreement.
[Cervical tinnitus treated by acupuncture based on "jin" theory: a clinical observation].
Dong, Youkang; Wang, Yi
2016-04-01
To compare the efficacy of acupuncture based on "jin" theory, regular acupuncture and Western medication. A total of 95 cases were divided, using an incomplete randomization method, into a "jin" theory acupuncture group (32 cases), a regular acupuncture group (31 cases) and a medication group (32 cases). Patients in the "jin" theory acupuncture group were treated with acupuncture based on "jin" theory, which included the "gather" and "knot" points on the affected side: positive reaction points, Fengchi (GB 20), Tianrong (SI 17), Tianyou (TE 16) and Yiming (EX-HN 14) as the main acupoints, and Ermen (TE 21), Tinggong (SI 19), Tinghui (GB 2) and Zhigou (TE 6) as the auxiliary acupoints; the treatment was given once a day. Patients in the regular acupuncture group were treated with regular acupuncture at Tinggong (SI 19), Tinghui (GB 2) and Ermen (TE 21) and other matched acupoints based on syndrome differentiation, once a day. Patients in the medication group were treated with oral administration of betahistine mesylate, three times a day. Ten days of treatment were taken as one session in all three groups, and 2 sessions were given in total. The visual analogue scale (VAS), tinnitus handicap inventory (THI) and tinnitus severity assessment scale (TSIS) were evaluated before and after treatment, and the clinical efficacy was compared among the three groups. There were 5 drop-out cases during the study. After the treatment, the VAS, THI and TSIS scores were improved in all three groups (all P < 0.05); the VAS, THI and TSIS scores in the "jin" theory acupuncture group were lower than those in the regular acupuncture group and the medication group (P < 0.05, P < 0.01). The total effective rates were 90.0% (27/30), 80.0% (24/30) and 63.3% (19/30), respectively, being highest in the "jin" theory acupuncture group (P < 0.05, P < 0.01). Acupuncture based on "jin" theory is superior to regular acupuncture and Western medication for cervical tinnitus.
Point-of-Purchase Price and Education Intervention to Reduce Consumption of Sugary Soft Drinks
Chandra, Amitabh; McManus, Katherine D.; Willett, Walter C.
2010-01-01
Objectives. We investigated whether a price increase on regular (sugary) soft drinks and an educational intervention would reduce their sales. Methods. We implemented a 5-phase intervention at the Brigham and Women's Hospital cafeteria in Boston, Massachusetts. After posting existing prices of regular and diet soft drinks and water during baseline, we imposed several interventions in series: a price increase of 35% on regular soft drinks, a reversion to baseline prices (washout), an educational campaign, and a combination price and educational period. We collected data from a comparison site, Beth Israel Deaconess Hospital, also in Boston, for the final 3 phases. Results. Sales of regular soft drinks declined by 26% during the price increase phase. This reduction in sales persisted throughout the study period, with an additional decline of 18% during the combination phase compared with the washout period. Education had no independent effect on sales. Analysis of the comparison site showed no change in regular soft drink sales during the study period. Conclusions. A price increase may be an effective policy mechanism to decrease sales of regular soda. Further multisite studies in varied populations are warranted to confirm these results. PMID:20558801
Point-of-purchase price and education intervention to reduce consumption of sugary soft drinks.
Block, Jason P; Chandra, Amitabh; McManus, Katherine D; Willett, Walter C
2010-08-01
We investigated whether a price increase on regular (sugary) soft drinks and an educational intervention would reduce their sales. We implemented a 5-phase intervention at the Brigham and Women's Hospital cafeteria in Boston, Massachusetts. After posting existing prices of regular and diet soft drinks and water during baseline, we imposed several interventions in series: a price increase of 35% on regular soft drinks, a reversion to baseline prices (washout), an educational campaign, and a combination price and educational period. We collected data from a comparison site, Beth Israel Deaconess Hospital, also in Boston, for the final 3 phases. Sales of regular soft drinks declined by 26% during the price increase phase. This reduction in sales persisted throughout the study period, with an additional decline of 18% during the combination phase compared with the washout period. Education had no independent effect on sales. Analysis of the comparison site showed no change in regular soft drink sales during the study period. A price increase may be an effective policy mechanism to decrease sales of regular soda. Further multisite studies in varied populations are warranted to confirm these results.
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
...regularization seeks the minimum-norm, least-squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory... of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the...
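The snippet above identifies the total variation (TV) norm, the L1 norm of the image gradient, as the appropriate image measure. A minimal sketch of an anisotropic TV norm on a discrete image (illustrative; not the report's code):

```python
import numpy as np

def tv_norm(img):
    """Anisotropic total-variation norm: L1 norm of the forward
    differences of the image in both directions."""
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal gradient
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical gradient
    return dx + dy

# A constant image has zero TV; a vertical step edge contributes its
# jump height once per row it crosses.
flat = np.ones((4, 4))
step = np.zeros((4, 4))
step[:, 2:] = 1.0
```

Minimizing this quantity (rather than an L2 norm) is what favors piecewise-constant images with sharp edges in TV-regularized phase retrieval.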
NASA Astrophysics Data System (ADS)
Reimond, S.; Klinger, B.; Krauss, S.; Mayer-Gürr, T.; Eicker, A.; Zemp, M.
2017-12-01
In recent years, remotely sensed observations have become one of the most ubiquitous and valuable sources of information for glacier monitoring. In addition to altimetry and interferometry data (as observed, e.g., by the CryoSat-2 and TanDEM-X satellites), time-variable gravity field data from the GRACE satellite mission has been used by several authors to assess mass changes in glacier systems. The main challenges in this context are (i) the limited spatial resolution of GRACE, (ii) the attenuation of the gravity signal in space, and (iii) the problem of isolating the glaciological signal from the gravitational signatures detected by GRACE. In order to tackle challenges (i) and (ii), we thoroughly investigate the point-mass modeling technique to represent the local gravity field. Instead of simply evaluating global spherical harmonics, we operate on the normal-equation level and make use of GRACE K-band ranging data (available since April 2002) processed at Graz University of Technology. Assessing such small-scale mass changes from space-borne gravimetric data is an ill-posed problem, which we aim to stabilize by utilizing a genetic-algorithm-based Tikhonov regularization. Concerning issue (iii), we evaluate three different hydrology models (GLDAS, LSDM and WGHM) for validation purposes and for the derivation of error bounds. The non-glaciological signal is calculated for each region of interest and reduced from the GRACE results. We present mass variations of several alpine glacier systems (e.g. the European Alps, Svalbard or Iceland) and compare our results to glaciological observations provided by the World Glacier Monitoring Service (WGMS) and to alternative inversion methods (surface density modeling).
Lima, Tiago; Carvalho, Ágata; Carvalho, Vasco
2012-01-01
ABSTRACT Objectives The aim of this study was to assess the clinical outcomes achieved with Computer-Assisted Design/Computer-Assisted Manufacturing implant abutments in the anterior maxilla. Material and Methods Nineteen patients with a mean age of 41 (range from 26 to 63) years, treated with 21 single-tooth implants and 21 Computer-Assisted Design/Computer-Assisted Manufacturing (CAD/CAM) abutments in the anterior maxillary region, were included in this study. The patients met 4 inclusion criteria: (1) a single-tooth implant in the anterior maxilla, (2) a CAD/CAM abutment, (3) a contralateral natural tooth, and (4) an implant restored and in function for at least 6 months and up to 2 years. Cases without a contact point were excluded. The presence/absence of the interproximal papilla, the inter tooth-implant distance (ITD) and the distance from the base of the contact point to the crestal bone of the adjacent tooth (CPB) were assessed. Results Forty interproximal spaces were evaluated, with an average mesial CPB of 5.65 (SD 1.65) mm and distal CPB of 4.65 (SD 1.98) mm. An average mesial ITD of 2.49 (SD 0.69) mm and an average distal ITD of 1.89 (SD 0.63) mm were achieved. The papilla was present in all the interproximal spaces assessed. Conclusions The restoration of dental implants using CAD/CAM abutments is a predictable treatment with improved aesthetic results. This type of abutment seems to help maintain regular papillary filling despite variations in implant positioning or in the relation of the restoration to the adjacent teeth. PMID:24422016
Partially chaotic orbits in a perturbed cubic force model
NASA Astrophysics Data System (ADS)
Muzzio, J. C.
2017-11-01
Three types of orbits are theoretically possible in autonomous Hamiltonian systems with 3 degrees of freedom: fully chaotic (they obey only the energy integral), partially chaotic (they obey an additional isolating integral besides energy) and regular (they obey two isolating integrals besides energy). Several authors have denied the existence of partially chaotic orbits, however, arguing either that there is a sudden transition from regularity to full chaoticity or that a long enough follow-up of a supposedly partially chaotic orbit would reveal a fully chaotic nature. This situation needs clarification, because partially chaotic orbits might play a significant role in the process of chaotic diffusion. Here we use numerically computed Lyapunov exponents to explore the phase space of a perturbed three-dimensional cubic-force toy model, and a generalization of the Poincaré maps, to show that partially chaotic orbits are actually present in that model. They turn out to be double orbits joined by a bifurcation zone, which is the most likely source of their chaos, and they are encapsulated in regions of phase space bounded by regular orbits similar to each of the components of the double orbit.
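The Lyapunov exponents the abstract relies on are typically computed by evolving a tangent vector with the Jacobian and renormalizing. A minimal sketch for a discrete map (the tent map stands in as a toy example; this is the standard technique, not the authors' code for the cubic-force model):

```python
import numpy as np

def largest_lyapunov(step, jac, x0, n_steps):
    """Estimate the largest Lyapunov exponent of a map by evolving a
    tangent vector with the Jacobian and renormalizing every step."""
    x = np.array(x0, dtype=float)
    v = np.ones_like(x) / np.sqrt(x.size)   # initial tangent vector
    log_sum = 0.0
    for _ in range(n_steps):
        v = jac(x) @ v                      # evolve tangent vector
        x = step(x)                         # evolve the orbit
        growth = np.linalg.norm(v)
        log_sum += np.log(growth)
        v /= growth                         # renormalize
    return log_sum / n_steps

# Sanity check on the tent map, whose exponent is exactly log 2
# (the derivative has magnitude 2 everywhere).
tent = lambda x: np.where(x < 0.5, 2.0 * x, 2.0 - 2.0 * x)
tent_jac = lambda x: np.array([[2.0 if x[0] < 0.5 else -2.0]])
lyap = largest_lyapunov(tent, tent_jac, [0.2], n_steps=1000)
```

A positive exponent flags chaos along at least one direction; classifying orbits as fully or partially chaotic requires examining the full exponent spectrum, as in the paper.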
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem including an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
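The ℓ2-fidelity-plus-ℓ1-sparsity problem described above is the one that iterative shrinkage/thresholding solves. A minimal ISTA sketch on a toy compressed-sensing problem (hypothetical random data; the wavelet transform and the paper's edge-correlation prior are omitted):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by iterative
    shrinkage/thresholding with a fixed step 1/L."""
    step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy problem: recover a 2-sparse vector from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) / np.sqrt(30)
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -0.5]
x_hat = ista(A, A @ x_true, lam=0.01)
```

The threshold applied each iteration is `step * lam`; the paper's contribution is, among other things, estimating that threshold automatically from the noise level instead of fixing `lam` by hand.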
78 FR 33968 - Establishment of Class E Airspace; Boca Grande, FL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... publication the FAA found that the heliport coordinates were incorrectly listed as point in space coordinates; and point in space coordinates were inadvertently omitted. This action makes the correction. Except.... Controlled airspace within a 6-mile radius of the point in space coordinates of the heliport is necessary for...
78 FR 33966 - Establishment of Class E Airspace; Pine Island, FL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... publication the FAA found that the heliport coordinates were incorrectly listed as point in space coordinates; and point in space coordinates were inadvertently omitted. This action makes the correction. Except.... Controlled airspace within a 6-mile radius of the point in space coordinates of the heliport is necessary for...
78 FR 32553 - Establishment of Class E Airspace; Boothbay, ME
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
... the FAA found that the points of space coordinates were incorrect. This action makes the correction... Heliport. Controlled airspace within a 6-mile radius of the point in space coordinates of the heliport is... heliport and point in space are corrected and separately listed. The FAA has determined that this...
Dynamic analysis of suspension cable based on vector form intrinsic finite element method
NASA Astrophysics Data System (ADS)
Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun
2017-10-01
A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) method and the mechanical properties of suspension cables. First, the suspension cable is discretized into elements by space points, and the mass and external forces of the cable are lumped at these points. The structural form of the cable is described by the positions of the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the equations of motion of the space points are solved by the central difference method with a reasonable time-integration step. The tangential tension of the bearing rope in a test ropeway under moving concentrated loads is calculated and compared with experimental data. The results show that the computed tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering applications.
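The central-difference time stepping used in the final solution step can be sketched for a generic point mass (an illustrative integrator under assumed inputs, not the authors' VFIFE implementation):

```python
import numpy as np

def central_difference(mass, force, x0, v0, dt, n_steps):
    """Explicit central-difference integration of m * x'' = F(x, t):
    x_{n+1} = 2 x_n - x_{n-1} + dt^2 * F(x_n, t_n) / m."""
    # Second-order accurate start-up value for x_{-1}.
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * force(x0, 0.0) / mass
    x = x0.copy()
    traj = [x0.copy()]
    for n in range(n_steps):
        x_next = 2.0 * x - x_prev + dt**2 * force(x, n * dt) / mass
        x_prev, x = x, x_next
        traj.append(x.copy())
    return np.array(traj)

# Illustrative check: a unit mass on a unit spring (x'' = -x) released
# from x = 1 should return close to x = 1 after one period of ~2*pi.
traj = central_difference(mass=1.0, force=lambda x, t: -x,
                          x0=np.array([1.0]), v0=np.array([0.0]),
                          dt=0.01, n_steps=628)
```

The scheme is conditionally stable, which is why the abstract stresses choosing a "reasonable" time-integration step relative to the cable's highest natural frequency.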
Peculiar velocity effect on galaxy correlation functions in nonlinear clustering regime
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
1994-03-01
We studied the distortion of the apparent distribution of galaxies in redshift space contaminated by the peculiar velocity effect. Specifically, we obtained the expressions for N-point correlation functions in redshift space with a given functional form for the velocity distribution f(v) and evaluated two- and three-point correlation functions quantitatively. The effect of velocity correlations is also discussed. When the two-point correlation function in real space has a power-law form, ξ_r(r) ∝ r^(-γ), the redshift-space counterpart on small scales also has a power-law form but with an increased power-law index: ξ_s(s) ∝ s^(1-γ). When the three-point correlation function has the hierarchical form and the two-point correlation function has the power-law form in real space, the hierarchical form of the three-point correlation function is almost preserved in redshift space. The above analytic results are compared with a direct analysis based on N-body simulation data for cold dark matter models. Implications for the hierarchical clustering ansatz are discussed in detail.
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
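The core of the Tikhonov scheme, trading fidelity to the desired focus against the size of the drive vector (and hence array efficiency), can be sketched on a generic ill-conditioned least-squares problem (illustrative only; the acoustic propagation matrix and rotation constraints of the paper are not modeled):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 via the regularized
    normal equations (A^H A + lam^2 I) x = A^H b."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam**2 * np.eye(n),
                           A.conj().T @ b)

# Ill-conditioned toy system: two nearly collinear columns. Plain least
# squares is unstable here; a small lam selects the well-behaved,
# near-minimum-norm solution x ~ [1, 1].
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
b = np.array([2.0, 2.0])
x_reg = tikhonov_solve(A, b, lam=1e-3)
```

Increasing `lam` shrinks the solution norm (better array efficiency in the paper's terms) at the cost of a larger residual (poorer focusing), which is exactly the balance the regularization parameter controls.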
A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.
Hughes, Alec; Hynynen, Kullervo
2016-12-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
A Revision on Classical Solutions to the Cauchy Boltzmann Problem for Soft Potentials
NASA Astrophysics Data System (ADS)
Alonso, Ricardo J.; Gamba, Irene M.
2011-05-01
This short note complements the recent paper of the authors (Alonso, Gamba in J. Stat. Phys. 137(5-6):1147-1165, 2009). We revisit the results on propagation of regularity and stability using L^p estimates for the gain and loss collision operators, which had the exponent range misstated for the loss operator. We show here the correct range of exponents. We require a Lebesgue exponent α > 1 in the angular part of the collision kernel in order to obtain finiteness of some constants involved in the regularity and stability estimates. As a consequence, the L^p regularity associated to the Cauchy problem of the space-inhomogeneous Boltzmann equation holds for a finite range of p ≥ 1 which is explicitly determined.
Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping
2014-01-01
Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley's K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south-facing aspects than random or regular settlements. Regression showed that the influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to characteristics such as area, density and shape, should be taken into consideration in the future, and that land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be key to human settlement studies and planning. PMID:24619117
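Ripley's K function with Monte Carlo comparison is the core tool here. A minimal, edge-correction-free estimator (an illustrative sketch, not the study's GIS pipeline):

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive estimate of Ripley's K(r), without edge correction: the
    mean number of further points within distance r of a typical point,
    divided by the overall intensity."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = (d < r).sum() - n        # ordered pairs, self-pairs excluded
    intensity = n / area
    return pairs / (intensity * n)

# Under complete spatial randomness K(r) ~ pi * r^2; clustered patterns
# lie above that curve and regular (inhibited) patterns below it, which
# is how the regular-random-cluster classification is read off.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, size=(500, 2))
k_csr = ripley_k(pts, r=0.5, area=100.0)
```

A Monte Carlo envelope, as used in the study, repeats this on many simulated random patterns and checks whether the observed K(r) falls outside the simulated range.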
Quantum mechanics on Laakso spaces
NASA Astrophysics Data System (ADS)
Kauffman, Christopher J.; Kesler, Robert M.; Parshall, Amanda G.; Stamey, Evelyn A.; Steinhurst, Benjamin A.
2012-04-01
We first review the spectrum of the Laplacian operator on a general Laakso space before considering modified Hamiltonians for the infinite square well, parabola, and Coulomb potentials. Additionally, we compute the spectrum of the Laplacian and its multiplicities when certain regions of a Laakso space are compressed or stretched, and calculate the Casimir force experienced by two uncharged conducting plates by imposing physically relevant boundary conditions and then analytically regularizing the resulting zeta function. Lastly, we derive a general formula for the spectral zeta function and its derivative for Laakso spaces with strict self-similar structure before listing explicit spectral values for some special cases.
NASA Astrophysics Data System (ADS)
Hoeksema, J. T.; Baldner, C. S.; Bush, R. I.; Schou, J.; Scherrer, P. H.
2018-03-01
The Helioseismic and Magnetic Imager (HMI) instrument is a major component of NASA's Solar Dynamics Observatory (SDO) spacecraft. Since commencement of full regular science operations on 1 May 2010, HMI has operated with remarkable continuity, e.g. during the more than five years of the SDO prime mission that ended 30 September 2015, HMI collected 98.4% of all possible 45-second velocity maps; minimizing gaps in these full-disk Dopplergrams is crucial for helioseismology. HMI velocity, intensity, and magnetic-field measurements are used in numerous investigations, so understanding the quality of the data is important. This article describes the calibration measurements used to track the performance of the HMI instrument, and it details trends in important instrument parameters during the prime mission. Regular calibration sequences provide information used to improve and update the calibration of HMI data. The set-point temperature of the instrument front window and optical bench is adjusted regularly to maintain instrument focus, and changes in the temperature-control scheme have been made to improve stability in the observable quantities. The exposure time has been changed to compensate for a 20% decrease in instrument throughput. Measurements of the performance of the shutter and tuning mechanisms show that they are aging as expected and continue to perform according to specification. Parameters of the tunable optical-filter elements are regularly adjusted to account for drifts in the central wavelength. Frequent measurements of changing CCD-camera characteristics, such as gain and flat field, are used to calibrate the observations. Infrequent expected events such as eclipses, transits, and spacecraft off-points interrupt regular instrument operations and provide the opportunity to perform additional calibration. Onboard instrument anomalies are rare and seem to occur quite uniformly in time. The instrument continues to perform very well.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aimed at longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term, which requires that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term, which requires that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
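The penalized objective described in the abstract can be sketched as follows. The data layout (one feature matrix `Xs[t]` and score vector `ys[t]` per time-point, weights stored as columns of `W`) and the regularization weights `lam_*` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tgs_objective(W, Xs, ys, lam_group=0.1, lam_fused=0.1, lam_out=0.1):
    """Sketch of a temporally-constrained group sparse objective.

    W  : (d, T) weight matrix, one column per time-point
    Xs : list of T arrays (n, d), imaging features per time-point
    ys : list of T arrays (n,), clinical scores per time-point
    """
    T = W.shape[1]
    loss = sum(np.sum((Xs[t] @ W[:, t] - ys[t]) ** 2) for t in range(T))
    # group (L2,1) term: couples each feature's weights across time-points
    group = np.sum(np.linalg.norm(W, axis=1))
    # fused smoothness: successive weight vectors should be close
    fused = sum(np.sum((W[:, t + 1] - W[:, t]) ** 2) for t in range(T - 1))
    # output smoothness: successive models should predict similarly
    out = sum(np.sum((Xs[t] @ (W[:, t + 1] - W[:, t])) ** 2) for t in range(T - 1))
    return loss + lam_group * group + lam_fused * fused + lam_out * out
```

When the weight columns are identical across time-points, both smoothness penalties vanish, which is exactly the behavior the two temporal terms are designed to encourage.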
Fermion-number violation in regularizations that preserve fermion-number symmetry
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Shamir, Yigal
2003-01-01
There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.
Manifold regularized multitask learning for semi-supervised multilabel image classification.
Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J
2013-02-01
It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features; in that case, manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
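The manifold regularization term mentioned above is commonly implemented with a graph Laplacian. The following is a minimal sketch under common assumptions (a symmetrized k-NN graph with binary edge weights); it illustrates the regularizer itself, not the MRMTL algorithm.

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Unnormalized graph Laplacian L = D - W of a symmetrized k-NN graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize
    return np.diag(W.sum(1)) - W

def manifold_penalty(f, L):
    """f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2: small when the classifier
    output f varies smoothly along the data manifold."""
    return float(f @ L @ f)
```

The penalty is zero for a constant labeling and grows as the labeling oscillates between neighboring points, which is how it discourages non-smooth classifiers.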
NASA Astrophysics Data System (ADS)
Cheng, C. H. Arthur; Shkoller, Steve
2017-09-01
We provide a self-contained proof of the solvability and regularity of a Hodge-type elliptic system, wherein the divergence and curl of a vector field u are prescribed in an open, bounded, Sobolev-class domain Ω ⊆ R^n, and either the normal component u · N or the tangential components u × N of the vector field are prescribed on the boundary ∂Ω. For k > n/2, we prove that u is in the Sobolev space H^{k+1}(Ω) if Ω is an H^{k+1}-domain and the divergence, curl, and either the normal or tangential trace of u have sufficient regularity. The proof is based on a regularity theory for vector elliptic equations set on Sobolev-class domains with Sobolev-class coefficients, and with a rather general set of Dirichlet and Neumann boundary conditions. The resulting regularity theory for the vector field u is fundamental in the analysis of free-boundary and moving-interface problems in fluid dynamics.
2015-03-22
Caption: ISS043E044174 (03/22/2015) --- It's haircut time onboard the International Space Station as Expedition 43 Commander and NASA astronaut Terry Virts handles the scissors while ESA (European Space Agency) astronaut Samantha Cristoforetti holds the vacuum to immediately pull the fine hair strands into the safe container so they don't float away into the station. Hair trims are a regular occurrence during an astronaut's six-month tour.
Fixed Points of Contractive Mappings in b-Metric-Like Spaces
Hussain, Nawab; Roshan, Jamal Rezaei
2014-01-01
We discuss topological structure of b-metric-like spaces and demonstrate a fundamental lemma for the convergence of sequences. As an application we prove certain fixed point results in the setup of such spaces for different types of contractive mappings. Finally, some periodic point results in b-metric-like spaces are obtained. Two examples are presented in order to verify the effectiveness and applicability of our main results. PMID:25143980
Expandable pallet for space station interface attachments
NASA Technical Reports Server (NTRS)
Wesselski, Clarence J. (Inventor)
1988-01-01
Described is a foldable, expandable pallet for Space Station interface attachments with a basic square configuration. Each pallet consists of a series of struts joined together by node point fittings to make a rigid structure. The struts have hinge fittings which are spring loaded to permit collapse of the module for stowage, transport to a Space Station in the payload bay of the Space Shuttle, and deployment on orbit. Dimensions of the pallet are selected to provide convenient, closely spaced attachment points between the node points of the relatively widely spaced trusses of a Space Station platform. A pallet is attached to a strut at four points: one close-fitting hole, two oversize holes, and a slot to allow for thermal expansion/contraction and for manufacturing tolerances. Applications of the pallet include its use in rotary or angular joints; servicing of splints; with gridded plates; as instrument mounting bases; and as a roadbed for a Mobile Service Center (MSC).
A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation
NASA Astrophysics Data System (ADS)
Smith, David J.
2018-04-01
The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly use basic linear algebra operations, with a full implementation being provided on github. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
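For readers unfamiliar with the method, a minimal sketch of a regularized-stokeslet velocity evaluation follows. It uses the widely cited Cortez-type blob kernel; the specific regularization, force discretisation, and quadrature used in the paper may differ.

```python
import numpy as np

def reg_stokeslet_velocity(x, points, forces, eps=0.1, mu=1.0):
    """Velocity at x induced by regularized point forces (Cortez-type blob).

    Uses the common 3D kernel
      S_ij = delta_ij (r^2 + 2 eps^2) / (r^2 + eps^2)^{3/2}
           + r_i r_j / (r^2 + eps^2)^{3/2},
    so that u(x) = (1 / 8 pi mu) * sum_k S(x - x_k) f_k.
    Unlike the singular stokeslet, the velocity stays finite at x = x_k.
    """
    u = np.zeros(3)
    for x0, f in zip(points, forces):
        r = x - x0
        r2 = r @ r
        den = (r2 + eps ** 2) ** 1.5
        u += ((r2 + 2 * eps ** 2) * f + (r @ f) * r) / den
    return u / (8 * np.pi * mu)
```

In a nearest-neighbour discretisation, the traction at the quadrature points would be interpolated from a coarser set of force points before a sum of this form is assembled; the kernel evaluation itself is unchanged.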
Patterns of rare and abundant marine microbial eukaryotes.
Logares, Ramiro; Audic, Stéphane; Bass, David; Bittner, Lucie; Boutte, Christophe; Christen, Richard; Claverie, Jean-Michel; Decelle, Johan; Dolan, John R; Dunthorn, Micah; Edvardsen, Bente; Gobet, Angélique; Kooistra, Wiebe H C F; Mahé, Frédéric; Not, Fabrice; Ogata, Hiroyuki; Pawlowski, Jan; Pernice, Massimo C; Romac, Sarah; Shalchian-Tabrizi, Kamran; Simon, Nathalie; Stoeck, Thorsten; Santini, Sébastien; Siano, Raffaele; Wincker, Patrick; Zingone, Adriana; Richards, Thomas A; de Vargas, Colomban; Massana, Ramon
2014-04-14
Biological communities are normally composed of a few abundant and many rare species. This pattern is particularly prominent in microbial communities, in which most constituent taxa are usually extremely rare. Although abundant and rare subcommunities may present intrinsic characteristics that could be crucial for understanding community dynamics and ecosystem functioning, microbiologists normally do not differentiate between them. Here, we investigate abundant and rare subcommunities of marine microbial eukaryotes, a crucial group of organisms that remains among the least-explored biodiversity components of the biosphere. We surveyed surface waters of six separate coastal locations in Europe, independently considering the picoplankton, nanoplankton, and microplankton/mesoplankton organismal size fractions. Deep Illumina sequencing of the 18S rRNA indicated that the abundant regional community was mostly structured by organismal size fraction, whereas the rare regional community was mainly structured by geographic origin. However, some abundant and rare taxa presented similar biogeography, pointing to spatiotemporal structure in the rare microeukaryote biosphere. Abundant and rare subcommunities presented regular proportions across samples, indicating similar species-abundance distributions despite taxonomic compositional variation. Several taxa were abundant in one location and rare in other locations, suggesting large oscillations in abundance. The substantial amount of metabolically active lineages found in the rare biosphere suggests that this subcommunity constitutes a diversity reservoir that can respond rapidly to environmental change. We propose that marine planktonic microeukaryote assemblages incorporate dynamic and metabolically active abundant and rare subcommunities, with contrasting structuring patterns but fairly regular proportions, across space and time. Copyright © 2014 Elsevier Ltd. All rights reserved.
Point-to-point Commercial Space Transportation in the National Aviation System Final Report.
DOT National Transportation Integrated Search
2010-03-10
The advent of suborbital transport brings the promise of point-to-point (PTP) long-distance transportation as a revolutionary mode of air transportation. In 2008, the International Space University (ISU) of Strasbourg, France, published a report documen...
Latif, Abdul; Mongkolkeha, Chirasak; Sintunavarat, Wutiphol
2014-01-01
We extend the notion of generalized weakly contraction mappings due to Choudhury et al. (2011) to generalized α-β-weakly contraction mappings. We show with examples that our new class of mappings is a real generalization of several known classes of mappings. We also establish fixed point results for such mappings in metric spaces. Applying our new results, we obtain fixed point results on ordinary metric spaces, metric spaces endowed with an arbitrary binary relation, and metric spaces endowed with graph.
Quantum field theory in spaces with closed time-like curves
NASA Astrophysics Data System (ADS)
Boulware, D. G.
Gott spacetime has closed timelike curves, but no locally anomalous stress-energy. A complete orthonormal set of eigenfunctions of the wave operator is found in the special case of a spacetime in which the total deficit angle is 2π. A scalar quantum field theory is constructed using these eigenfunctions. The resultant interacting quantum field theory is not unitary because the field operators can create real, on-shell particles in the acausal region. These particles propagate for finite proper time, accumulating an arbitrary phase, before being annihilated at the same spacetime point as that at which they were created. As a result, the effective potential within the acausal region is complex, and probability is not conserved. The stress tensor of the scalar field is evaluated in the neighborhood of the Cauchy horizon; in the case of a sufficiently small Compton wavelength of the field, the stress tensor is regular and cannot prevent the formation of the Cauchy horizon.
Phase Contrast Wavefront Sensing for Adaptive Optics
NASA Technical Reports Server (NTRS)
Bloemhof, E. E.; Wallace, J. K.
2004-01-01
Most ground-based adaptive optics systems use one of a small number of wavefront sensor technologies, notably (for relatively high-order systems) the Shack-Hartmann sensor, which provides local measurements of the phase slope (first derivative) at a number of regularly spaced points across the telescope pupil. The curvature sensor, with response proportional to the second derivative of the phase, is also sometimes used, but has undesirable noise propagation properties during wavefront reconstruction as the number of actuators becomes large. It is interesting to consider using for astronomical adaptive optics the "phase contrast" technique, originally developed for microscopy by Zernike to allow convenient viewing of phase objects. In this technique, the wavefront sensor provides a direct measurement of the local value of the phase in each sub-aperture of the pupil. This approach has some obvious disadvantages compared to Shack-Hartmann wavefront sensing, but some less obvious and substantial advantages as well. Here we evaluate the relative merits in a practical ground-based adaptive optics system.
A triaxial supramolecular weave
NASA Astrophysics Data System (ADS)
Lewandowska, Urszula; Zajaczkowski, Wojciech; Corra, Stefano; Tanabe, Junki; Borrmann, Ruediger; Benetti, Edmondo M.; Stappert, Sebastian; Watanabe, Kohei; Ochs, Nellie A. K.; Schaeublin, Robin; Li, Chen; Yashima, Eiji; Pisula, Wojciech; Müllen, Klaus; Wennemers, Helma
2017-11-01
Despite recent advances in the synthesis of increasingly complex topologies at the molecular level, nano- and microscopic weaves have remained difficult to achieve. Only a few diaxial molecular weaves exist—these were achieved by templation with metals. Here, we present an extended triaxial supramolecular weave that consists of self-assembled organic threads. Each thread is formed by the self-assembly of a building block comprising a rigid oligoproline segment with two perylene-monoimide chromophores spaced at 18 Å. Upon π stacking of the chromophores, threads form that feature alternating up- and down-facing voids at regular distances. These voids accommodate incoming building blocks and establish crossing points through CH-π interactions on further assembly of the threads into a triaxial woven superstructure. The resulting micrometre-scale supramolecular weave proved to be more robust than non-woven self-assemblies of the same building block. The uniform hexagonal pores of the interwoven network were able to host iridium nanoparticles, which may be of interest for practical applications.
Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation
NASA Technical Reports Server (NTRS)
Liandrat, J.; Tchamitchian, PH.
1990-01-01
The Burgers equation with a small viscosity term and initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described, these notions are first recalled. The method makes extensive use of the localization properties of the wavelets in physical and Fourier space. Moreover, the authors take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered a time-marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to significantly reduce the number of degrees of freedom required to compute the solution accurately. Numerical results and a description of the different elements of the algorithm are provided, together with mathematical comments on the method and some comparisons with more classical numerical algorithms.
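The underlying PDE (viscous Burgers with periodic boundary conditions) can be set up compactly. The sketch below uses a plain Fourier pseudo-spectral discretization as a stand-in for the paper's adaptive wavelet scheme, which is substantially more involved; the grid size, viscosity, and time step are illustrative.

```python
import numpy as np

def burgers_spectral(u0, nu=0.01, T=1.0, dt=1e-3):
    """Integrate u_t + u u_x = nu u_xx on [0, 1) with periodic BCs using a
    Fourier pseudo-spectral method: explicit Euler on the (conservative)
    nonlinear term and exact integration of the diffusion term."""
    n = len(u0)
    k = np.fft.fftfreq(n, d=1.0 / n) * 2 * np.pi   # angular wavenumbers
    uh = np.fft.fft(u0)
    decay = np.exp(-nu * k ** 2 * dt)              # exact viscous factor
    for _ in range(int(round(T / dt))):
        u = np.real(np.fft.ifft(uh))
        nonlin = -0.5j * k * np.fft.fft(u * u)     # -(u^2/2)_x in Fourier
        uh = (uh + dt * nonlin) * decay
    return np.real(np.fft.ifft(uh))
```

Viscosity dissipates energy while the conservative form of the nonlinear term preserves the spatial mean, two sanity checks that any discretization of this problem (wavelet-based or spectral) should pass.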
Synchronizing noisy nonidentical oscillators by transient uncoupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandon, Aditya, E-mail: adityat@iitk.ac.in; Mannattil, Manu, E-mail: mmanu@iitk.ac.in; Schröder, Malte, E-mail: malte@nld.ds.mpg.de
2016-09-15
Synchronization is the process of achieving identical dynamics among coupled identical units. If the units are different from each other, their dynamics cannot become identical; yet, after transients, there may emerge a functional relationship between them—a phenomenon termed “generalized synchronization.” Here, we show that the concept of transient uncoupling, recently introduced for synchronizing identical units, also supports generalized synchronization among nonidentical chaotic units. Generalized synchronization can be achieved by transient uncoupling even when it is impossible by regular coupling. We furthermore demonstrate that transient uncoupling stabilizes synchronization in the presence of common noise. Transient uncoupling works best if the units stay uncoupled whenever the driven orbit visits regions that are locally diverging in its phase space. Thus, to select a favorable uncoupling region, we propose an intuitive method that measures the local divergence at the phase points of the driven unit's trajectory by linearizing the flow and subsequently suppresses the divergence by uncoupling.
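The idea of transient uncoupling — switching the coupling off whenever the driven orbit leaves a chosen region of its phase space — can be sketched for a pair of identical Lorenz systems. The coupling strength, the z-interval used as the coupling region, and the explicit Euler integration are illustrative choices, not the authors' exact setup.

```python
import numpy as np

def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def simulate(k=8.0, z_lo=20.0, z_hi=35.0, dt=0.002, n=20000, seed=0):
    """Drive-response Lorenz pair; the response is diffusively coupled to
    the drive only while its z-coordinate lies in [z_lo, z_hi] (the
    coupling region of transient uncoupling). Returns the sync error."""
    rng = np.random.default_rng(seed)
    drive = rng.standard_normal(3)
    resp = rng.standard_normal(3)
    err = np.empty(n)
    for i in range(n):
        on = z_lo <= resp[2] <= z_hi          # transient uncoupling switch
        c = k * (drive - resp) if on else np.zeros(3)
        drive = drive + dt * lorenz(drive)
        resp = resp + dt * (lorenz(resp) + c)
        err[i] = np.linalg.norm(drive - resp)
    return err
```

Passing a region that covers the whole phase space (`z_lo` very small, `z_hi` very large) recovers regular, always-on coupling, which makes for a convenient baseline comparison.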
M Dwarf Flares: Exoplanet Detection Implications
NASA Astrophysics Data System (ADS)
Tofflemire, B. M.; Wisniewski, J. P.; Hilton, E. J.; Kowalski, A. F.; Kundurthy, P.; Schmidt, S. J.; Hawley, S. L.; Holtzman, J. A.
2011-12-01
Low mass stars such as M dwarfs have become prime targets for exoplanet transit searches as their low luminosities and small stellar radii could enable the detection of super-Earths residing in their habitable zones. While promising transit targets, M dwarfs are also inherently variable and can exhibit up to ˜6 magnitude flux enhancements in the optical U-band. This is significantly higher than the predicted transit depths of habitable zone super-Earths (0.005 magnitude flux decrease). The behavior of flares at infrared (IR) wavelengths, particularly those likely to be used to study and characterize M dwarf exoplanets using facilities such as the James Webb Space Telescope (JWST), remains largely unknown. To address these uncertainties, we are executing a coordinated, contemporaneous monitoring program of the optical and IR flux of M dwarfs known to regularly flare. A suite of telescopes located at the Kitt Peak National Observatory and the Apache Point Observatory are used for the observations. We present the initial results of this program.
Dish layouts analysis method for concentrative solar power plant.
Xu, Jinshan; Gan, Shaocong; Li, Song; Ruan, Zhongyuan; Chen, Shengyong; Wang, Yong; Gui, Changgui; Wan, Bin
2016-01-01
Designs that maximize the use of solar radiation for a given reflective area, without increasing the investment expense, are important to solar power plant construction. We here provide a method that allows one to compute the shaded area at any given time as well as the total shading effect over a day. By establishing a local coordinate system with the origin at the apex of a parabolic dish and the z-axis pointing to the sun, only neighboring dishes with [Formula: see text] would shade onto the dish when in tracking mode. This procedure reduces the required computational resources, simplifies the calculation and allows a quick search for the optimum layout by considering all aspects leading to an optimized arrangement: aspect ratio, shifting and rotation. Computer simulations, done with information on a dish Stirling system as well as DNI data released by NREL, show that regular spacing is not an optimal layout; shifting and rotating columns by a certain amount can bring more benefits.
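The geometric core of the shading computation — projecting a neighboring dish onto the plane perpendicular to the sun direction and intersecting the two circular apertures — can be sketched as follows. The circular-aperture assumption and the neglect of which dish is nearer the sun are simplifications of the paper's procedure.

```python
import numpy as np

def disk_overlap(d, r):
    """Overlap area of two equal disks of radius r with centre distance d."""
    if d >= 2 * r:
        return 0.0
    if d <= 0:
        return np.pi * r * r
    return 2 * r * r * np.arccos(d / (2 * r)) - 0.5 * d * np.sqrt(4 * r * r - d * d)

def shaded_area(c1, c2, sun_dir, r):
    """Shading between two sun-tracking dishes of aperture radius r: take
    the centre offset, keep only its component perpendicular to the sun
    direction (the z-axis of the paper's local frame), and intersect the
    two projected disks."""
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    dvec = np.asarray(c2, float) - np.asarray(c1, float)
    d_perp = dvec - (dvec @ s) * s          # offset perpendicular to the sun
    return disk_overlap(np.linalg.norm(d_perp), r)
```

Summing this quantity over the neighbors that lie sunward of a dish, at each time step of a day, gives the daily shading effect that the layout search tries to minimize.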
"Observation Obscurer" - Time Series Viewer, Editor and Processor
NASA Astrophysics Data System (ADS)
Andronov, I. L.
The program is described, which contains a set of subroutines suitable for fast viewing and interactive filtering and processing of regularly and irregularly spaced time series. Being a 32-bit DOS application, it may be used as a default fast viewer/editor of time series in any computer shell ("commander") or in Windows. It allows one to view the data in the "time" or "phase" mode, to remove ("obscure") or filter outstanding bad points, to make scale transformations and smoothing using a few methods (e.g. mean with phase binning, determination of the statistically optimal number of phase bins, and the "running parabola" fit (Andronov, 1997, As. Ap. Suppl., 125, 207)), and to make time series analysis using some methods, e.g. correlation, autocorrelation and histogram analysis, and determination of extrema. Some features have been developed specially for variable star observers, e.g. the barycentric correction and the creation and fast analysis of "O-C" diagrams. The manual for "hot keys" is presented. The computer code was compiled with 32-bit Free Pascal (www.freepascal.org).
Compressed digital holography: from micro towards macro
NASA Astrophysics Data System (ADS)
Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter
2016-09-01
signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outreach on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale where much higher resolution holograms must be acquired and processed on the computer.
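A minimal example of the sparse-prior reconstruction central to compressed sensing is iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem. This is a generic compressed-sensing solver, not the specific holographic reconstruction pipeline discussed in the paper; the measurement matrix here is a stand-in for the free-space propagation operator.

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A gradient step on the data term is followed by the soft-thresholding
    proximal map of the l1 sparsity prior."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                     # gradient of the data term
        x = x - step * g
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # prox
    return x
```

With far fewer measurements than unknowns, the sparse prior is what makes the scene recoverable from the compressed hologram samples, mirroring the situation described above.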
Elasticity and Stability of Clathrate Hydrate: Role of Guest Molecule Motions.
Jia, Jihui; Liang, Yunfeng; Tsuji, Takeshi; Murata, Sumihiko; Matsuoka, Toshifumi
2017-05-02
Molecular dynamics simulations were performed to determine the elastic constants of carbon dioxide (CO2) and methane (CH4) hydrates at one hundred pressure-temperature data points each. The conditions represent marine sediments and permafrost zones where gas hydrates occur. The shear modulus and Young's modulus of the CO2 hydrate increase anomalously with increasing temperature, whereas those of the CH4 hydrate decrease regularly with increasing temperature. We ascribe this anomaly to the kinetic behavior of the linear CO2 molecule, especially of those in the small cages. The cavity space of the cage limits free rotational motion of the CO2 molecule at low temperature. With increasing temperature, the CO2 molecule can rotate easily, enhancing the stability and rigidity of the CO2 hydrate. Our work provides a key database for the elastic properties of gas hydrates, and molecular insights into the stability changes of CO2 hydrate from a high temperature of ~5 °C down to a low decomposition temperature of ~-150 °C.
Crystal Structure Variations of Sn Nanoparticles upon Heating
NASA Astrophysics Data System (ADS)
Mittal, Jagjiwan; Lin, Kwang-Lung
2018-04-01
Structural changes in Sn nanoparticles during heating below the melting point have been investigated using differential scanning calorimetry (DSC), x-ray diffraction (XRD) analysis, electron diffraction (ED), and high-resolution transmission electron microscopy (HRTEM). DSC revealed that the heat required to melt the nanoparticles (28.43 J/g) was about half that of bulk Sn metal (52.80 J/g), which was attributed to the large surface energy contribution for the nanoparticles. ED and XRD analyses of the Sn nanoparticles revealed increased intensity, with increasing heat treatment temperature (HTT), for crystal planes having large interplanar distances compared with regular crystal planes. HRTEM revealed an increase in interlayer spacing at the surface and near joints between nanoparticles with the HTT, leading to an amorphous structure at the nanoparticle surface at 220°C. These results highlight the changes that occur in the morphology and crystal structure of Sn nanoparticles at the surface and in the interior with increasing heat treatment temperature.
Anti-pointing is mediated by a perceptual bias of target location in left and right visual space.
Heath, Matthew; Maraj, Anika; Gradkowski, Ashlee; Binsted, Gordon
2009-01-01
We sought to determine whether mirror-symmetrical limb movements (so-called anti-pointing) elicit a pattern of endpoint bias commensurate with perceptual judgments. In particular, we examined whether asymmetries related to the perceptual over- and under-estimation of target extent in respective left and right visual space impacts the trajectories of anti-pointing. In Experiment 1, participants completed direct (i.e. pro-pointing) and mirror-symmetrical (i.e. anti-pointing) responses to targets in left and right visual space with their right hand. In line with the anti-saccade literature, anti-pointing yielded longer reaction times than pro-pointing: a result suggesting increased top-down processing for the sensorimotor transformations underlying a mirror-symmetrical response. Most interestingly, pro-pointing yielded comparable endpoint accuracy in left and right visual space; however, anti-pointing produced an under- and overshooting bias in respective left and right visual space. In Experiment 2, we replicated the findings from Experiment 1 and further demonstrate that the endpoint bias of anti-pointing is independent of the reaching limb (i.e. left vs. right hand) and between-task differences in saccadic drive. We thus propose that the visual field-specific endpoint bias observed here is related to the cognitive (i.e. top-down) nature of anti-pointing and the corollary use of visuo-perceptual networks to support the sensorimotor transformations underlying such actions.
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision and hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to actually speed things up. In the case of a convex polygon in E2, a simple Point-in-Polygon test is of O(N) complexity and the optimal algorithm is of O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
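The O(log N) convex-polygon test referred to above (the classical baseline, before any preprocessing-based O(1) scheme) can be sketched by binary-searching the triangle fan around one vertex. Counter-clockwise vertex order is assumed, and boundary points count as inside.

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o): > 0 if b is left of ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(pt, poly):
    """O(log N) point-in-convex-polygon test for CCW-ordered vertices:
    binary-search the wedge of the triangle fan around poly[0] that
    contains pt, then test the single remaining edge."""
    n = len(poly)
    # reject points outside the angular range spanned at poly[0]
    if cross(poly[0], poly[1], pt) < 0 or cross(poly[0], poly[n - 1], pt) > 0:
        return False
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], pt) >= 0:
            lo = mid
        else:
            hi = mid
    # pt lies in wedge (poly[0], poly[lo], poly[lo+1]); test the far edge
    return cross(poly[lo], poly[lo + 1], pt) >= 0
```

Only two orientation tests plus a binary search are needed per query, which is the ordering structure that, as the abstract notes, has no direct analogue for convex polyhedra in E3.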
30 CFR 75.1433 - Examinations.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., and at change-of-layer regions. When any visible condition that results in a reduction of rope... regular stopping points; and (4) At drum crossover and change-of-layer regions. (d) At the completion of...
Sombrero Galaxy Not So Flat After All
2012-04-24
New observations from NASA's Spitzer Space Telescope reveal that the Sombrero galaxy is not simply a regular flat disk galaxy of stars as previously believed, but a rounder elliptical galaxy with a flat disk tucked inside.
NASA Astrophysics Data System (ADS)
Chang, Der-Chen; Markina, Irina; Wang, Wei
2016-09-01
The k-Cauchy-Fueter operator D_0^{(k)} on the one-dimensional quaternionic space H is the Euclidean version of the spin k/2 massless field operator on Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then, using the theory of regular boundary value problems, we show a Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D_0^{(k)} u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ^1_{(k)}(Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite dimensional, while the second cohomology group is always trivial.
Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2007-06-01
In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ: the first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
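The continuation-with-backtracking strategy described above can be sketched on a model problem. The 1D BVP ɛu'' = u + u³ with unit boundary values, the dense Newton solve, and the step-halving rule are illustrative stand-ins for the paper's 3D problems, Href, and NITSOL.

```python
import numpy as np

def solve_rd(eps, u0, tol=1e-9, max_newton=25):
    """Newton solve of the model BVP eps*u'' = u + u**3, u(0) = u(1) = 1,
    discretized by central differences on a uniform interior grid."""
    n = len(u0)
    h = 1.0 / (n + 1)
    u = u0.copy()
    for _ in range(max_newton):
        up = np.concatenate(([1.0], u, [1.0]))          # boundary values
        lap = (up[:-2] - 2.0 * u + up[2:]) / h ** 2
        F = eps * lap - u - u ** 3
        if np.linalg.norm(F, np.inf) < tol:
            return u, True
        diag = -2.0 * eps / h ** 2 - 1.0 - 3.0 * u ** 2
        J = (np.diag(diag)
             + np.diag(np.full(n - 1, eps / h ** 2), 1)
             + np.diag(np.full(n - 1, eps / h ** 2), -1))
        u = u - np.linalg.solve(J, F)
    return u, False

def continuation(eps_target=1e-3, n=200):
    """Natural continuation in eps: shrink eps from 1 toward eps_target,
    reusing the previous solution as the Newton initial guess and
    backtracking (a smaller eps step) whenever Newton fails."""
    u = np.ones(n)
    eps, factor = 1.0, 0.1
    for _ in range(100):                                # safety cap
        eps_try = max(eps * factor, eps_target)
        u_new, ok = solve_rd(eps_try, u)
        if ok:
            eps, u = eps_try, u_new
            if eps <= eps_target:
                break
        else:
            factor = factor ** 0.5                      # backtrack in eps
    return eps, u
```

As ɛ shrinks, boundary layers of width O(√ɛ) form at both ends while the interior solution flattens toward zero; each converged solution keeps the next Newton solve inside its basin of attraction, which is the whole point of the continuation.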
Raspanti, M; Congiu, T; Alessandrini, A; Gobbi, P; Ruggeri, A
2000-01-01
The extracellular matrix of unfixed, unstained rat corneal stroma, visualized with high-resolution scanning electron microscopy and atomic force microscopy after minimal preliminary treatment, appears composed of straight, parallel, uniform collagen fibrils regularly spaced by a three-dimensional, irregular network of thin, delicate proteoglycan filaments. Rat tail tendon, observed under identical conditions, appears instead made of heterogeneous, closely packed fibrils interwoven with orthogonal proteoglycan filaments. Pre-treatment with cupromeronic blue just thickens the filaments without affecting their spatial layout. Digestion with chondroitinase ABC rids the tendon matrix of all its interconnecting filaments while the corneal stroma architecture remains virtually unaffected, its fibrils always being separated by an evident interfibrillar spacing which is never observed in tendon. Our observations indicate that matrix proteoglycans are responsible for both the highly regular interfibrillar spacing which is distinctive of corneal stroma, and the strong interfibrillar binding observed in tendon. These opposite interaction patterns appear to be distinctive of different proteoglycan species. The molecular details of proteoglycan interactions are still incompletely understood and are the subject of ongoing research.
Apparently noninvariant terms of nonlinear sigma models in lattice perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harada, Koji; Hattori, Nozomu; Kubo, Hirofumi
2009-03-15
Apparently noninvariant terms (ANTs) that appear in loop diagrams for nonlinear sigma models are revisited in lattice perturbation theory. The calculations have been done mostly with dimensional regularization so far. In order to establish that the existence of ANTs is independent of the regularization scheme, and of the potential ambiguities in the definition of the Jacobian of the change of integration variables from group elements to 'pion' fields, we employ lattice regularization, in which everything (including the Jacobian) is well defined. We show explicitly that lattice perturbation theory produces ANTs in the four-point functions of the pion fields at one loop, and that the Jacobian does not play an important role in generating ANTs.
Toroidally symmetric plasma vortex at tokamak divertor null point
Umansky, M. V.; Ryutov, D. D.
2016-03-09
Reduced MHD equations are used for studying toroidally symmetric plasma dynamics near the divertor null point. Numerical solution of these equations exhibits a plasma vortex localized at the null point with the time-evolution defined by interplay of the curvature drive, magnetic restoring force, and dissipation. Convective motion is easier to achieve for a second-order null (snowflake) divertor than for a regular x-point configuration, and the size of the convection zone in a snowflake configuration grows with plasma pressure at the null point. In conclusion, the trends in simulations are consistent with tokamak experiments which indicate the presence of enhanced transport at the null point.
Fine pointing control for free-space optical communication
NASA Technical Reports Server (NTRS)
Portillo, A. A.; Ortiz, G. G.; Racho, C.
2000-01-01
Free-space optical communication requires precise, stable laser pointing to maintain operating conditions. This paper describes the software and hardware implementation of fine pointing control based on the Optical Communications Demonstrator architecture.
NASA Astrophysics Data System (ADS)
Scala, Antonio; Festa, Gaetano; Vilotte, Jean-Pierre
2015-04-01
Faults are often interfaces between materials with different elastic properties. This is generally the case for plate boundaries in subduction zones, where ruptures extend for many kilometers, crossing materials with strong impedance contrasts (oceanic crust, continental crust, mantle wedge, accretionary prism). From a physical point of view, several peculiar features of a rupture propagating along a bimaterial interface have emerged from both analog experiments and numerical simulations. The elastodynamic flux at the rupture tip breaks its symmetry, inducing normal stress changes and an asymmetric propagation. The latter has been widely demonstrated for rupture velocity and slip rate (e.g. Xia et al., 2005) and has been proposed to generate an asymmetric distribution of aftershocks (Rubin and Ampuero, 2007). The bimaterial problem coupled with a Coulomb friction law is ill-posed for a wide range of impedance contrasts, owing to a missing length scale in the instantaneous response to normal traction changes. The ill-posedness also results in simulations that are no longer independent of the grid size. A regularization can be introduced by letting the tangential traction lag the normal traction, as suggested by Cochard and Rice (2000) and Ranjith and Rice (2000): $$\frac{\partial \sigma_{\mathrm{eff}}}{\partial t} = \frac{|v| + v^*}{\delta}\,\left(\sigma_n - \sigma_{\mathrm{eff}}\right),$$ where $\sigma_{\mathrm{eff}}$ is the effective normal stress to be used in the Coulomb friction law. This regularization introduces two delays, depending on the slip rate and on a fixed time scale. In this study we performed a large number of dynamic 2D numerical simulations of in-plane rupture with the spectral element method, and we systematically investigated the effect of parameter selection on rupture propagation, dissipation and radiation, also performing direct comparisons with solutions provided by numerical and experimental results.
We found that a purely time-dependent regularization requires fine tuning, rapidly jumping from a too-fast, ineffective delay to a slow, invasive regularization as a function of the actual slip rate. Conversely, the choice of a fixed relaxation length, smaller than the critical slip-weakening distance, provides a reliable class of solutions for a wide range of elastic and frictional parameters. Nevertheless, critical rupture stages, such as nucleation or very fast steady-state propagation, may show resolution problems and may benefit from adaptive schemes with a space/time variation of the parameters. We used these recipes for bimaterial regularization to perform along-dip dynamic simulations of the Tohoku earthquake in the framework of a slip-weakening model, with a realistic description of the geometry of the interface and of the geological structure. We investigated in detail the role of the impedance contrasts in the evolution of the rupture and in short-wavelength radiation. We also show that pathological effects may arise from a poor selection of regularization parameters.
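The relaxation law underlying this kind of regularization can be illustrated with a minimal time integrator: the effective normal stress chases the true normal traction with a rate set by the slip rate, a reference velocity, and a relaxation length. The Python sketch below is an assumption-laden toy (forward Euler, prescribed $\sigma_n$ and $v$ histories, and illustrative parameter values), not the spectral-element implementation used in the study.

```python
import numpy as np

def regularized_normal_stress(t, sigma_n, v, v_star, delta, sigma0):
    """Forward-Euler integration of the regularization ODE
        d(sigma_eff)/dt = (|v| + v*) / delta * (sigma_n - sigma_eff),
    so sigma_eff relaxes toward sigma_n over a time scale delta / (|v| + v*)."""
    sigma = np.empty_like(t)
    sigma[0] = sigma0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        rate = (abs(v[i - 1]) + v_star) / delta
        sigma[i] = sigma[i - 1] + dt * rate * (sigma_n[i - 1] - sigma[i - 1])
    return sigma

# Response to a step in normal traction at constant slip rate (toy values):
t = np.linspace(0.0, 1.0, 1001)
sigma_n = 2.0 * np.ones_like(t)   # stepped true normal traction
v = 0.01 * np.ones_like(t)        # constant slip rate
sigma = regularized_normal_stress(t, sigma_n, v, v_star=0.1, delta=0.01, sigma0=1.0)
```

Because the relaxation rate scales with $|v| + v^*$, the delay is short where the fault slips fast and bounded by $\delta / v^*$ where it is locked, which is exactly the two-delay behavior described above.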
A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications
NASA Astrophysics Data System (ADS)
Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.
2012-08-01
The implementation of control loops for space applications is an area with great potential. However, the characteristics of these systems, such as their wide dynamic range of numeric values, make fixed-point algorithms inadequate. At the same time, the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and using an IP module in a space-qualified FPGA/ASIC is not viable due to the limited number of logic cells available in these types of devices, so a viable alternative must be found. For these reasons, a VHDL floating-point module is presented in this paper. This proposal allows floating-point algorithms to be designed and executed with an occupancy low enough to be implemented in FPGAs/ASICs qualified for space environments.
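The dynamic-range argument against fixed-point arithmetic can be made concrete. The Python sketch below (the Q15.16 format, the sample values, and the omission of range clamping are illustrative assumptions, not the paper's design) compares the relative quantization error of a 32-bit fixed-point format against float32 across several decades of magnitude:

```python
import numpy as np

def to_fixed(x, frac_bits=16):
    """Round to a Q15.16-style fixed-point grid (16 fractional bits).
    Range clamping/overflow handling is omitted for brevity."""
    scale = 1 << frac_bits
    return np.round(np.asarray(x, dtype=np.float64) * scale) / scale

values = np.array([1e-6, 1e-3, 1.0, 1e3])          # wide dynamic range
fx = to_fixed(values)
rel_err_fixed = np.abs(fx - values) / values
rel_err_float32 = np.abs(values.astype(np.float32).astype(np.float64)
                         - values) / values
```

The smallest value rounds to zero in Q15.16 (100% relative error), while float32 keeps the relative error near machine epsilon at every magnitude, which is why a wide dynamic range pushes the design toward floating point.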
ARTEMIS: The First Mission to the Lunar Libration Orbits
NASA Technical Reports Server (NTRS)
Woodward, Mark; Folta, David; Woodfork, Dennis
2009-01-01
The ARTEMIS mission will be the first to navigate to and perform stationkeeping operations around the Earth-Moon L1 and L2 Lagrangian points. The NASA Goddard Space Flight Center (GSFC) has previous mission experience flying in the Sun-Earth L1 (SOHO, ACE, WIND, ISEE-3) and L2 (WMAP) regimes and has maintained these spacecraft in libration point orbits by performing regular orbit stationkeeping maneuvers. The ARTEMIS mission will build on this experience, but stationkeeping in Earth-Moon libration orbits presents new challenges since the libration point orbit period is on the order of two weeks rather than six months. As a result, stationkeeping maneuvers to maintain the Lissajous orbit will need to be performed frequently, and the orbit determination solutions between maneuvers will need to be quite accurate. The ARTEMIS mission is a collaborative effort between NASA GSFC, the University of California at Berkeley (UCB), and the Jet Propulsion Laboratory (JPL); it is part of the THEMIS extended mission. ARTEMIS comprises two of the five THEMIS spacecraft that will be maneuvered from near-Earth orbits into lunar libration orbits using a sequence of designed orbital maneuvers and Moon and Earth gravity assists. In July 2009, a series of orbit-raising maneuvers began the proper orbit phasing of the two spacecraft for the first lunar flybys. Over subsequent months, additional propulsive maneuvers and gravity assists will be performed to move each spacecraft through the Sun-Earth weak stability regions and eventually into Earth-Moon libration point orbits. We will present the overall orbit designs for the two ARTEMIS spacecraft, provide analysis results of the 3/4-body dynamics, and discuss the sensitivities of the trajectory design to both maneuver errors and orbit determination errors. We will also present results from the initial orbit-raising maneuvers.
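The collinear libration points discussed above are roots of a one-dimensional equilibrium equation in the circular restricted three-body problem. The Python sketch below locates the Earth-Moon L1 and L2 points in nondimensional rotating-frame units; the bisection brackets and the rounded mass ratio are assumptions for illustration, not mission values.

```python
def collinear_point(mu, lo, hi, tol=1e-12):
    """Bisection for a collinear CR3BP equilibrium x-coordinate.
    Rotating, nondimensional frame: primaries at x = -mu and x = 1 - mu;
    equilibrium satisfies the x-axis force balance f(x) = 0."""
    def f(x):
        return (x - (1.0 - mu) * (x + mu) / abs(x + mu)**3
                  - mu * (x - 1.0 + mu) / abs(x - 1.0 + mu)**3)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

MU_EM = 0.012150585   # Earth-Moon mass ratio (rounded)
L1 = collinear_point(MU_EM, 0.5, 1.0 - MU_EM - 1e-6)   # between Earth and Moon
L2 = collinear_point(MU_EM, 1.0 - MU_EM + 1e-6, 2.0)   # beyond the Moon
```

In these units (Earth-Moon distance = 1) the roots land near 0.837 and 1.156, i.e. both points sit roughly 60,000 km from the Moon, which is why Earth-Moon libration orbits have the short, two-week periods noted in the abstract.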
Prevalence and Correlates of Having a Regular Physician among Women Presenting for Induced Abortion.
Chor, Julie; Hebert, Luciana E; Hasselbacher, Lee A; Whitaker, Amy K
2016-01-01
To determine the prevalence and correlates of having a regular physician among women presenting for induced abortion. We conducted a retrospective review of women presenting to an urban, university-based family planning clinic for abortion between January 2008 and September 2011. We conducted bivariate analyses, comparing women with and without a regular physician, and multivariable regression modeling, to identify factors associated with not having a regular physician. Of 834 women, 521 (62.5%) had a regular physician and 313 (37.5%) did not. Women with a prior pregnancy, live birth, or spontaneous abortion were more likely than women without these experiences to have a regular physician. Women with a prior induced abortion were not more likely than women who had never had a prior induced abortion to have a regular physician. Compared with women younger than 18 years, women aged 18 to 26 years were less likely to have a physician (adjusted odds ratio [aOR], 0.25; 95% confidence interval [CI], 0.10-0.62). Women with a prior live birth had increased odds of having a regular physician compared with women without a prior pregnancy (aOR, 1.89; 95% CI, 1.13-3.16). Women without medical/fetal indications and who had not been victims of sexual assault (self-indicated) were less likely to report having a regular physician compared with women with medical/fetal indications (aOR, 0.55; 95% CI, 0.17-0.82). The abortion visit is a point of contact with a large number of women without a regular physician and therefore provides an opportunity to integrate women into health care. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.
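Adjusted odds ratios like those reported above come from multivariable logistic regression; the unadjusted version can be computed directly from a 2x2 table. The Python sketch below uses hypothetical counts (not the study's data) and the standard Woolf logit confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (logit) confidence interval from a 2x2 table:
       exposed:   a with outcome, b without
       unexposed: c with outcome, d without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, purely illustrative:
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

An interval that excludes 1 (as for the aORs quoted in the abstract) indicates an association unlikely to be due to chance at the 95% level.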
A Consideration of HALO Type Orbit Designation and Maintaining for KUAFU-A and WSO/UV Missions
NASA Astrophysics Data System (ADS)
Nianchuan, J.; Xian, S.; Jianguo, Y.; Guangli, W.; Jingsong, P.
In the new era of deep space exploration more and more explorations at special places or points in solar system are carried out and planned There are five equilibrium points in the Sun-Earth system and the orbits around these points have good dynamic attribute Due to this reason The areas vicinity equilibrium points have many advantages for space exploration In recent 20 years the NASA and ESA have successfully launched several spacecrafts orbiting the Sun-Earth collinear equilibrium points Following the developing steps of space and deep space exploration in China Chinese scientists and engineers are considering and suggesting two equilibrium points explorations One is named KUAFU-A mission whose craft will orbit L1 point and the scientific target is studying the evolution of space weather of solar-terrestrial area The other is WSO UV mission whose craft will orbit L2 point and the scientific target is studying the structure and evolution of galaxies This report is mainly about HALO type orbit designation and maintaining for these two missions Following points are included 1 Briefly reviewing the explorations at the equilibrium points launched by NASA and ESA 2 Simply introducing the exploration KUAFU-A and WSO UV 3 Discussing the designation and maintaining of HALO type orbits in some detail for KUAFU-A and WSO UV
Space operations center: Shuttle interaction study extension, executive summary
NASA Technical Reports Server (NTRS)
1982-01-01
The Space Operations Center (SOC) is conceived as a permanent facility in low Earth orbit incorporating capabilities for space systems construction; space vehicle assembly, launching, recovery and servicing; and the servicing of co-orbiting satellites. The Shuttle Transportation System is an integral element of the SOC concept. It will transport the various elements of the SOC into space and support the assembly operation. Subsequently, it will regularly service the SOC with crew rotations, crew supplies, construction materials, construction equipment and components, space vehicle elements, and propellants and spare parts. The implications to the SOC as a consequence of the Shuttle supporting operations are analyzed. Programmatic influences associated with propellant deliveries, spacecraft servicing, and total shuttle flight operations are addressed.
NASA Astrophysics Data System (ADS)
Skøien, J. O.; Gottschalk, L.; Leblois, E.
2009-04-01
Whereas geostatistical and objective methods have mostly been developed for observations with point support or a regular support, runoff-related data can be assumed to have an irregular support in space, and sometimes also a temporal support. The correlations between observations, and between observations and the prediction location, are found through an integration of a point variogram or point correlation function, a method known as regularisation. Although this is a relatively simple method for observations with equal and regular support, it can be computationally demanding if the observations have irregular support. With improved computer speed, solving such integrations has become easier, but there can still be numerical problems that are not easily solved even with high-resolution computations. This can particularly be a problem in the hydrological sciences, where catchments are overlapping, the correlations are high, and small numerical errors can give ill-posed covariance matrices. The problem increases with an increasing number of spatial and/or temporal dimensions. Gottschalk [1993a; 1993b] suggested replacing the integration by a Taylor expansion, hence reducing the computation time considerably, and also expecting fewer numerical problems with the covariance matrices. In practice, the integrated correlation/semivariance between observations is replaced by correlations/semivariances based on the so-called Ghosh distance. Although Gottschalk and collaborators have used the Ghosh distance in other papers as well [Sauquet, et al., 2000a; Sauquet, et al., 2000b], the properties of the simplification have not been examined in detail. Hence, we will here analyse the replacement of the integration by the use of Ghosh distances, both in terms of the ability to reproduce regularised semivariogram and correlation values and in terms of the influence on the final interpolated maps. 
Comparisons will be performed both for real observations with a support (hydrological data) and for more hypothetical observations with regular supports, where analytical expressions for the regularised semivariances/correlations can in some cases be derived. The results indicate that the simplification is useful for spatial interpolation when the support of the observations has to be taken into account. The difference in semivariogram or correlation value between the simplified method and the full integration is small at short distances, increasing at larger distances. However, this is to some degree taken into account when fitting a model for the point process, so that the results after interpolation are less affected by the simplification. The method is of particular use if computation time is of importance, e.g. in the case of real-time mapping procedures. Gottschalk, L. (1993a) Correlation and covariance of runoff, Stochastic Hydrology and Hydraulics, 7, 85-101. Gottschalk, L. (1993b) Interpolation of runoff applying objective methods, Stochastic Hydrology and Hydraulics, 7, 269-281. Sauquet, E., L. Gottschalk, and E. Leblois (2000a) Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme, Hydrological Sciences Journal, 45, 799-815. Sauquet, E., I. Krasovskaia, and E. Leblois (2000b) Mapping mean monthly runoff pattern using EOF analysis, Hydrology and Earth System Sciences, 4, 79-93.
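The regularisation step discussed above (integrating a point variogram over observation supports) can be sketched by brute-force averaging. The Python code below is a toy for 1-D line supports with an assumed exponential point variogram; it shows the full integration that the Ghosh-distance approach approximates, not the simplification itself.

```python
import numpy as np

def avg_gamma(seg_a, seg_b, gamma, n=200):
    """Mean point semivariance between two 1-D line supports,
    approximated by averaging gamma over an n x n grid of point pairs."""
    ua = np.linspace(seg_a[0], seg_a[1], n)
    ub = np.linspace(seg_b[0], seg_b[1], n)
    return gamma(np.abs(ua[:, None] - ub[None, :])).mean()

def regularized_gamma(seg_a, seg_b, gamma, n=200):
    """Regularized (block) semivariance between supports A and B:
       gamma_reg(A,B) = mean_gamma(A,B) - 0.5*(mean_gamma(A,A) + mean_gamma(B,B))."""
    return (avg_gamma(seg_a, seg_b, gamma, n)
            - 0.5 * (avg_gamma(seg_a, seg_a, gamma, n)
                     + avg_gamma(seg_b, seg_b, gamma, n)))

# Assumed exponential point variogram (sill 1, range parameter 10):
gamma_exp = lambda h, sill=1.0, rng=10.0: sill * (1.0 - np.exp(-h / rng))
```

By construction the regularized semivariance of a support with itself is zero, and the within-support correction shrinks the between-support value relative to the raw average, which is the effect the Ghosh-distance simplification must reproduce.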
EVA assembly of large space structure element
NASA Technical Reports Server (NTRS)
Bement, L. J.; Bush, H. G.; Heard, W. L., Jr.; Stokes, J. W., Jr.
1981-01-01
The results of a test program to assess the potential of manned extravehicular activity (EVA) assembly of erectable space trusses are described. Seventeen tests were conducted in which six "space-weight" columns were assembled into a regular tetrahedral cell by a team of two "space"-suited test subjects. This cell represents the fundamental "element" of a tetrahedral truss structure. The tests were conducted under simulated zero-gravity conditions. Both manual and simulated remote manipulator system modes were evaluated. Articulation limits of the pressure suit and zero gravity could be accommodated by work stations with foot restraints. The results of this study have confirmed that astronaut EVA assembly of large, erectable space structures is well within man's capabilities.
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.
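The simplest of these identities, that the nearest-neighbour spacing distribution of the unit-mean-spacing 2-D Poisson process coincides with the GOE Wigner surmise $p(s) = (\pi/2)\,s\,e^{-\pi s^2/4}$, can be checked numerically. The sketch below is an approximation (a binomial point set in a unit box, with edge effects ignored); it compares the empirical second moment of the rescaled spacings with the surmise value $\langle s^2\rangle = 4/\pi \approx 1.27$.

```python
import numpy as np

def nn_spacings(n_points=2000, seed=0):
    """Nearest-neighbour distances of uniform random points in the unit square
    (a binomial approximation to the homogeneous Poisson process),
    rescaled to unit mean spacing."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, 1.0, size=(n_points, 2))
    # pairwise squared distances via the |u|^2 + |v|^2 - 2 u.v identity
    sq = (pts ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T
    np.fill_diagonal(d2, np.inf)          # exclude self-distances
    s = np.sqrt(np.clip(d2.min(axis=1), 0.0, None))
    return s / s.mean()

s = nn_spacings()
second_moment = (s ** 2).mean()   # Wigner surmise (GOE) predicts 4/pi ~ 1.273
```

The agreement is only approximate here because boundary points have slightly inflated nearest-neighbour distances; a periodic box or a guard region would sharpen the comparison.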