Sample records for projected two-point correlation

  1. Percolation analysis for cosmic web with discrete points

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung

    2016-03-01

    Percolation analysis has long been used to quantify the connectivity of the cosmic web. Unlike most previous works, which use density fields on grids, we study percolation analysis based on discrete points. Using a Friends-of-Friends (FoF) algorithm, we generate the S-bb relation between the fractional mass of the largest connected group (S) and the FoF linking length (bb). We propose a new model, the Probability Cloud Cluster Expansion Theory (PCCET), to relate the S-bb relation with correlation functions. We show that the S-bb relation reflects a combination of all orders of correlation functions. We have studied the S-bb relation with simulation and find that it is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with Halo Abundance Matching (HAM), we have generated a mock galaxy catalogue. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalogue with the latest galaxy catalogue from SDSS DR12, we have found significant differences in their S-bb relations. This indicates that the mock catalogue cannot accurately recover correlation functions of higher order than the two-point correlation function, which reveals the limit of the HAM method.
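
    The S-bb construction above lends itself to a compact numerical sketch. The following is a minimal illustration (our own, not the authors' code) of measuring the fraction S of points in the largest friends-of-friends group as a function of the linking length, using a k-d tree pair search; the function name and the choice of SciPy utilities are assumptions.

    ```python
    # Hypothetical sketch of an S(b) ("S-bb") measurement for a 3D point set via
    # friends-of-friends grouping; equal point masses are assumed for simplicity.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def s_of_b(points, linking_lengths):
        """Fraction S of points in the largest FoF-connected group, per linking length."""
        tree = cKDTree(points)
        n = len(points)
        fractions = []
        for b in linking_lengths:
            pairs = tree.query_pairs(r=b, output_type="ndarray")  # all pairs closer than b
            adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
            _, labels = connected_components(adj, directed=False)
            fractions.append(np.bincount(labels).max() / n)
        return np.array(fractions)

    # usage sketch: S = s_of_b(np.random.rand(5000, 3), np.linspace(0.01, 0.10, 20))
    ```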

  2. Spin correlations in quantum wires

    NASA Astrophysics Data System (ADS)

    Sun, Chen; Pokrovsky, Valery L.

    2015-04-01

    We consider theoretically spin correlations in a one-dimensional quantum wire with Rashba-Dresselhaus spin-orbit interaction (RDI). The correlations of noninteracting electrons display electron spin resonance at a frequency proportional to the RDI coupling. Interacting electrons, upon varying the direction of the external magnetic field, transit from the Luttinger liquid (LL) state to the spin-density wave (SDW) state. We show that the two-time total-spin correlations of these states are significantly different. In the LL, the projection of total spin onto the direction of the RDI-induced field is conserved and the corresponding correlator is equal to zero. The correlators of the two components perpendicular to the RDI field display a sharp electron-spin resonance driven by the RDI-induced intrinsic field. In contrast, in the SDW state, the longitudinal projection of spin dominates, whereas the transverse components are suppressed. This prediction indicates a simple way to diagnose the SDW in a quantum wire experimentally. We point out that the Luttinger model does not respect spin conservation, since it assumes an infinite Fermi sea. We propose a proper cutoff to correct this failure.

  3. Using galaxy pairs to investigate the three-point correlation function in the squeezed limit

    NASA Astrophysics Data System (ADS)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2017-11-01

    We investigate the three-point correlation function (3PCF) in the squeezed limit by considering galaxy pairs as discrete objects and cross-correlating them with the galaxy field. We develop an efficient algorithm using fast Fourier transforms to compute such cross-correlations and their associated pair-galaxy bias bp, g and the squeezed 3PCF coefficient Qeff. We implement our method using N-body cosmological simulations and a fiducial halo occupation distribution (HOD) and present the results in both real space and redshift space. In real space, we observe a peak in bp, g and Qeff at a pair separation of ∼2 Mpc, attributed to the fact that galaxy pairs at 2 Mpc separation trace the most massive dark matter haloes. We also see strong anisotropy in the bp, g and Qeff signals that tracks the large-scale filamentary structure. In redshift space, both the 2 Mpc peak and the anisotropy are significantly smeared out along the line of sight due to the finger-of-God effect. In both real space and redshift space, the squeezed 3PCF shows a factor of 2 variation, contradicting the hierarchical ansatz but offering rich information on the galaxy-halo connection. Thus, we explore the possibility of using the squeezed 3PCF to constrain the HOD. When we compare two simple HOD models that are closely matched in their projected two-point correlation function (2PCF), we do not yet see a strong variation in the 3PCF that is clearly disentangled from variations in the projected 2PCF. Nevertheless, we propose that more complicated HOD models, e.g. those incorporating assembly bias, can break degeneracies in the 2PCF and show a distinguishable squeezed 3PCF signal.
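
    As a rough illustration of the FFT-based cross-correlation step, the sketch below (our own assumptions: a periodic cubic box and overdensity fields already painted on a common grid; not the authors' pipeline) correlates a gridded "pair" field with the galaxy field and bins the result isotropically in separation.

    ```python
    import numpy as np

    def fft_cross_correlation(delta_pair, delta_gal, box_size, n_bins=20):
        """Isotropically binned cross-correlation of two overdensity fields on the
        same periodic cubic grid, computed via the convolution theorem."""
        n = delta_pair.shape[0]
        xi_3d = np.fft.irfftn(np.fft.rfftn(delta_pair) * np.conj(np.fft.rfftn(delta_gal)),
                              s=delta_pair.shape) / delta_pair.size
        # periodic separation of every grid cell from the zero-lag cell
        sep = np.fft.fftfreq(n, d=1.0 / box_size)
        rx, ry, rz = np.meshgrid(sep, sep, sep, indexing="ij")
        r = np.sqrt(rx**2 + ry**2 + rz**2).ravel()
        edges = np.linspace(0.0, box_size / 2.0, n_bins + 1)
        idx = np.digitize(r, edges) - 1
        xi = np.array([xi_3d.ravel()[idx == i].mean() for i in range(n_bins)])
        return 0.5 * (edges[1:] + edges[:-1]), xi
    ```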

  4. A Proposal for Research on Complex Media, Imaging and Uncertainty Quantification

    DTIC Science & Technology

    2013-11-26

    demonstration that the Green's function for wave propagation in an ergodic cavity can be recovered exactly by cross correlation of signals at two points ... the continuation of a project in which we have developed autofocus methods based on a phase space formulation (Wigner transform) of the array data and

  5. Dark Energy Survey Year 1 Results: Methodology and Projections for Joint Analysis of Galaxy Clustering, Galaxy Lensing, and CMB Lensing Two-point Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giannantonio, T.; et al.

    Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.

  6. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    NASA Astrophysics Data System (ADS)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.

  7. swot: Super W Of Theta

    NASA Astrophysics Data System (ADS)

    Coupon, Jean; Leauthaud, Alexie; Kilbinger, Martin; Medezinski, Elinor

    2017-07-01

    SWOT (Super W Of Theta) computes two-point statistics for very large data sets, based on “divide and conquer” algorithms: mainly, but not limited to, data storage in binary trees, approximation at large scales, parallelization (Open MPI), and bootstrap and jackknife resampling methods “on the fly”. It currently supports projected and 3D galaxy auto- and cross-correlations, galaxy-galaxy lensing, and weighted histograms.
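
    For orientation, a brute-force stand-in for the kind of two-point statistic SWOT computes is sketched below (our simplification, not the SWOT implementation): a 3D autocorrelation with the Landy-Szalay estimator built from cumulative k-d tree pair counts.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def landy_szalay_xi(data, randoms, r_edges):
        """3D two-point correlation function via the Landy-Szalay estimator.
        r_edges must start above zero so that self-pairs drop out of the first bin."""
        dt, rt = cKDTree(data), cKDTree(randoms)
        nd, nr = len(data), len(randoms)
        dd = np.diff(dt.count_neighbors(dt, r_edges)) / (nd * (nd - 1.0))
        rr = np.diff(rt.count_neighbors(rt, r_edges)) / (nr * (nr - 1.0))
        dr = np.diff(dt.count_neighbors(rt, r_edges)) / (nd * float(nr))
        return (dd - 2.0 * dr + rr) / rr

    # usage sketch: xi = landy_szalay_xi(galaxy_xyz, random_xyz, np.logspace(-1, 1.5, 21))
    ```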

  8. Percolation analysis for cosmic web with discrete points

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajun; Cheng, Dalong; Chu, Ming-Chung

    2018-01-01

    Percolation analysis has long been used to quantify the connectivity of the cosmic web. Most of the previous work is based on density fields on grids. By smoothing into fields, we lose information about galaxy properties like shape or luminosity. The lack of mathematical modeling also limits our understanding of the percolation analysis. To overcome these difficulties, we have studied percolation analysis based on discrete points. Using a friends-of-friends (FoF) algorithm, we generate the S-bb relation between the fractional mass of the largest connected group (S) and the FoF linking length (bb). We propose a new model, the probability cloud cluster expansion theory, to relate the S-bb relation with correlation functions. We show that the S-bb relation reflects a combination of all orders of correlation functions. Using N-body simulation, we find that the S-bb relation is robust against redshift distortion and incompleteness in observation. From the Bolshoi simulation, with halo abundance matching (HAM), we have generated a mock galaxy catalog. Good matching of the projected two-point correlation function with observation is confirmed. However, comparing the mock catalog with the latest galaxy catalog from the Sloan Digital Sky Survey (SDSS) Data Release 12 (DR12), we have found significant differences in their S-bb relations. This indicates that the mock galaxy catalog cannot accurately retain correlation functions of higher order than the two-point correlation function, which reveals the limit of the HAM method. As a new measurement, the S-bb relation is applicable to a wide range of data types, is fast to compute, is robust against redshift distortion and incompleteness, and contains information on all orders of correlation functions.

  9. A Kinematically Consistent Two-Point Correlation Function

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1998-01-01

    A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is set by the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allows an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the devised expression, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.

  10. Projection correlation between two random vectors.

    PubMed

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is [Formula: see text]-consistent if the two random vectors are independent and root-[Formula: see text]-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.

  11. Theories and applications of second-order correlation of longitudinal velocity increments at three points in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Wu, J. Z.; Fang, L.; Shao, L.; Lu, L. P.

    2018-06-01

    In order to introduce new physics into traditional two-point correlations, we define the second-order correlation of longitudinal velocity increments at three points and obtain analytical expressions in isotropic turbulence. By introducing the Kolmogorov 4/5 law, this three-point correlation explicitly contains second- and third-order velocity moments, which correspond to energy and energy transfer, respectively. Their combination then carries additional information on non-equilibrium turbulence compared to two-point correlations. Moreover, this three-point correlation exposes the underlying inconsistency between numerical interpolation and the three-point scaling law in numerical calculations, and inspires a preliminary model to correct this problem in isotropic turbulence.

  12. VizieR Online Data Catalog: SDSS Stripe 82 VLA 1-2GHz survey (Heywood+, 2016)

    NASA Astrophysics Data System (ADS)

    Heywood, I.; Jarvis, M. J.; Baker, A. J.; Bannister, K. W.; Carvalho, C. S.; Hardcastle, M.; Hilton, M.; Moodley, K.; Smirnov, O. M.; Smith, D. J. B.; White, S. V.; Wollack, E. J.

    2017-11-01

    The data (Project code: 13B-272) were taken with the array in the CnB configuration. Standard wide-band mode was employed with the correlator splitting the 1-2 GHz of frequency coverage into 16 spectral windows (SPWs) of 64 × 1 MHz channels each, and an integration time per visibility point of 3 s. A total of 1368 target pointings were scheduled, 608 and 760 in the eastern and western regions, respectively, coincident with the two eastern and western areas of the existing Hodge et al. (2011, Cat. J/AJ/142/3) data. (2 data files).

  13. Wireless Channel Characterization: Modeling the 5 GHz Microwave Landing System Extension Band for Future Airport Surface Communications

    NASA Technical Reports Server (NTRS)

    Matolak, D. W.; Apaza, Rafael; Foore, Lawrence R.

    2006-01-01

    We describe a recently completed wideband wireless channel characterization project for the 5 GHz Microwave Landing System (MLS) extension band, for airport surface areas. This work included mobile measurements at large and small airports, and fixed point-to-point measurements. Mobile measurements were made via transmission from the air traffic control tower (ATCT), or from an airport field site (AFS), to a receiving ground vehicle on the airport surface. The point-to-point measurements were between ATCT and AFSs. Detailed statistical channel models were developed from all these measurements. Measured quantities include propagation path loss and power delay profiles, from which we obtain delay spreads, frequency domain correlation (coherence bandwidths), fading amplitude statistics, and channel parameter correlations. In this paper we review the project motivation, measurement coordination, and illustrate measurement results. Example channel modeling results for several propagation conditions are also provided, highlighting new findings.

  14. Peculiar velocity effect on galaxy correlation functions in nonlinear clustering regime

    NASA Astrophysics Data System (ADS)

    Matsubara, Takahiko

    1994-03-01

    We studied the distortion of the apparent distribution of galaxies in redshift space contaminated by the peculiar velocity effect. Specifically, we obtained the expressions for N-point correlation functions in redshift space with a given functional form for the velocity distribution f(v), and evaluated two- and three-point correlation functions quantitatively. The effect of velocity correlations is also discussed. When the two-point correlation function in real space has a power-law form, ξ_r(r) ∝ r^{-γ}, the redshift-space counterpart on small scales also has a power-law form but with an increased power-law index: ξ_s(s) ∝ s^{1-γ}. When the three-point correlation function has the hierarchical form and the two-point correlation function has the power-law form in real space, the hierarchical form of the three-point correlation function is almost preserved in redshift space. The above analytic results are compared with the direct analysis based on N-body simulation data for cold dark matter models. Implications for the hierarchical clustering ansatz are discussed in detail.

  15. Discriminating topology in galaxy distributions using network analysis

    NASA Astrophysics Data System (ADS)

    Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl

    2016-07-01

    The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher order correlations to break degeneracies. We demonstrate that an alternate approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^{-1.5}. Then, we generate Lévy walks matching the correlation function and abundance of the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, filamentary structures are absent in the Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure diameter, giant component, and transitivity from networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is not quantitatively a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
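
    The network construction described above can be prototyped in a few lines. The sketch below (our illustration with SciPy and NetworkX, not the authors' code) links points within a chosen linking length and reports the three network quantities named in the abstract; the linking-length value passed in is an assumption.

    ```python
    import networkx as nx
    from scipy.spatial import cKDTree

    def network_statistics(points, linking_length):
        """Build a FoF-style graph on 3D positions and measure basic topology."""
        g = nx.Graph()
        g.add_nodes_from(range(len(points)))
        g.add_edges_from(cKDTree(points).query_pairs(r=linking_length))
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        return {
            "giant_component_fraction": giant.number_of_nodes() / g.number_of_nodes(),
            "transitivity": nx.transitivity(g),
            "diameter": nx.diameter(giant),  # longest shortest path within the giant component
        }
    ```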

  16. Two-point correlation functions in inhomogeneous and anisotropic cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcori, Oton H.; Pereira, Thiago S., E-mail: otonhm@hotmail.com, E-mail: tspereira@uel.br

    Two-point correlation functions are ubiquitous tools of modern cosmology, appearing in disparate topics ranging from cosmological inflation to late-time astrophysics. When the background spacetime is maximally symmetric, invariance arguments can be used to fix the functional dependence of this function as the invariant distance between any two points. In this paper we introduce a novel formalism which fixes this functional dependence directly from the isometries of the background metric, thus allowing one to quickly assess the overall features of Gaussian correlators without resorting to the full machinery of perturbation theory. As an application we construct the CMB temperature correlation function in one inhomogeneous (namely, an off-center LTB model) and two spatially flat and anisotropic (Bianchi) universes, and derive their covariance matrices in the limit of almost Friedmannian symmetry. We show how the method can be extended to arbitrary N-point correlation functions and illustrate its use by constructing three-point correlation functions in some simple geometries.

  17. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    NASA Astrophysics Data System (ADS)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  18. Analysis of the two-point velocity correlations in turbulent boundary layer flows

    NASA Technical Reports Server (NTRS)

    Oberlack, M.

    1995-01-01

    The general objective of the present work is to explore the use of Rapid Distortion Theory (RDT) in analysis of the two-point statistics of the log-layer. RDT is applicable only to unsteady flows where the non-linear turbulence-turbulence interaction can be neglected in comparison to linear turbulence-mean interactions. Here we propose to use RDT to examine the structure of the large energy-containing scales and their interaction with the mean flow in the log-region. The contents of the work are twofold: First, two-point analysis methods will be used to derive the law-of-the-wall for the special case of zero mean pressure gradient. The basic assumptions needed are one-dimensionality in the mean flow and homogeneity of the fluctuations. It will be shown that a formal solution of the two-point correlation equation can be obtained as a power series in the von Karman constant, known to be on the order of 0.4. In the second part, a detailed analysis of the two-point correlation function in the log-layer will be given. The fundamental set of equations and a functional relation for the two-point correlation function will be derived. An asymptotic expansion procedure will be used in the log-layer to match Kolmogorov's universal range and the one-point correlations to the inviscid outer region valid for large correlation distances.

  19. Statistical analysis of atmospheric turbulence about a simulated block building

    NASA Technical Reports Server (NTRS)

    Steely, S. L., Jr.

    1981-01-01

    An array of towers instrumented to measure the three components of wind speed was used to study atmospheric flow about a simulated block building. Two-point spacetime correlations of the longitudinal velocity component were computed along with two-point spatial correlations. These correlations are in good agreement with fundamental concepts of fluid mechanics. The two-point spatial correlations computed directly were compared with correlations predicted by Taylor's hypothesis and excellent agreement was obtained at the higher levels which were out of the building influence. The correlations fall off significantly in the building wake but recover beyond the wake to essentially the same values in the undisturbed, higher regions.

  20. Point-point and point-line moving-window correlation spectroscopy and its applications

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced to perform moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst P-L correlation provides the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which has been very difficult using conventional display of FTIR spectra.
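
    As a loose sketch of the moving-window idea, the snippet below slides a window along the perturbation axis and computes a synchronous correlation map within each window. This generic formulation is our assumption; the paper's exact P-P and P-L algorithms may differ.

    ```python
    import numpy as np

    def mw2d_synchronous(spectra, window=5):
        """spectra: array (n_perturbations, n_wavenumbers).
        Returns (n_windows, n_wavenumbers): the diagonal (autocorrelation)
        intensities of the synchronous 2D map within each moving window."""
        n = spectra.shape[0]
        maps = []
        for start in range(n - window + 1):
            block = spectra[start:start + window]
            dynamic = block - block.mean(axis=0)          # mean-centred dynamic spectra
            sync = dynamic.T @ dynamic / (window - 1)     # synchronous correlation matrix
            maps.append(np.diag(sync))
        return np.array(maps)
    ```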

  1. Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100

    NASA Astrophysics Data System (ADS)

    Briggs, J. P.; Pennycook, S. J.; Fergusson, J. R.; Jäykkä, J.; Shellard, E. P. S.

    2016-04-01

    We present a case study describing efforts to optimise and modernise "Modal", the simulation and analysis pipeline used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum (or three-point correlator) of the cosmic microwave background radiation. We focus on one particular element of the code: the projection of bispectra from the end of inflation to the spherical shell at decoupling, which defines the CMB we observe today. This code involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular domain containing a sparse grid. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the overall dimensionality from four to three. The introduction of separable functions also solves the issue of the non-rectangular sparse grid. This separable method can become unstable in certain scenarios and so the slower non-separable integral must be calculated instead. We present a discussion of the optimisation of both approaches. We demonstrate significant speed-ups of ≈100×, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP hybrid code is capable of executing on clusters containing processors and/or coprocessors, with strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3× and that running the same code across a combination of both microarchitectures improves performance-per-node by a factor of 3.38×. By making bispectrum calculations competitive with those for the power spectrum (or two-point correlator) we are now able to consider joint analysis for cosmological science exploitation of new data.
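
    The computational gain from separability can be seen in a toy example (ours, unrelated to the Modal code itself): when the integrand of a multi-dimensional integral factorises, the nested quadrature collapses to a product of one-dimensional quadratures, turning O(N^3) work into O(N).

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 100)
    f, g, h = np.sin(np.pi * x), np.exp(-x), x ** 2

    # brute-force triple integral of f(x) g(y) h(z) over the unit cube
    brute = np.trapz(np.trapz(np.trapz(
        f[:, None, None] * g[None, :, None] * h[None, None, :], x), x), x)

    # separable evaluation: product of three one-dimensional integrals
    separable = np.trapz(f, x) * np.trapz(g, x) * np.trapz(h, x)

    assert np.isclose(brute, separable)
    ```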

  2. Climate Signal Detection in Wine Quality Using Gridded vs. Station Data in North-East Hungary

    NASA Astrophysics Data System (ADS)

    Mika, Janos; Razsi, Andras; Gal, Lajos

    2017-04-01

    The grapevine is one of the oldest cultivated plants. Today's viticultural regions for quality wine production are located in relatively narrow geographical and therefore climatic niches. Our target area, the Matra Region in NE Hungary, is fairly close to the edge of optimal wine production as far as its climate conditions are concerned. Fifty years (1961-2010) of wine quality data (natural sugar content, in weight % of must) are analysed and compared to parallel climate variables. Two sets of station-based monthly temperature, sunshine duration and precipitation data, taken from the neighbouring stations Eger-Kőlyuktető (1961-2010) and Kompolt (1976-2006), are used in 132 combinations, together with daily grid-point data provided by the CarpatClim Project (www.carpatclim-eu.org/pages/home). By now it is clear that (1) wine quality is in significant negative correlation with the annual precipitation and in positive correlation with temperature and sunshine duration. (2) Applying a wide combination of monthly data we obtain even stronger correlations (higher significance according to t-tests) even from the station-based data, but it is difficult to select an optimum model from the many proper combinations differing only slightly in performance over the test sample. (3) The interpolated site-specific areal averages from the grid-point data provide even better results and stronger differences between the best models and the few other candidates. (4) Further improvement of the statistical signal detection capacity of the above climate variables by using 5-day averages points to the strong vulnerability of wine quality to climate anomalies in some key phenological phases of the investigated grapevine mixes. Enhanced spatial and temporal resolution provides a much better fit to the observed wine quality data. The study has been supported by the OTKA-113209 national project.

  3. Axioms for quantum mechanics: relativistic causality, retrocausality, and the existence of a classical limit

    NASA Astrophysics Data System (ADS)

    Rohrlich, Daniel

    Y. Aharonov and A. Shimony both conjectured that two axioms - relativistic causality ("no superluminal signalling") and nonlocality - so nearly contradict each other that only quantum mechanics reconciles them. Can we indeed derive quantum mechanics, at least in part, from these two axioms? No: "PR-box" correlations show that quantum correlations are not the most nonlocal correlations consistent with relativistic causality. Here we replace "nonlocality" with "retrocausality" and supplement the axioms of relativistic causality and retrocausality with a natural and minimal third axiom: the existence of a classical limit, in which macroscopic observables commute. That is, just as quantum mechanics has a classical limit, so must any generalization of quantum mechanics. In this limit, PR-box correlations violate relativistic causality. Generalized to all stronger-than-quantum bipartite correlations, this result is a derivation of Tsirelson's bound (a theorem of quantum mechanics) from the three axioms of relativistic causality, retrocausality, and the existence of a classical limit. Although the derivation does not assume quantum mechanics, it points to the Hilbert space structure that underlies quantum correlations. I thank the John Templeton Foundation (Project ID 43297) and the Israel Science Foundation (Grant No. 1190/13) for support.

  4. Theory of two-point correlations of jet noise

    NASA Technical Reports Server (NTRS)

    Ribner, H. S.

    1976-01-01

    A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.

  5. Ion photon emission microscope

    DOEpatents

    Doyle, Barney L.

    2003-04-22

    An ion beam analysis system that creates microscopic multidimensional image maps of the effects of high energy ions from an unfocussed source upon a sample by correlating the exact entry point of an ion into a sample by projection imaging of the ion-induced photons emitted at that point with a signal from a detector that measures the interaction of that ion within the sample. The emitted photons are collected in the lens system of a conventional optical microscope, and projected on the image plane of a high resolution single photon position sensitive detector. Position signals from this photon detector are then correlated in time with electrical effects, including the malfunction of digital circuits, detected within the sample that were caused by the individual ion that created these photons initially.

  6. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in places where the point cloud is not sufficiently dense. Those limits can be overcome by deformation analyses that exploit the original 3D point clouds directly, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but it shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.

  7. The mean density and two-point correlation function for the CfA redshift survey slices

    NASA Technical Reports Server (NTRS)

    De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.

    1988-01-01

    The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample.

  8. Galaxy-galaxy weak gravitational lensing in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Li, Baojiu; Shirasaki, Masato

    2018-03-01

    We present an analysis of galaxy-galaxy weak gravitational lensing (GGL) in chameleon f(R) gravity - a leading candidate of non-standard gravity models. For the analysis, we have created mock galaxy catalogues based on dark matter haloes from two sets of numerical simulations, using a halo occupation distribution (HOD) prescription which allows a redshift dependence of galaxy number density. To make a fairer comparison between the f(R) and Λ cold dark matter (ΛCDM) models, their HOD parameters are tuned so that the galaxy two-point correlation functions in real space (and therefore the projected two-point correlation functions) match. While the f(R) model predicts an enhancement of the convergence power spectrum by up to ˜30 per cent compared to the standard ΛCDM model with the same parameters, the maximum enhancement of GGL is only half as large and less than 5 per cent on separations above ˜1-2 h^{-1} Mpc, because the latter is a cross-correlation of the shear (or matter, which is more strongly affected by modified gravity) and galaxy (which is weakly affected given the good match between galaxy autocorrelations in the two models) fields. We also study the possibility of reconstructing the matter power spectrum by combining GGL and galaxy clustering in f(R) gravity. We find that the galaxy-matter cross-correlation coefficient remains at unity down to ˜2-3 h^{-1} Mpc at relevant redshifts even in f(R) gravity, indicating that a joint analysis of GGL and galaxy clustering can be a powerful probe of matter density fluctuations in chameleon gravity. The scale dependence of the model differences in their predictions of GGL can potentially allow us to break the degeneracy between f(R) gravity and other cosmological parameters such as Ωm and σ8.

  9. Approach to the origin of turbulence on the basis of two-point kinetic theory

    NASA Technical Reports Server (NTRS)

    Tsuge, S.

    1974-01-01

    Equations for the fluctuation correlation in an incompressible shear flow are derived on the basis of kinetic theory, utilizing the two-point distribution function which obeys the BBGKY hierarchy equation truncated with the hypothesis of 'ternary' molecular chaos. The step from the molecular to the hydrodynamic description is accomplished by a moment expansion which is a two-point version of the thirteen-moment method, and which leads to a series of correlation equations, viz., the two-point counterparts of the continuity equation, the Navier-Stokes equation, etc. For almost parallel shearing flows the two-point equation is separable and reduces to two Orr-Sommerfeld equations with different physical implications.

  10. NOx Emissions Characteristics and Correlation Equations of Two P and W's Axially Staged Sector Combustors Developed Under NASA Environmentally Responsible Aviation (ERA) Project

    NASA Technical Reports Server (NTRS)

    He, Zhuohui J.

    2017-01-01

    Two P&W (Pratt & Whitney) axially staged sector combustors have been developed under NASA's Environmentally Responsible Aviation (ERA) project. One combustor was developed under ERA Phase I, and the other was developed under ERA Phase II. Nitrogen oxides (NOx) emissions characteristics and correlation equations for these two sector combustors are reported in this article. The Phase I design was optimized for NOx emissions reduction potential, while the Phase II design was more practical and robust. Multiple injection points and fuel staging strategies are used in the combustor design. Pilot-stage injectors are located on the front dome plate of the combustor, and main-stage injectors are positioned on the top and bottom (Phase I) or on the top only (Phase II) of the combustor liners downstream. The low-power configuration uses only the pilot-stage injectors. Main-stage injectors are added in the high-power configuration to help distribute fuel more evenly and achieve lean burn throughout the combustor, yielding very low NOx emissions. The ICAO (International Civil Aviation Organization) landing-takeoff NOx emissions are verified to be 88 percent (Phase I) and 76 percent (Phase II) below the ICAO CAEP/6 (Committee on Aviation Environmental Protection 6th Meeting) standard, exceeding the ERA project goal of a 75 percent reduction, and the combustors proved to have stable combustion with room to maneuver on fuel flow splits for operability.

  11. The Impact of Variability of Selected Geological and Mining Parameters on the Value and Risks of Projects in the Hard Coal Mining Industry

    NASA Astrophysics Data System (ADS)

    Kopacz, Michał

    2017-09-01

    The paper attempts to assess the impact of the variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on the nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with the use of an empirical copula. The calculations take into account the uncertainty in the parameters of the empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of the volatility and correlation of deposit parameters was analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the individual impact of the correlation on the NPV and IRR. For this purpose, a differential approach was used, allowing the possible errors in the calculation of these measures to be determined in numerical terms. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4% and 0.4 percentage points, respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium under the given calculation conditions of the "X" deposit, and for correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations limit the range of variation of the geological parameters and economic results (the empirical copula reduces the NPV and IRR in the probabilistic approach). However, this is due to the adjustment of the calculation to conditions similar to those prevailing in the deposit.
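
    A minimal sketch of the simulation loop described above follows. It is our own simplification: rows of the parameter table are resampled jointly as a stand-in for the empirical-copula coupling, and the cash-flow model and discount rate are placeholders rather than the paper's model.

    ```python
    import numpy as np

    def npv(cash_flows, rate):
        """Net present value; the year-0 cash flow is undiscounted."""
        t = np.arange(len(cash_flows))
        return float(np.sum(np.asarray(cash_flows) / (1.0 + rate) ** t))

    def bootstrap_npv(param_table, cash_flow_model, rate=0.08, n_draws=2500, seed=0):
        """param_table: (n_obs, n_params) joint observations of deposit parameters.
        Resampling whole rows preserves their empirical dependence structure."""
        rng = np.random.default_rng(seed)
        rows = rng.integers(0, len(param_table), size=n_draws)
        return np.array([npv(cash_flow_model(param_table[i]), rate) for i in rows])
    ```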

  12. Single shot laser speckle based 3D acquisition system for medical applications

    NASA Astrophysics Data System (ADS)

    Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young

    2018-06-01

    The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts, such as laser line profiling or structured light scanning, require a series of images/frames. Movement of the patients during the scanning process often leads to inaccurate measurements due to the sequential image acquisition. Single-shot structured-light techniques are robust to motion, but their prevalent challenges are low point density and algorithm complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that establish stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and a sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.

  13. The correlation function for density perturbations in an expanding universe. II - Nonlinear theory

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1977-01-01

    A formalism is developed to find the two-point and higher-order correlation functions for a given distribution of sizes and shapes of perturbations which are randomly placed in three-dimensional space. The perturbations are described by two parameters, such as central density and size, and the two-point correlation function is explicitly related to the luminosity function of groups and clusters of galaxies.

  14. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which makes the comparison of two CADe systems involve multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods both in terms of the FWER and power.
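
    For reference, two of the FWER-controlling procedures named above can be sketched as follows; Hochberg's procedure is used here as a common step-up variant, and the adjusted step-up method that uses estimated correlations is not reproduced.

    ```python
    import numpy as np

    def bonferroni(p_values, alpha=0.05):
        """Reject H_i when p_i < alpha / m."""
        p = np.asarray(p_values)
        return p < alpha / len(p)

    def hochberg_step_up(p_values, alpha=0.05):
        """Hochberg's step-up procedure: scan from the largest p-value downwards."""
        p = np.asarray(p_values)
        m = len(p)
        reject = np.zeros(m, dtype=bool)
        for rank, idx in enumerate(np.argsort(p)[::-1]):   # rank 0 -> largest p
            if p[idx] <= alpha / (rank + 1):
                reject[np.argsort(p)[: m - rank]] = True   # reject this and all smaller p
                break
        return reject
    ```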

  15. Redshift-space distortions of group and galaxy correlations in the Updated Zwicky Catalog

    NASA Astrophysics Data System (ADS)

    Padilla, N. D.; Merchán, M.; García Lambas, D.; Maia, M. G.

    We calculate two-point correlation functions of galaxies and groups of galaxies selected in three dimensions from the Updated Zwicky Catalog (UZC). The redshift-space distortion of the correlation function ξ(σ,π) in the directions parallel and perpendicular to the line of sight, induced by pairwise group peculiar velocities, is evaluated. Two methods are used to characterize the pairwise velocity field. The first method consists of fitting the observed ξ(σ,π) with a distorted model with an exponential pairwise velocity distribution, in fixed σ bins. The second method compares the contours of constant predicted correlation function of this model with the data. The results are consistent with a one-dimensional pairwise rms velocity dispersion of groups of 250 ± 110 km/s. We find that the UZC galaxy pairwise velocity dispersion is 460 ± 35 km/s. Such findings point towards a smoothly varying peculiar velocity field from galaxies to systems of galaxies, as expected in a hierarchical scenario of structure formation. We estimate the real-space correlation function in the power-law approximation ξ(r) = (r/r_0)^γ for groups and galaxies in the UZC. We obtain the correlation length, r_0, from the projected correlation function W(σ) = ∫_{-∞}^{+∞} ξ(σ,π) dπ = 2∫_0^{∞} ξ(σ,π) dπ, using the values of γ derived from the correlation function in projected separations ω(σ). The best-fitting parameters are γ = -1.89 ± 0.17 and r_0 = 9.7 ± 4.5 h^{-1} Mpc for groups, and γ = -2.00 ± 0.03, r_0 = 5.29 ± 0.21 h^{-1} Mpc for galaxies. We carried out an estimate of the parameter β = Ω^{0.6}/b for groups and galaxies using the linear-regime approximation relating the real- and redshift-space correlation functions. We find β_galaxies = 0.51 ± 0.15 for galaxies, in agreement with previous works, while for groups we obtain a noisy estimate β < 1.5. We have tested our methods on mock UZC catalogs taken from N-body simulations. The results of these tests show that the conclusions derived from the application of our methods to the observations are reliable and provide a suitable characterization of the spatial correlation and pairwise velocities of groups and galaxies. We also find that the second method, developed in this work, provides more stable and precise results.
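
    The projection integral quoted above is straightforward to evaluate numerically once ξ(σ,π) has been estimated on a grid; a minimal sketch (ours, with the integration simply truncated at the grid edge rather than at a formal π_max) follows.

    ```python
    import numpy as np

    def projected_correlation(xi_sigma_pi, pi_grid):
        """xi_sigma_pi: array (n_sigma, n_pi) of xi(sigma, pi) sampled at pi >= 0.
        Returns W(sigma) = 2 * integral of xi over pi, by the trapezoid rule."""
        return 2.0 * np.trapz(xi_sigma_pi, pi_grid, axis=1)
    ```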

  16. The Correlation Dimension of Young Stars in Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Odekon, Mary Crone

    2006-11-01

    We present the correlation dimension of resolved young stars in four actively star-forming dwarf galaxies that are sufficiently resolved and transparent to be modeled as projections of three-dimensional point distributions. We use data from the Hubble Space Telescope archive; photometry for one of the galaxies, UGCA 292, is presented here for the first time. We find that there are statistically distinguishable differences in the nature of stellar clustering among the sample galaxies. The young stars of VII Zw 403, the brightest galaxy in the sample, have the highest value for the correlation dimension and the most dramatic decrease with logarithmic scale, falling from 1.68 ± 0.14 to 0.10 ± 0.05 over less than a factor of 10 in r. This decrease is consistent with the edge effect produced by a projected Poisson distribution within a 2:2:1 ellipsoid. The young stars in UGC 4483, the faintest galaxy in the sample, exhibit very different behavior, with a constant value of about 0.5 over this same range in r, extending nearly to the edge of the distribution. This behavior may indicate either a scale-free distribution with an unusually low correlation dimension or a two-component (not scale-free) combination of cluster and field stars.

  17. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the estimate of the projection of the state that is being estimated. It is demonstrated that the new method successfully overcomes previously encountered difficulties.
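
    The practical upshot of the "traditional filter" described above is that the covariance of the unresolved scales, evaluated at the observation points, enters the analysis like an extra observation-error term. A generic sketch of such an update is given below; the matrices, their shapes, and the diagonal unresolved-scale covariance are our assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def kalman_update(x_b, P_b, y, H, R_instrument, R_unresolved):
        """Standard Kalman analysis step with the observation-error covariance
        augmented by a term representing unresolved scales."""
        R = R_instrument + R_unresolved
        S = H @ P_b @ H.T + R
        K = P_b @ H.T @ np.linalg.inv(S)            # Kalman gain
        x_a = x_b + K @ (y - H @ x_b)               # analysis state
        P_a = (np.eye(len(x_b)) - K @ H) @ P_b      # analysis covariance
        return x_a, P_a
    ```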

  18. The 80 megawatt wind power project at Kahuku Point, Hawaii

    NASA Technical Reports Server (NTRS)

    Laessig, R. R.

    1982-01-01

    Windfarms Ltd. is developing the two largest wind energy projects in the world. Designed to produce 80 megawatts at Kahuku Point, Hawaii and 350 megawatts in Solano County, California, these projects will be the prototypes for future large-scale wind energy installations throughout the world.

  19. Precision calculations of the cosmic shear power spectrum projection

    NASA Astrophysics Data System (ADS)

    Kilbinger, Martin; Heymans, Catherine; Asgari, Marika; Joudaki, Shahab; Schneider, Peter; Simon, Patrick; Van Waerbeke, Ludovic; Harnois-Déraps, Joachim; Hildebrandt, Hendrik; Köhlinger, Fabian; Kuijken, Konrad; Viola, Massimo

    2017-12-01

    We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak-lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Telescope Lensing Survey. We find that the reported tension with Planck cosmic microwave background temperature anisotropy results cannot be alleviated. For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for ℓ > 3, with the corresponding errors an order of magnitude below cosmic variance for all ℓ. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package NICAEA at http://www.cosmostat.org/software/nicaea.
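
    As a small illustration of the flat-sky transform mentioned above, the sketch below evaluates ξ+(θ) = (1/2π) ∫ dℓ ℓ C(ℓ) J0(ℓθ) by plain quadrature; it is a slow stand-in for the fast Hankel transform, and the power spectrum used is a placeholder of our own choosing.

    ```python
    import numpy as np
    from scipy.special import j0

    def xi_plus(theta_rad, ell, c_ell):
        """Flat-sky xi_+(theta) from an angular power spectrum C(ell), trapezoid rule."""
        integrand = ell * c_ell * j0(np.outer(theta_rad, ell))
        return np.trapz(integrand, ell, axis=1) / (2.0 * np.pi)

    # toy usage with a placeholder spectrum
    ell = np.logspace(1, 4, 2000)
    c_ell = 1e-9 * (ell / 1000.0) ** -1.0
    theta = np.radians([0.1, 0.5, 1.0])   # degrees -> radians
    print(xi_plus(theta, ell, c_ell))
    ```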

  20. Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.

    1987-01-01

Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe, standard deviations and probability distribution of differences in turbulence measured between probes and auto- and two-point spatial correlations and spectra. Procedures associated with calculations of two-point spatial correlations and spectra utilizing data were addressed. Methods and correction procedures for assuring the accuracy of aircraft measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.

  1. Avalanche of entanglement and correlations at quantum phase transitions.

    PubMed

    Krutitsky, Konstantin V; Osterloh, Andreas; Schützhold, Ralf

    2017-06-16

    We study the ground-state entanglement in the quantum Ising model with nearest neighbor ferromagnetic coupling J and find a sequential increase of entanglement depth d with growing J. This entanglement avalanche starts with two-point entanglement, as measured by the concurrence, and continues via the three-tangle and four-tangle, until finally, deep in the ferromagnetic phase for J = ∞, arriving at a pure L-partite (GHZ type) entanglement of all L spins. Comparison with the two, three, and four-point correlations reveals a similar sequence and shows strong ties to the above entanglement measures for small J. However, we also find a partial inversion of the hierarchy, where the four-point correlation exceeds the three- and two-point correlations, well before the critical point is reached. Qualitatively similar behavior is also found for the Bose-Hubbard model, suggesting that this is a general feature of a quantum phase transition. This should be taken into account in the approximations starting from a mean-field limit.

  2. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  3. Thermal form factor approach to the ground-state correlation functions of the XXZ chain in the antiferromagnetic massive regime

    NASA Astrophysics Data System (ADS)

    Dugave, Maxime; Göhmann, Frank; Kozlowski, Karol K.; Suzuki, Junji

    2016-09-01

We use the form factors of the quantum transfer matrix in the zero-temperature limit in order to study the two-point ground-state correlation functions of the XXZ chain in the antiferromagnetic massive regime. We obtain novel form factor series representations of the correlation functions which differ from those derived either from the q-vertex-operator approach or from the algebraic Bethe Ansatz approach to the usual transfer matrix. We advocate that our novel representations are numerically more efficient and allow for a straightforward calculation of the large-distance asymptotic behaviour of the two-point functions. Keeping control over the temperature corrections to the two-point functions we see that these are of order T^∞ in the whole antiferromagnetic massive regime. The isotropic limit of our result yields a novel form factor series representation for the two-point correlation functions of the XXX chain at zero magnetic field. Dedicated to the memory of Petr Petrovich Kulish.

  4. Dynamical pairwise entanglement and two-point correlations in the three-ligand spin-star structure

    NASA Astrophysics Data System (ADS)

    Motamedifar, M.

    2017-10-01

We consider the three-ligand spin-star structure with homogeneous Heisenberg interactions (XXX-3LSSS) in the framework of dynamical pairwise entanglement. It is shown that the time evolution of the central qubit 'one-particle' state (COPS) brings about the generation of quantum W states at periodic time instants. On the contrary, W states cannot be generated from the time evolution of a ligand 'one-particle' state (LOPS). We also investigate the dynamical behavior of two-point quantum correlations as well as the expectation values of the different spin components for each element in the XXX-3LSSS. It is found that when a W state is generated, the same value of the concurrence between any two arbitrary qubits arises from the xx and yy two-point quantum correlations. In contrast, the zz quantum correlation between any two qubits vanishes at these time instants.

  5. The role of large scale motions on passive scalar transport

    NASA Astrophysics Data System (ADS)

    Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano

    2014-11-01

We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of the velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of the large-scale features of turbulence and the temperature field.

  6. Promoting gender parity in basic education: Lessons from a technical cooperation project in Yemen

    NASA Astrophysics Data System (ADS)

    Yuki, Takako; Mizuno, Keiko; Ogawa, Keiichi; Mihoko, Sakai

    2013-06-01

    Many girls are not sent to school in Yemen, despite basic education being free as well as compulsory for all children aged 6-15. Aiming to improve girls' enrolment by increasing parental and community involvement, the Japan International Cooperation Agency (JICA) offered a technical cooperation project in June 2005 called Broadening Regional Initiative for Developing Girls' Education (BRIDGE). Phase 1 of this project ran for three and a half years, piloting a participatory school management model supported by school grants in six districts of the Taiz Governorate in the Southwest of Yemen. To find out how successful this approach has been in a traditional society, the authors of this paper analysed the gender parity index (GPI) of the project's pilot schools. Based on data collected at three points in time (in the initial and final years of the project, and two years after the project's end), their findings suggest that interventions in school management which strongly emphasise girls' education can be effective in improving gender parity rather quickly, regardless of the schools' initial conditions. However, the authors also observe that the pilot schools' post-project performance in terms of gender parity is mixed. While the local government allocated budgets for school grants to all pilot schools even after the project's end, training and monitoring activities were cut back. The authors further observe that the variation in performance appears to be significantly correlated with school leaders' initial perceptions of gender equality and with the number of female teachers employed. These findings point to the importance of providing schools with continuous long-term guidance and of monitoring those which implement school improvement programmes.

  7. These two NASA F/A-18 aircraft are flying a test point for the Autonomous Formation Flight project o

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Two NASA F/A-18 aircraft are flying a test point for the Autonomous Formation Flight project over California's Mojave Desert. This second flight phase is mapping the wingtip vortex of the lead aircraft, the Systems Research Aircraft (tail number 847), on the trailing F/A-18 tail number 847. Wingtip vortex is a spiraling wind flowing from the wing during flight. The project is studying the drag and fuel reduction of precision formation flying.

  8. An experimental investigation of a three dimensional wall jet. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Catalano, G. D.

    1977-01-01

    One and two point statistical properties are measured in the flow fields of a coflowing turbulent jet. Two different confining surfaces (one flat, one with large curvature) are placed adjacent to the lip of the circular nozzle; and the resultant effects on the flow field are determined. The one point quantities measured include mean velocities, turbulent intensities, velocity and concentration autocorrelations and power spectral densities, and intermittencies. From the autocorrelation curves, the Taylor microscale and the integral length scale are calculated. Two point quantities measured include velocity and concentration space-time correlations and pressure velocity correlations. From the velocity space-time correlations, iso-correlation contours are constructed along with the lines of maximum maximorum. These lines allow a picture of the flow pattern to be determined. The pressures monitored in the pressure velocity correlations are measured both in the flow field and at the surface of the confining wall(s).

  9. Gravitational Lensing Effect on the Two-Point Correlation of Hot Spots in the Cosmic Microwave Background.

    PubMed

    Takada; Komatsu; Futamase

    2000-04-20

    We investigate the weak gravitational lensing effect that is due to the large-scale structure of the universe on two-point correlations of local maxima (hot spots) in the two-dimensional sky map of the cosmic microwave background (CMB) anisotropy. According to the Gaussian random statistics, as most inflationary scenarios predict, the hot spots are discretely distributed, with some characteristic angular separations on the last scattering surface that are due to oscillations of the CMB angular power spectrum. The weak lensing then causes pairs of hot spots, which are separated with the characteristic scale, to be observed with various separations. We found that the lensing fairly smooths out the oscillatory features of the two-point correlation function of hot spots. This indicates that the hot spot correlations can be a new statistical tool for measuring the shape and normalization of the power spectrum of matter fluctuations from the lensing signatures.

  10. Cross-correlation of point series using a new method

    NASA Technical Reports Server (NTRS)

    Strothers, Richard B.

    1994-01-01

Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant, but unknown, lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
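    A minimal sketch of the idea, assuming the rms-difference variant of the correlation measure and a homogeneous Poisson null for the significance estimate; the function names and the uniform-in-time null model over the observed span are illustrative choices, not the authors' code.

```python
import numpy as np

def rms_nearest_neighbor(template, target):
    """RMS difference between each template point and its nearest target point
    (smaller values indicate a better match between the two point series)."""
    template = np.sort(np.asarray(template, dtype=float))
    target = np.sort(np.asarray(target, dtype=float))
    d = np.array([np.min(np.abs(target - t)) for t in template])
    return np.sqrt(np.mean(d ** 2))

def poisson_significance(template, target, n_trials=2000, seed=0):
    """Fraction of homogeneous-Poisson (uniform) target series that match the
    template at least as well as the observed target (a Monte Carlo p-value)."""
    rng = np.random.default_rng(seed)
    observed = rms_nearest_neighbor(template, target)
    lo, hi, n = np.min(target), np.max(target), len(target)
    fake_scores = np.array([
        rms_nearest_neighbor(template, rng.uniform(lo, hi, size=n))
        for _ in range(n_trials)
    ])
    return np.mean(fake_scores <= observed)
```

    A constant lead or lag could be handled, as in the abstract, by scanning a grid of shifts of the target series and repeating the same computation.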

  11. Two-Point Microrheology of Phase-Separated Domains in Lipid Bilayers

    PubMed Central

    Hormel, Tristan T.; Reyer, Matthew A.; Parthasarathy, Raghuveer

    2015-01-01

    Though the importance of membrane fluidity for cellular function has been well established for decades, methods for measuring lipid bilayer viscosity remain challenging to devise and implement. Recently, approaches based on characterizing the Brownian dynamics of individual tracers such as colloidal particles or lipid domains have provided insights into bilayer viscosity. For fluids in general, however, methods based on single-particle trajectories provide a limited view of hydrodynamic response. The technique of two-point microrheology, in which correlations between the Brownian dynamics of pairs of tracers report on the properties of the intervening medium, characterizes viscosity at length-scales that are larger than that of individual tracers and has less sensitivity to tracer-induced distortions, but has never been applied to lipid membranes. We present, to our knowledge, the first two-point microrheological study of lipid bilayers, examining the correlated motion of domains in phase-separated lipid vesicles and comparing one- and two-point results. We measure two-point correlation functions in excellent agreement with the forms predicted by two-dimensional hydrodynamic models, analysis of which reveals a viscosity intermediate between those of the two lipid phases, indicative of global fluid properties rather than the viscosity of the local neighborhood of the tracer. PMID:26287625

  12. Higher order correlations of IRAS galaxies

    NASA Technical Reports Server (NTRS)

    Meiksin, Avery; Szapudi, Istvan; Szalay, Alexander

    1992-01-01

The higher order irreducible angular correlation functions are derived up to the eight-point function, for a sample of 4654 IRAS galaxies, flux-limited at 1.2 Jy in the 60 micron band. The correlations are generally found to be somewhat weaker than those for the optically selected galaxies, consistent with the visual impression of looser clusters in the IRAS sample. It is found that the N-point correlation functions can be expressed as the symmetric sum of products of N - 1 two-point functions, although the correlations above the four-point function are consistent with zero. The coefficients are consistent with the hierarchical clustering scenario as modeled by Hamilton and by Schaeffer.
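    For reference, the hierarchical (symmetric sum of products) form mentioned above reads, at the three-point level, as follows, with Q the hierarchical amplitude; this is the standard ansatz and is not quoted from the abstract.

```latex
\zeta(r_{12}, r_{23}, r_{31}) = Q \left[ \xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12}) \right]
```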

  13. [The highest proportion of tobacco materials in the blend analysis using PPF projection method for the near-infrared spectrum and Monte Carlo method].

    PubMed

    Mi, Jin-Rui; Ma, Xiang; Zhang, Ya-Juan; Wang, Yi; Wen, Ya-Dong; Zhao, Long-Lian; Li, Jun-Hui; Zhang, Lu-Da

    2011-04-01

The present paper builds a Monte Carlo model for the projection of tobacco blends. The model consists of two parts: the projection points of the tobacco materials, whose coordinates are calculated with the PPF (projection based on principal component and Fisher criterion) method applied to the tobacco near-infrared spectra; and the projection point of the tobacco blend, which is produced as a linear combination of the projection-point coordinates of the tobacco materials. In order to analyze the deviation of the projection points from their initial levels, the Monte Carlo method is used to simulate the differences and changes of the raw-material projections. The results indicate two major factors affecting the relative deviation: when the highest proportion of a single tobacco material in the blend is too large, the deviation cannot be kept under control; and when the number of materials is too small, the deviation likewise cannot be controlled. This conclusion is close to the principles of practical blend design, in particular the preference for using more materials, each at a lower proportion. Finally, the paper derives theoretical upper limits for the proportions for different numbers of materials. The approach also has reference value for blends of other agricultural products.

  14. Ion-induced electron emission microscopy

    DOEpatents

    Doyle, Barney L.; Vizkelethy, Gyorgy; Weller, Robert A.

    2001-01-01

An ion beam analysis system that creates multidimensional maps of the effects of high-energy ions from an unfocussed source upon a sample by correlating the exact entry point of an ion into a sample, determined by projection imaging of the secondary electrons emitted at that point, with a signal from a detector that measures the interaction of that ion within the sample. The emitted secondary electrons are collected in a strong electric field perpendicular to the sample surface and (optionally) projected and refocused by the electron lenses found in a photon emission electron microscope, amplified by microchannel plates, and then their exact position is sensed by a very sensitive X-Y position detector. Position signals from this secondary electron detector are then correlated in time with nuclear, atomic or electrical effects, including the malfunction of digital circuits, detected within the sample that were caused by the individual ion that created these secondary electrons in the first place.

  15. Infinite projected entangled-pair state algorithm for ruby and triangle-honeycomb lattices

    NASA Astrophysics Data System (ADS)

    Jahromi, Saeed S.; Orús, Román; Kargarian, Mehdi; Langari, Abdollah

    2018-03-01

    The infinite projected entangled-pair state (iPEPS) algorithm is one of the most efficient techniques for studying the ground-state properties of two-dimensional quantum lattice Hamiltonians in the thermodynamic limit. Here, we show how the algorithm can be adapted to explore nearest-neighbor local Hamiltonians on the ruby and triangle-honeycomb lattices, using the corner transfer matrix (CTM) renormalization group for 2D tensor network contraction. Additionally, we show how the CTM method can be used to calculate the ground-state fidelity per lattice site and the boundary density operator and entanglement entropy (EE) on an infinite cylinder. As a benchmark, we apply the iPEPS method to the ruby model with anisotropic interactions and explore the ground-state properties of the system. We further extract the phase diagram of the model in different regimes of the couplings by measuring two-point correlators, ground-state fidelity, and EE on an infinite cylinder. Our phase diagram is in agreement with previous studies of the model by exact diagonalization.

  16. The ESO Slice Project (ESP) galaxy redshift survey. VII. The redshift and real-space correlation functions

    NASA Astrophysics Data System (ADS)

    Guzzo, L.; Bartlett, J. G.; Cappi, A.; Maurogordato, S.; Zucca, E.; Zamorani, G.; Balkowski, C.; Blanchard, A.; Cayatte, V.; Chincarini, G.; Collins, C. A.; Maccagni, D.; MacGillivray, H.; Merighi, R.; Mignoli, M.; Proust, D.; Ramella, M.; Scaramella, R.; Stirpe, G. M.; Vettolani, G.

    2000-03-01

We present analyses of the two-point correlation properties of the ESO Slice Project (ESP) galaxy redshift survey, both in redshift and real space. From the redshift-space correlation function ξ(s) we are able to trace positive clustering out to separations as large as 50 h^{-1} Mpc, after which ξ(s) smoothly breaks down, crossing the zero value between 60 and 80 h^{-1} Mpc. This is best seen from the whole magnitude-limited redshift catalogue, using the J_3 minimum-variance weighting estimator. ξ(s) is reasonably well described by a shallow power law with γ ∼ 1.5 between 3 and 50 h^{-1} Mpc, while on smaller scales (0.2-2 h^{-1} Mpc) it has a shallower slope (γ ∼ 1). This flattening is shown to be mostly due to the redshift-space damping produced by virialized structures, and is less evident when volume-limited samples of the survey are analysed. We examine the full effect of redshift-space distortions by computing the two-dimensional correlation function ξ(r_p, π), from which we project out the real-space ξ(r) below 10 h^{-1} Mpc. This function is well described by a power-law model (r/r_0)^{-γ}, with r_0 = 4.15^{+0.20}_{-0.21} h^{-1} Mpc and γ = 1.67^{+0.07}_{-0.09} for the whole magnitude-limited catalogue. Comparison to other redshift surveys shows a consistent picture in which galaxy clustering remains positive out to separations of 50 h^{-1} Mpc or larger, in substantial agreement with the results obtained from angular surveys like the APM and EDSGC. Also the shape of the two-point correlation function is remarkably unanimous among these data sets, in all cases requiring more power on scales larger than 5 h^{-1} Mpc (a 'shoulder'), with respect to a simple extrapolation of the canonical ξ(r) = (r/5)^{-1.8}. The analysis of ξ(s) for volume-limited subsamples with different luminosity shows evidence of luminosity segregation only for the most luminous sample with M_bJ <= -20.5. For these galaxies, the amplitude of clustering on all scales > 4 h^{-1} Mpc is about a factor of 2 above that of all other subsamples containing less luminous galaxies. When redshift-space distortions are removed through projection of ξ(r_p, π), however, a weak dependence on luminosity is seen at small separations also at fainter magnitudes, resulting in a growth of r_0 from 3.45_{-0.30}^{+0.21} h^{-1} Mpc to 5.15_{-0.44}^{+0.39} h^{-1} Mpc when the limiting absolute magnitude of the sample changes from M = -18.5 to M = -20. This effect is masked in redshift space, as the mean pairwise velocity dispersion experiences a parallel increase, basically erasing the effect of the clustering growth on ξ(s). Based on observations collected at the European Southern Observatory, La Silla, Chile.

  17. Merging symmetry projection methods with coupled cluster theory: Lessons from the Lipkin model Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahlen-Strothman, J. M.; Henderson, T. H.; Hermes, M. R.

Coupled cluster and symmetry projected Hartree-Fock are two central paradigms in electronic structure theory. However, they are very different. Single reference coupled cluster is highly successful for treating weakly correlated systems, but fails under strong correlation unless one sacrifices good quantum numbers and works with broken-symmetry wave functions, which is unphysical for finite systems. Symmetry projection is effective for the treatment of strong correlation at the mean-field level through multireference non-orthogonal configuration interaction wavefunctions, but unlike coupled cluster, it is neither size extensive nor ideal for treating dynamic correlation. We here examine different scenarios for merging these two dissimilar theories. We carry out this exercise over the integrable Lipkin model Hamiltonian, which despite its simplicity, encompasses non-trivial physics for degenerate systems and can be solved via diagonalization for a very large number of particles. We show how symmetry projection and coupled cluster doubles individually fail in different correlation limits, whereas models that merge these two theories are highly successful over the entire phase diagram. Despite the simplicity of the Lipkin Hamiltonian, the lessons learned in this work will be useful for building an ab initio symmetry projected coupled cluster theory that we expect to be accurate in the weakly and strongly correlated limits, as well as the recoupling regime.

  18. The correlation function for density perturbations in an expanding universe. III The three-point and predictions of the four-point and higher order correlation functions

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  19. Attenuated coupled cluster: a heuristic polynomial similarity transformation incorporating spin symmetry projection into traditional coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2017-11-01

    In electronic structure theory, restricted single-reference coupled cluster (CC) captures weak correlation but fails catastrophically under strong correlation. Spin-projected unrestricted Hartree-Fock (SUHF), on the other hand, misses weak correlation but captures a large portion of strong correlation. The theoretical description of many important processes, e.g. molecular dissociation, requires a method capable of accurately capturing both weak and strong correlation simultaneously, and would likely benefit from a combined CC-SUHF approach. Based on what we have recently learned about SUHF written as particle-hole excitations out of a symmetry-adapted reference determinant, we here propose a heuristic CC doubles model to attenuate the dominant spin collective channel of the quadratic terms in the CC equations. Proof of principle results presented here are encouraging and point to several paths forward for improving the method further.

  20. Expression of pain and distress in children during dental extractions through drawings as a projective measure: A clinical study.

    PubMed

    Pala, Sai Priya; Nuvvula, Sivakumar; Kamatham, Rekhalakshmi

    2016-02-08

To evaluate the efficacy of drawings as a projective measure of pain and distress in children undergoing dental extractions, children aged 4-13 years with untreatable caries or over-retained primary teeth indicated for extraction were included. Pain was assessed using one behavioral scale [faces, legs, activity, cry and consolability (FLACC)] and one self-report measure, the faces pain scale-revised (FPS-R), at two points in time: after completion of local anesthetic administration and after extraction. The general behavior of the children was assessed with Wright's modification of the Frankl rating scale. At the end of the session, children were instructed to represent themselves, the dentist and their experiences of the dental treatment through a drawing. The drawings were scored using the Child Drawing: Hospital scale (CD:H) manual and correlated with FLACC, FPS-R and Frankl scores using the Pearson correlation test. A positive correlation, though not statistically significant, was observed between CD:H scores and all other considered parameters (Frankl, FPS-R and FLACC) in the present study. Drawings could not act as a surrogate measure of the child's pain; however, they acted as a narrative of his/her experiences and a reflection of inner emotions. Hence, drawings can be used as an addition to the dental armamentarium.

  1. Projected Hartree-Fock theory as a polynomial of particle-hole excitations and its combination with variational coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Qiu, Yiheng; Henderson, Thomas M.; Scuseria, Gustavo E.

    2017-05-01

    Projected Hartree-Fock theory provides an accurate description of many kinds of strong correlations but does not properly describe weakly correlated systems. Coupled cluster theory, in contrast, does the opposite. It therefore seems natural to combine the two so as to describe both strong and weak correlations with high accuracy in a relatively black-box manner. Combining the two approaches, however, is made more difficult by the fact that the two techniques are formulated very differently. In earlier work, we showed how to write spin-projected Hartree-Fock in a coupled-cluster-like language. Here, we fill in the gaps in that earlier work. Further, we combine projected Hartree-Fock and coupled cluster theory in a variational formulation and show how the combination performs for the description of the Hubbard Hamiltonian and for several small molecular systems.

  2. Lessons Learned about Plug-in Electric Vehicle Charging Infrastructure from The EV Project and ChargePoint America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smart, John Galloway; Salisbury, Shawn Douglas

    2015-07-01

    This report summarizes key findings in two national plug-in electric vehicle charging infrastructure demonstrations: The EV Project and ChargePoint America. It will be published to the INL/AVTA website for the general public.

  3. AIR TOXICS ASSESSMENT REFINEMENT IN RAPCA'S JURISDICTION - DAYTON, OH AREA

    EPA Science Inventory

RAPCA has received two grants to conduct this project. As part of the original project, RAPCA has improved and expanded their point source inventory by converting the following area sources to point sources: dry cleaners, gasoline throughput processes and halogenated solvent clea...

  4. Effective correlator for RadioAstron project

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey

This paper presents the implementation of a software FX correlator for Very Long Baseline Interferometry, adapted for the RadioAstron project. The software correlator is implemented for heterogeneous computing systems using graphics accelerators. It is shown that graphics hardware is highly efficient for the interferometry task. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project there are seven such channels. Each accelerator computes the correlation matrix for all baselines for a single frequency channel. The initial data are converted to floating-point format and corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously. Calculation of the correlation matrix is performed using the sliding Fourier transform. Thus, because the problem maps well onto the architecture of graphics accelerators, the performance obtained on a single Kepler-platform processor corresponds, for this task, to that of a four-node Intel computing cluster. The task scales successfully not only to a large number of graphics accelerators, but also to a large number of nodes with multiple accelerators.
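    The FX scheme described above (Fourier transform each station's delay-corrected stream, then cross-multiply and accumulate per baseline) can be sketched in a few lines; this is a schematic single-threaded illustration, not the GPU implementation discussed in the abstract.

```python
import numpy as np

def fx_correlate(streams, n_fft=1024):
    """Minimal FX-correlator sketch: FFT ('F') each station's stream in blocks,
    then cross-multiply ('X') and accumulate spectra for every baseline.

    streams : complex array of shape (n_stations, n_samples), assumed already
              delay-corrected for each station.
    Returns a dict {(i, j): averaged cross-spectrum of length n_fft}.
    """
    n_st, n_samp = streams.shape
    n_blocks = n_samp // n_fft
    blocks = streams[:, :n_blocks * n_fft].reshape(n_st, n_blocks, n_fft)
    spectra = np.fft.fft(blocks, axis=-1)                 # F step
    result = {}
    for i in range(n_st):
        for j in range(i, n_st):                          # all baselines, incl. autos
            result[(i, j)] = np.mean(spectra[i] * np.conj(spectra[j]), axis=0)  # X step
    return result
```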

  5. Spatial correlation of hydrometeor occurrence, reflectivity, and rain rate from CloudSat

    NASA Astrophysics Data System (ADS)

    Marchand, Roger

    2012-03-01

This paper examines the along-track vertical and horizontal structure of hydrometeor occurrence, reflectivity, and column rain rate derived from CloudSat. The analysis assumes hydrometeor statistics in a given region are horizontally invariant, with the probability of hydrometeor co-occurrence obtained simply by determining the relative frequency at which hydrometeors are found at two points (which may be at different altitudes and offset by a horizontal distance, Δx). A correlation function is introduced (gamma correlation) that normalizes hydrometeor co-occurrence values to the range of 1 to -1, with a value of 0 meaning uncorrelated in the usual sense. This correlation function is a generalization of the alpha overlap parameter that has been used in recent studies to describe the overlap between cloud (or hydrometeor) layers. Examples of joint histograms of reflectivity at two points are also examined. The analysis shows that the traditional linear (or Pearson) correlation coefficient provides a useful one-to-one measure of the strength of the relationship between hydrometeor reflectivity at two points in the horizontal (that is, two points at the same altitude). While also potentially useful in the vertical direction, the relationship between reflectivity values at different altitudes is not as well described by the linear correlation coefficient. The decrease in correlation of hydrometeor occurrence and reflectivity with horizontal distance, as well as precipitation occurrence and column rain rate, can be reasonably well fit with a simple two-parameter exponential model. In this paper, the North Pacific and tropical western Pacific are examined in detail, as is the zonal dependence.
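    The two-parameter exponential model mentioned above can be fitted with a standard least-squares routine; the separations and correlation values below are made-up placeholders, not CloudSat results.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(dx, a, L):
    """Two-parameter exponential decorrelation model: gamma(dx) = a * exp(-dx / L)."""
    return a * np.exp(-dx / L)

# Hypothetical example: correlation of hydrometeor occurrence vs. along-track separation
dx_km = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 200.0, 400.0])
gamma = np.array([1.0, 0.85, 0.70, 0.50, 0.30, 0.12, 0.03])

(p_a, p_L), _ = curve_fit(exp_model, dx_km, gamma, p0=(1.0, 100.0))
print(f"amplitude a = {p_a:.2f}, e-folding scale L = {p_L:.0f} km")
```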

  6. Spatial Correlation of Solar-Wind Turbulence from Two-Point Measurements

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Milano, L. J.; Dasso, S.; Weygand, J. M.; Smith, C. W.; Kivelson, M. G.

    2005-01-01

    Interplanetary turbulence, the best studied case of low frequency plasma turbulence, is the only directly quantified instance of astrophysical turbulence. Here, magnetic field correlation analysis, using for the first time only proper two-point, single time measurements, provides a key step in unraveling the space-time structure of interplanetary turbulence. Simultaneous magnetic field data from the Wind, ACE, and Cluster spacecraft are analyzed to determine the correlation (outer) scale, and the Taylor microscale near Earth's orbit.

  7. Reference-point-independent dynamics of molecular liquids and glasses in the tensorial formalism

    NASA Astrophysics Data System (ADS)

    Schilling, Rolf

    2002-05-01

We apply the tensorial formalism to the dynamics of molecular liquids and glasses. This formalism separates the degrees of freedom into translational and orientational ones. Using the Mori-Zwanzig projection formalism, the equations of motion for the tensorial density correlators S_lmn,l'm'n'(q, t) are derived. For this we show how to choose the slow variables such that the resulting Mori-Zwanzig equations are covariant under a change of the reference point of the body-fixed frame. We also prove that the memory kernels obtained from mode-coupling theory (MCT), including all approximations, preserve the covariance. This covariance makes, e.g., the glass transition point, the two universal scaling laws and particularly the corresponding exponents independent of the reference point and of the mass and moments of inertia, i.e., they only depend on the properties of the potential-energy landscape. Finally, we show that the corresponding MCT equations for linear molecules can be obtained from those for arbitrary molecules and that they differ from earlier equations that are not covariant.

  8. Second feature of the matter two-point function

    NASA Astrophysics Data System (ADS)

    Tansella, Vittorio

    2018-05-01

    We point out the existence of a second feature in the matter two-point function, besides the acoustic peak, due to the baryon-baryon correlation in the early Universe and positioned at twice the distance of the peak. We discuss how the existence of this feature is implied by the well-known heuristic argument that explains the baryon bump in the correlation function. A standard χ2 analysis to estimate the detection significance of the second feature is mimicked. We conclude that, for realistic values of the baryon density, a SKA-like galaxy survey will not be able to detect this feature with standard correlation function analysis.

  9. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

An automated method for tunnel deformation monitoring using high-density point cloud data is presented. Firstly, the 3D point cloud data are converted to a two-dimensional surface by projection onto the XOY plane, and the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm; the projection point set of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy and are perpendicular to the two-dimensional surface. Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest-neighbor algorithm, and the initial cross-sectional point set is quickly constructed by a projection method. Finally, the cross sections are denoised and the section lines are fitted using an iterative ellipse-fitting method. In order to improve the accuracy of the cross section, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The result shows that the cross sections become flattened circles rather than regular circles due to the great pressure at the top of the tunnel.
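    As a stand-in for the iterative ellipse-fitting step described above, a simple algebraic least-squares conic fit illustrates how a section line can be recovered from the cross-sectional points; the sample points below are synthetic.

```python
import numpy as np

def fit_conic(points):
    """Least-squares algebraic fit of a conic A x^2 + B xy + C y^2 + D x + E y + F = 0
    to 2D cross-section points (a simple stand-in for the iterative ellipse fitting
    mentioned in the abstract)."""
    x, y = points[:, 0], points[:, 1]
    design = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The smallest right singular vector minimizes ||design @ p|| with ||p|| = 1
    _, _, vt = np.linalg.svd(design)
    return vt[-1]

# Hypothetical usage: noisy points sampled from a slightly flattened ring
theta = np.linspace(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([2.75 * np.cos(theta), 2.70 * np.sin(theta)])
pts += 0.005 * np.random.default_rng(0).standard_normal(pts.shape)
A, B, C, D, E, F = fit_conic(pts)
```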

  10. Pedagogical Content Knowledge in Special Needs Education: A Case Study of an Art Project with the Multiple/Severe Handicapped

    ERIC Educational Resources Information Center

    Murayama, Taku

    2016-01-01

    This paper focuses on a project in teacher education through art activities at the undergraduate level. The main theme is art activities by university students and multiple and severe handicapped students. This project has two significant points for the preparation of special education teachers. One point is the opportunity for field work. Even…

  11. The distribution of galaxies within the 'Great Wall'

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1992-01-01

The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension, is examined. The Great Wall is extracted from the sample and is analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed, and the local correlation length is estimated to be about 15/h Mpc, about 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated by large 'voids', at least to a redshift of about 0.3. The peaks might represent the intersections of their roughly 5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Wall shows that l approximately 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.

  12. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
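    For reference, the point-biserial correlation itself is simply the Pearson correlation between a 0/1 variable and a continuous variable, often written in the group-means form sketched below; this is a textbook computation, not code from the paper.

```python
import numpy as np

def point_biserial(binary, continuous):
    """Point-biserial correlation between a 0/1 variable and a continuous variable,
    using the group-means formulation (equivalent to the Pearson correlation)."""
    binary = np.asarray(binary, dtype=float)
    x = np.asarray(continuous, dtype=float)
    p = binary.mean()                      # proportion of ones
    m1 = x[binary == 1].mean()             # mean of the continuous variable in group 1
    m0 = x[binary == 0].mean()             # mean in group 0
    s = x.std()                            # population standard deviation of x
    return (m1 - m0) / s * np.sqrt(p * (1.0 - p))
```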

  13. Intrinsic alignments of galaxies in the MassiveBlack-II simulation: analysis of two-point statistics

    NASA Astrophysics Data System (ADS)

    Tenneti, Ananth; Singh, Sukhdeep; Mandelbaum, Rachel; di Matteo, Tiziana; Feng, Yu; Khandai, Nishikanta

    2015-04-01

The intrinsic alignment of galaxies with the large-scale density field is an important astrophysical contaminant in upcoming weak lensing surveys. We present detailed measurements of the galaxy intrinsic alignments and associated ellipticity-direction (ED) and projected shape (wg+) correlation functions for galaxies in the cosmological hydrodynamic MassiveBlack-II simulation. We carefully assess the effects on galaxy shapes, misalignment of the stellar component with the dark matter shape and two-point statistics of iterative weighted (by mass and luminosity) definitions of the (reduced and unreduced) inertia tensor. We find that iterative procedures must be adopted for a reliable measurement of the reduced tensor but that luminosity versus mass weighting has only negligible effects. Both ED and wg+ correlations increase in amplitude with subhalo mass (in the range of 10^10-6.0 × 10^14 h^-1 M⊙), with a weak redshift dependence (from z = 1 to 0.06) at fixed mass. At z ˜ 0.3, we predict a wg+ that is in reasonable agreement with Sloan Digital Sky Survey luminous red galaxy measurements and that decreases in amplitude by a factor of ˜5-18 for galaxies in the Large Synoptic Survey Telescope survey. We also compared the intrinsic alignments of centrals and satellites, with clear detection of satellite radial alignments within their host haloes. Finally, we show that wg+ (using subhaloes as tracers of density) and wδ+ (using dark matter density) predictions from the simulations agree with that of non-linear alignment (NLA) models at scales where the two-halo term dominates in the correlations (and tabulate associated NLA fitting parameters). The one-halo term induces a scale-dependent bias at small scales which is not modelled in the NLA model.

  14. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.

  15. Flow separation in a straight draft tube, particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Duquesne, P.; Maciel, Y.; Ciocan, G. D.; Deschênes, C.

    2014-03-01

    As part of the BulbT project, led by the Consortium on Hydraulic Machines and the LAMH (Hydraulic Machine Laboratory of Laval University), the efficiency and power break off in a bulb turbine has been investigated. Previous investigations correlated the break off to draft tube losses. Tuft visualizations confirmed the emergence of a flow separation zone at the wall of the diffuser. Opening the guide vanes tends to extend the recirculation zone. The flow separations were investigated with two-dimensional and two-component particle image velocimetry (PIV) measurements designed based on the information collected from tuft visualizations. Investigations were done for a high opening blade angle with a N11 of 170 rpm, at best efficiency point and at two points with a higher Q11. The second operating point is inside the efficiency curve break off and the last operating point corresponds to a lower efficiency and a larger recirculation region in the draft tube. The PIV measurements were made near the wall with two cameras in order to capture two measurement planes simultaneously. The instantaneous velocity fields were acquired at eight different planes. Two planes located near the bottom wall were parallel to the generatrix of the conical part of the diffuser, while two other bottom planes diverged more from the draft tube axis than the cone generatrix. The last four planes were located on the draft tube side and diverged more from the draft tube axis than the cone generatrix. By combining the results from the various planes, the separation zone is characterized using pseudo-streamlines of the mean velocity fields, maps of the Reynolds stresses and maps of the reverse-flow parameter. The analysis provides an estimation of the separation zone size, shape and unsteady character, and their evolution with the guide vanes opening.

  16. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system constituted of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to the camera movements considered, was applied on the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
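    The triangulation step referred to above is commonly done with the linear (DLT) construction sketched here, assuming known 3x4 projection matrices and one matched pair of image points; this is a generic sketch rather than the authors' implementation.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera projection
    matrices P1, P2 (each 3x4) and matched image points x1, x2 given as (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # back to inhomogeneous 3D coordinates
```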

  17. On the universality of the two-point galaxy correlation function

    NASA Technical Reports Server (NTRS)

    Davis, Marc; Meiksin, Avery; Strauss, Michael A.; Da Costa, L. Nicolaci; Yahil, Amos

    1988-01-01

    The behavior of the two-point galaxy correlation function in volume-limited subsamples of three complete redshift surveys is investigated. The correlation length is shown to scale approximately as the square root of the distance limit in both the CfA and Southern Sky catalogs, but to be independent of the distance limit in the IRAS sample. This effect is found to be due to factors such as the large positive density fluctuations in the foreground of the optically selected catalogs biasing the correlation length estimate downward, and the brightest galaxies appearing to be more strongly clustered than the mean.

  18. A tensor product state approach to spin-1/2 square J1-J2 antiferromagnetic Heisenberg model: evidence for deconfined quantum criticality

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Gu, Zheng-Cheng; Verstraete, Frank; Wen, Xiang-Gang

We study this model using the cluster update algorithm for tensor product states (TPSs). We find that the ground-state energies at finite sizes and in the thermodynamic limit are in good agreement with the exact diagonalization study. At the largest bond dimension available, D = 9, and through finite-size scaling of the magnetization order near the transition point, we accurately determine the critical point J2c1 = 0.53(1) J1 and the critical exponent β = 0.50(4). In the intermediate region we find a paramagnetic ground state without any static valence bond solid (VBS) order, supported by an exponentially decaying spin-spin correlation and a power-law decaying dimer-dimer correlation. By fitting a universal scaling function for the spin-spin correlation we find the critical exponents ν = 0.68(3) and ηs = 0.34(6), which are very close to the observed critical exponents for the deconfined quantum critical point (DQCP) in other systems. Thus our numerical results strongly suggest a Landau-forbidden phase transition from Neel order to VBS order at J2c1 = 0.53(1) J1. This project is supported by the EU Strep project QUEVADIS, the ERC Grant QUERG, and the FWF SFB Grants FoQuS and ViCoM; and the Institute for Quantum Information and Matter.

  19. Evaluation of Denoising Strategies to Address Motion-Correlated Artifacts in Resting-State Functional Magnetic Resonance Imaging Data from the Human Connectome Project

    PubMed Central

    Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.

    2016-01-01

Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276

  20. Measuring correlations in non-separable vector beams using projective measurements

    NASA Astrophysics Data System (ADS)

    Subramanian, Keerthan; Viswanathan, Nirmal K.

    2017-09-01

Doubts regarding the completeness of quantum mechanics as raised by Einstein, Podolsky and Rosen (EPR) have predominantly been resolved by resorting to a measurement of correlations between entangled photons which clearly demonstrate violation of Bell's inequality. This article is an attempt to reconcile the incompatibility of hidden variable theories with reality by demonstrating experimentally a violation of Bell's inequality in locally correlated systems whose two degrees of freedom, the spin and orbital angular momentum, are maximally correlated. To this end we propose and demonstrate a linear, achromatic modified Sagnac interferometer to project orbital angular momentum states, which we combine with spin projections to measure correlations.

  1. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982 included the calculations of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross spectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two point spatial turbulence correlations.
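    The two-point spatial correlation coefficient between simultaneous gust records at two probes, of the kind computed in this analysis, can be sketched as follows; the probe names are placeholders.

```python
import numpy as np

def two_point_correlation(u_probe_a, u_probe_b):
    """Two-point correlation coefficient between gust-velocity records measured
    simultaneously at two probes (e.g. the two wing tips)."""
    ua = np.asarray(u_probe_a) - np.mean(u_probe_a)   # fluctuating part at probe A
    ub = np.asarray(u_probe_b) - np.mean(u_probe_b)   # fluctuating part at probe B
    return np.mean(ua * ub) / (np.std(ua) * np.std(ub))
```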

  2. High spatial resolution detection of low-energy electrons using an event-counting method, application to point projection microscopy

    NASA Astrophysics Data System (ADS)

    Salançon, Evelyne; Degiovanni, Alain; Lapena, Laurent; Morin, Roger

    2018-04-01

    An event-counting method using a two-microchannel plate stack in a low-energy electron point projection microscope is implemented. 15 μm detector spatial resolution, i.e., the distance between first-neighbor microchannels, is demonstrated. This leads to a 7 times better microscope resolution. Compared to previous work with neutrons [Tremsin et al., Nucl. Instrum. Methods Phys. Res., Sect. A 592, 374 (2008)], the large number of detection events achieved with electrons shows that the local response of the detector is mainly governed by the angle between the hexagonal structures of the two microchannel plates. Using this method in point projection microscopy offers the prospect of working with a greater source-object distance (350 nm instead of 50 nm), advancing toward atomic resolution.

  3. Environments of z~0.2 Star Forming Galaxies: Building on the Citizen Science Discovery of the Green Peas

    NASA Astrophysics Data System (ADS)

    Cardamone, Carolin; Cappelluti, Nico; Powell, Meredith; Urry, Meg; Galaxy Zoo Science Team

    2018-01-01

‘Green Pea’ galaxies, discovered in the Galaxy Zoo citizen science project, are rare low-mass (M < 1 × 10^10 M⊙) galaxies, experiencing an episode of compact, relatively low-metallicity (Z ≈ 1/5 Z⊙), intense star formation (3-60 M⊙/yr). While their spectra have been investigated in a wide array of follow-up studies, a detailed study of their environments is missing. Two-point correlation functions have been used to show the environmental dependence of an array of galaxy properties (e.g., mass, luminosity, color, star formation, and morphology). In this study, we present a cross-correlation analysis between the Green Peas and the Luminous Red Galaxies throughout the SDSS footprint, and we find that the population of Green Peas at 0.11

  4. Plane mixing layer vortical structure kinematics

    NASA Technical Reports Server (NTRS)

    Leboeuf, Richard L.

    1993-01-01

    The objective of the current project was to experimentally investigate the structure and dynamics of the streamwise vorticity in a plane mixing layer. The first part of this research program was intended to clarify whether the observed decrease in mean streamwise vorticity in the far-field of mixing layers is due primarily to the 'smearing' caused by vortex meander or to diffusion. Two-point velocity correlation measurements have been used to show that there is little spanwise meander of the large-scale streamwise vortical structure. The correlation measurements also indicate a large degree of transverse meander of the streamwise vorticity, which is not surprising since the streamwise vorticity exists in the inclined braid region between the spanwise vortex core regions. The streamwise convection of the braid region thereby introduces an apparent transverse meander into measurements using stationary probes. These results were corroborated by estimated secondary velocity profiles, in which the streamwise vorticity produces a signature that was tracked in time.

  5. One-norm geometric quantum discord and critical point estimation in the XY spin chain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Chang-Cheng; Wang, Yao; Guo, Jin-Liang, E-mail: guojinliang80@163.com

    2016-11-15

    In contrast with entanglement and quantum discord (QD), we investigate the thermal quantum correlation in terms of Schatten one-norm geometric quantum discord (GQD) in the XY spin chain, and analyze their capabilities in detecting the critical point of the quantum phase transition. We show that the one-norm GQD can reveal more properties of the quantum correlation between two spins, especially the long-range quantum correlation at finite temperature. Under the influences of site distance, anisotropy and temperature, one-norm GQD and its first derivative make it possible to detect the critical point efficiently for a general XY spin chain. - Highlights: • Compared with entanglement and QD, one-norm GQD is more robust against temperature. • One-norm GQD is more efficient in characterizing the long-range quantum correlation between two distant qubits. • One-norm GQD performs well in highlighting the critical point of the QPT at zero or low finite temperature. • One-norm GQD has a number of advantages over QD in detecting the critical point of the spin chain.

  6. Advances in QCD sum-rule calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melikhov, Dmitri

    2016-01-22

    We review the recent progress in the applications of QCD sum rules to hadron properties, with emphasis on the following selected problems: (i) development of new algorithms for the extraction of ground-state parameters from two-point correlators; (ii) form factors at large momentum transfers from three-point vacuum correlation functions; (iii) properties of exotic tetraquark hadrons from correlation functions of four-quark currents.

  7. Novel Optical Technique Developed and Tested for Measuring Two-Point Velocity Correlations in Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Zimmerli, Gregory A.; Goldburg, Walter I.

    2002-01-01

    A novel technique for characterizing turbulent flows was developed and tested at the NASA Glenn Research Center. The work is being done in collaboration with the University of Pittsburgh, through a grant from the NASA Microgravity Fluid Physics Program. The technique we are using, Homodyne Correlation Spectroscopy (HCS), is a laser-light-scattering technique that measures the Doppler frequency shift of light scattered from microscopic particles in the fluid flow. Whereas Laser Doppler Velocimetry gives a local (single-point) measurement of the fluid velocity, the HCS technique measures correlations between fluid velocities at two separate points in the flow at the same instant of time. Velocity correlations in the flow field are of fundamental interest to turbulence researchers and are of practical importance in many engineering applications, such as aeronautics.

  8. The Combined Use of Correlative and Mechanistic Species Distribution Models Benefits Low Conservation Status Species.

    PubMed

    Rougier, Thibaud; Lassalle, Géraldine; Drouineau, Hilaire; Dumoulin, Nicolas; Faure, Thierry; Deffuant, Guillaume; Rochard, Eric; Lambert, Patrick

    2015-01-01

    Species can respond to climate change by tracking appropriate environmental conditions in space, resulting in a range shift. Species Distribution Models (SDMs) can help forecast such range shift responses. Both correlative and mechanistic SDMs have been built for only a few species; allis shad (Alosa alosa), an endangered anadromous fish species, is one of them. The main purpose of this study was to provide a framework for joint analyses of correlative and mechanistic SDM projections in order to strengthen conservation measures for species of conservation concern. Guidelines for joint representation and subsequent interpretation of model outputs were defined and applied. The present joint analysis was based on the novel mechanistic model GR3D (Global Repositioning Dynamics of Diadromous fish Distribution), which was parameterized on allis shad and then used to predict its future distribution along the European Atlantic coast under different climate change scenarios (RCP 4.5 and RCP 8.5). We then used a correlative SDM for this species to forecast its distribution across the same geographic area and under the same climate change scenarios. First, projections from correlative and mechanistic models provided congruent trends in probability of habitat suitability and population dynamics. This agreement was interpreted as referring primarily to the species' vulnerability to climate change; accordingly, climate change could not be listed as a major threat for allis shad. The congruence in predicted range limits between the SDM projections was the next point of interest. Where differences were noticed, they required us to deepen our understanding of the niche modelled by each approach. In this respect, the relative position of the northern range limit between the two methods strongly suggested that a key biological process related to intraspecific variability was potentially lacking in the mechanistic SDM. Based on our knowledge, we hypothesized that local adaptations to cold temperatures deserve more attention not only in modelling but also in conservation planning.

  9. The Combined Use of Correlative and Mechanistic Species Distribution Models Benefits Low Conservation Status Species

    PubMed Central

    Rougier, Thibaud; Lassalle, Géraldine; Drouineau, Hilaire; Dumoulin, Nicolas; Faure, Thierry; Deffuant, Guillaume; Rochard, Eric; Lambert, Patrick

    2015-01-01

    Species can respond to climate change by tracking appropriate environmental conditions in space, resulting in a range shift. Species Distribution Models (SDMs) can help forecast such range shift responses. Both correlative and mechanistic SDMs have been built for only a few species; allis shad (Alosa alosa), an endangered anadromous fish species, is one of them. The main purpose of this study was to provide a framework for joint analyses of correlative and mechanistic SDM projections in order to strengthen conservation measures for species of conservation concern. Guidelines for joint representation and subsequent interpretation of model outputs were defined and applied. The present joint analysis was based on the novel mechanistic model GR3D (Global Repositioning Dynamics of Diadromous fish Distribution), which was parameterized on allis shad and then used to predict its future distribution along the European Atlantic coast under different climate change scenarios (RCP 4.5 and RCP 8.5). We then used a correlative SDM for this species to forecast its distribution across the same geographic area and under the same climate change scenarios. First, projections from correlative and mechanistic models provided congruent trends in probability of habitat suitability and population dynamics. This agreement was interpreted as referring primarily to the species' vulnerability to climate change; accordingly, climate change could not be listed as a major threat for allis shad. The congruence in predicted range limits between the SDM projections was the next point of interest. Where differences were noticed, they required us to deepen our understanding of the niche modelled by each approach. In this respect, the relative position of the northern range limit between the two methods strongly suggested that a key biological process related to intraspecific variability was potentially lacking in the mechanistic SDM. Based on our knowledge, we hypothesized that local adaptations to cold temperatures deserve more attention not only in modelling but also in conservation planning. PMID:26426280

  10. Roadmap for searching cosmic rays correlated with the extraterrestrial neutrinos seen at IceCube

    NASA Astrophysics Data System (ADS)

    Carpio, J. A.; Gago, A. M.

    2017-06-01

    We have built sky maps showing the expected arrival directions of 120 EeV ultrahigh-energy cosmic rays (UHECRs) directionally correlated with the latest astrophysical neutrino tracks observed at IceCube, including the four-year high-energy starting events (HESEs) and the two-year northern tracks, taken as point sources. We have considered contributions to UHECR deflections from the Galactic and the extragalactic magnetic field and a UHECR composition compatible with current expectations. We have used the Jansson-Farrar JF12 model for the Galactic magnetic field and an extragalactic magnetic field strength of 1 nG with a coherence length of 1 Mpc. We observe that the regions outside of the Galactic plane are more strongly correlated with the neutrino tracks than those adjacent to or in it, where IceCube HESE events 37 and 47 are good candidates to search for excesses, or anisotropies, in the UHECR flux. On the other hand, clustered northern tracks around (l, b) = (0°, -30°) and (l, b) = (-150°, -30°) are promising candidates for a stacked point source search. For example, we have focused on the region of UHECR arrival directions, at 150 EeV, correlated with IceCube HESE event 37 located at (l, b) = (-137.1°, 65.8°) in the northern hemisphere, far away from the Galactic plane, obtaining an angular size of ~5°, which becomes ~3° for 200 EeV and ~8° for 120 EeV. We report a p value of 0.20 for a stacked point source search using current Auger and Telescope Array data, consistent with current results from both collaborations. Using Telescope Array data alone, we estimate that a projected live time of 72 years would be needed to find correlations, but clearly this should improve with the planned Auger upgrade.

  11. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  12. Expression of pain and distress in children during dental extractions through drawings as a projective measure: A clinical study

    PubMed Central

    Pala, Sai Priya; Nuvvula, Sivakumar; Kamatham, Rekhalakshmi

    2016-01-01

    AIM: To evaluate the efficacy of drawings as a projective measure of pain and distress in children undergoing dental extractions. METHODS: Children in the age range of 4-13 years with untreatable caries or over-retained primary teeth, indicated for extraction, were included. Pain was assessed using one behavioral scale [faces, legs, activity, cry and consolability (FLACC)] and a self-report measure, the faces pain scale-revised (FPS-R), at two points of time: after completion of local anesthetic administration and after extraction. The general behavior of children was assessed with Wright’s modification of the Frankl rating scale. At the end of the session, children were instructed to represent themselves along with the dentist and their experiences of the dental treatment through drawing. The drawings were scored using the Child Drawing: Hospital scale (CD:H) manual and correlated with FLACC, FPS-R and Frankl scores using the Pearson correlation test. RESULTS: A positive correlation, though not statistically significant, was observed between CD:H scores and all other considered parameters (Frankl, FPS-R and FLACC). CONCLUSION: Drawings could not act as a surrogate measure of a child's pain; however, they acted as a narrative of his/her experiences and a reflection of inner emotions. Hence, drawings can be used as an addition to the dental armamentarium. PMID:26862509

  13. Combining symmetry collective states with coupled-cluster theory: Lessons from the Agassi model Hamiltonian

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Dukelsky, Jorge; Scuseria, Gustavo E.

    2017-06-01

    The failures of single-reference coupled-cluster theory for strongly correlated many-body systems are flagged at the mean-field level by the spontaneous breaking of one or more physical symmetries of the Hamiltonian. Restoring the symmetry of the mean-field determinant by projection reveals that coupled-cluster theory fails because it factorizes high-order excitation amplitudes incorrectly. However, symmetry-projected mean-field wave functions do not account sufficiently for dynamic (or weak) correlation. Here we pursue a merger of symmetry projection and coupled-cluster theory, following previous work along these lines that utilized the simple Lipkin model system as a test bed [J. Chem. Phys. 146, 054110 (2017), 10.1063/1.4974989]. We generalize the concept of a symmetry-projected mean-field wave function to the concept of a symmetry-projected state, in which the factorization of high-order excitation amplitudes in terms of low-order ones is guided by symmetry projection and is not exponential, and combine them with coupled-cluster theory in order to model the ground state of the Agassi Hamiltonian. This model has two separate channels of correlation and two separate physical symmetries which are broken under strong correlation. We show how the combination of symmetry collective states and coupled-cluster theory is effective in obtaining correlation energies and order parameters of the Agassi model throughout its phase diagram.

  14. A Second Generation Swirl-Venturi Lean Direct Injection Combustion Concept

    NASA Technical Reports Server (NTRS)

    Tacina, Kathleen M.; Chang, Clarence T.; He, Zhuohui Joe; Lee, Phil; Dam, Bidhan; Mongia, Hukam

    2014-01-01

    A low-NOx aircraft gas turbine engine combustion concept was developed and tested. The concept is a second generation swirl-venturi lean direct injection (SV-LDI) concept. LDI is a lean-burn combustion concept in which the fuel is injected directly into the flame zone. Three second generation SV-LDI configurations were developed. All three were based on the baseline 9-point SV-LDI configuration reported previously. These second generation configurations had better low power operability than the baseline 9-point configuration. Two of these second generation configurations were tested in a NASA Glenn Research Center flametube; these two configurations are called the flat dome and 5-recess configurations. Results show that the 5-recess configuration generally had lower NOx emissions than the flat dome configuration. Correlation equations were developed for the flat dome configuration so that the landing-takeoff NOx emissions could be estimated. The flat dome landing-takeoff NOx is estimated to be 87-88 percent below the CAEP/6 standards, exceeding the ERA project goal of 75 percent reduction.

  15. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    The quick advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS values computed using the point- and line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between the differences in ground elevation and the RMS.
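
    For illustration, the point-based parallel projection model is commonly written as an 8-parameter affine transformation (x = A1*X + A2*Y + A3*Z + A4, y = A5*X + A6*Y + A7*Z + A8); the sketch below, which is not the article's implementation and uses hypothetical control points, fits those parameters by least squares.

```python
import numpy as np

def fit_parallel_projection(ground_xyz, image_xy):
    """Least-squares fit of the 8-parameter (affine) parallel-projection model
        x = A1*X + A2*Y + A3*Z + A4
        y = A5*X + A6*Y + A7*Z + A8
    from ground control points (n >= 4)."""
    design = np.hstack([ground_xyz, np.ones((len(ground_xyz), 1))])
    px, *_ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
    py, *_ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
    return px, py

# Synthetic check: recover known parameters from six hypothetical control points.
rng = np.random.default_rng(1)
gcp = rng.uniform(0, 1000, size=(6, 3))
true_px = np.array([0.5, 0.1, 0.02, 30.0])
true_py = np.array([-0.1, 0.6, 0.01, 12.0])
img = np.column_stack([gcp @ true_px[:3] + true_px[3], gcp @ true_py[:3] + true_py[3]])
px, py = fit_parallel_projection(gcp, img)
print(np.round(px, 3), np.round(py, 3))
```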

  16. Kraus Operators for a Pair of Interacting Qubits: a Case Study

    NASA Astrophysics Data System (ADS)

    Arsenijević, M.; Jeknić-Dugić, J.; Dugić, M.

    2018-04-01

    The Kraus form of completely positive dynamical maps is appealing both mathematically and from the point of view of the diverse applications of open quantum systems theory. Unfortunately, the Kraus operators are poorly known for two-qubit processes. In this paper, we derive the Kraus operators for a pair of interacting qubits, where the strength of the interaction is arbitrary. One of the qubits is subjected to the x-projection spin measurement. The obtained results are applied to calculate the dynamics of the entanglement in the two-qubit system. We find that the correlations are lost in a finite time interval; the stronger the inter-qubit interaction, the longer the entanglement persists in the system.

  17. Simple method for experimentally testing any form of quantum contextuality

    NASA Astrophysics Data System (ADS)

    Cabello, Adán

    2016-03-01

    Contextuality provides a unifying paradigm for nonclassical aspects of quantum probabilities and resources of quantum information. Unfortunately, most forms of quantum contextuality remain experimentally unexplored due to the difficulty of performing sequences of projective measurements on individual quantum systems. Here we show that two-point correlations between binary compatible observables are sufficient to reveal any form of contextuality. This allows us to design simple experiments that are more robust against imperfections and easier to analyze, thus opening the door for observing interesting forms of contextuality, including those requiring quantum systems of high dimensions. In addition, it allows us to connect contextuality to communication complexity scenarios and reformulate a recent result relating contextuality and quantum computation.

  18. Kraus Operators for a Pair of Interacting Qubits: a Case Study

    NASA Astrophysics Data System (ADS)

    Arsenijević, M.; Jeknić-Dugić, J.; Dugić, M.

    2018-06-01

    The Kraus form of completely positive dynamical maps is appealing both mathematically and from the point of view of the diverse applications of open quantum systems theory. Unfortunately, the Kraus operators are poorly known for two-qubit processes. In this paper, we derive the Kraus operators for a pair of interacting qubits, where the strength of the interaction is arbitrary. One of the qubits is subjected to the x-projection spin measurement. The obtained results are applied to calculate the dynamics of the entanglement in the two-qubit system. We find that the correlations are lost in a finite time interval; the stronger the inter-qubit interaction, the longer the entanglement persists in the system.

  19. The cluster-cluster correlation function. [of galaxies

    NASA Technical Reports Server (NTRS)

    Postman, M.; Geller, M. J.; Huchra, J. P.

    1986-01-01

    The clustering properties of the Abell and Zwicky cluster catalogs are studied using the two-point angular and spatial correlation functions. The catalogs are divided into eight subsamples to determine the dependence of the correlation function on distance, richness, and the method of cluster identification. It is found that the Corona Borealis supercluster contributes significant power to the spatial correlation function for the Abell cluster sample with distance class of four or less. The distance-limited catalog of 152 Abell clusters, which is not greatly affected by a single system, has a spatial correlation function consistent with the power law ξ(r) = 300 r^(-1.8). In both the distance class four or less and the distance-limited samples, the signal in the spatial correlation function is a power law detectable out to 60 h^(-1) Mpc. The amplitude of ξ(r) for clusters of richness class two is about three times that for richness class one clusters. The two-point spatial correlation function is sensitive to the use of estimated redshifts.
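
    As a hedged illustration of how such a two-point spatial correlation function can be estimated from a discrete catalogue, the sketch below uses the simple 'natural' estimator DD/RR - 1 with KD-tree pair counting on synthetic points; it is not the estimator used in the paper, and the catalogue here is a random stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_natural(data, randoms, r_edges):
    """'Natural' estimator xi(r) = (DD/RR) * (Nr*(Nr-1)) / (Nd*(Nd-1)) - 1,
    from pair counts in spherical shells defined by r_edges."""
    nd, nr = len(data), len(randoms)
    td, tr = cKDTree(data), cKDTree(randoms)
    dd = np.diff(td.count_neighbors(td, r_edges))   # data-data pairs per shell
    rr = np.diff(tr.count_neighbors(tr, r_edges))   # random-random pairs per shell
    return dd / rr * (nr * (nr - 1)) / (nd * (nd - 1)) - 1.0

# Unclustered toy catalogue: xi(r) should scatter around zero.
rng = np.random.default_rng(2)
box = 100.0
data = rng.uniform(0, box, size=(2000, 3))
randoms = rng.uniform(0, box, size=(8000, 3))
r_edges = np.linspace(1.0, 20.0, 11)
print(np.round(xi_natural(data, randoms, r_edges), 3))
```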

  20. Hexagonalization of correlation functions II: two-particle contributions

    NASA Astrophysics Data System (ADS)

    Fleury, Thiago; Komatsu, Shota

    2018-02-01

    In this work, we compute one-loop planar five-point functions in N=4 super-Yang-Mills using integrability. As in the previous work, we decompose the correlation functions into hexagon form factors and glue them using the weight factors which depend on the cross-ratios. The main new ingredient in the computation, as compared to the four-point functions studied in the previous paper, is the two-particle mirror contribution. We develop techniques to evaluate it and find agreement with the perturbative results in all the cases we analyzed. In addition, we consider next-to-extremal four-point functions, which are known to be protected, and show that the sum of one-particle and two-particle contributions at one loop adds up to zero as expected. The tools developed in this work would be useful for computing higher-particle contributions which would be relevant for more complicated quantities such as higher-loop corrections and non-planar correlators.

  1. Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames

    NASA Astrophysics Data System (ADS)

    Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz

    2017-11-01

    The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference in the respective points Δϕ . The joint probability density function (jPDF) of these two parameters P(Δϕ , l) is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds numbers. This feature of DE statistics was exploited in a Boltzmann-type evolution equation based model for the probability density function (PDF) of the distance between the extremal points P(l) in isotropic turbulence. Later, this model was extended for the jPDF P(Δϕ , l) and then adapted for the use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated and an extended model for non-premixed jet flames is introduced, which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project ``Milestone''.

  2. Research notes : two-rail steel-backed timber guardrail : Crown Point Highway , Multnomah Country , Oregon.

    DOT National Transportation Integrated Search

    1995-05-01

    The Oregon Department of Transportation (ODOT) installed a two-rail steel-backed timber guardrail along a section of the Historic Columbia River Highway, formerly known as the Crown Point Highway, in March 1992 as an experimental features project. Th...

  3. Blob-hole correlation model for edge turbulence and comparisons with NSTX gas puff imaging data

    NASA Astrophysics Data System (ADS)

    Myra, J. R.; Zweben, S. J.; Russell, D. A.

    2018-07-01

    Gas puff imaging (GPI) observations made in NSTX (Zweben et al 2017 Phys. Plasmas 24 102509) have revealed two-point spatial correlations of edge and scrape-off layer (SOL) turbulence in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this paper, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in several respects. Effects of the reference location (confined surfaces or SOL), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored and discussed with respect to experimental observations. Additional analysis of the experimental GPI dataset is performed to further test this blob-hole correlation model. A time delay two-point spatial correlation study did not reveal inward propagation of the negative correlation structures that were postulated to correspond to holes in the data nor did it suggest that the negative correlation structures are due to neutral shadowing. However, tracking of the highest and lowest values (extrema) of the normalized GPI fluctuations shows strong evidence for mean inward propagation of minima and outward propagation of maxima, in qualitative agreement with theoretical expectations. Other properties of the experimentally observed extrema are discussed.

  4. Evolution of colour-dependence of galaxy clustering up to z˜ 1.2 based on the data from the VVDS-Wide survey

    NASA Astrophysics Data System (ADS)

    Świetoń, Agnieszka; Pollo, Agnieszka; VVDS Team

    2014-12-01

    We discuss the dependence of galaxy clustering on galaxy colour up to z ˜ 1.2. For that purpose we used one of the wide fields (F22) from the VIMOS-VLT Deep Survey (VVDS). For galaxies with absolute luminosities close to the characteristic Schechter luminosity M^* at a given redshift, we measured the projected two-point correlation function w_{p}(r_{p}) and estimated the best-fit parameters for a single power-law model ξ(r) = (r/r_0)^{-γ}, where r_0 is the correlation length and γ is the slope of the correlation function. Our results show that red galaxies exhibit the strongest clustering in all epochs up to z ˜ 1.2. The green valley represents the "intermediate" population, and the blue cloud shows the weakest clustering strength. We also compared the shape of w_p(r_p) for the different galaxy populations. All three populations have different clustering properties on small scales, similarly to the behaviour observed in local catalogues.
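
    For a power-law ξ(r) = (r/r_0)^{-γ}, the projected correlation function has the closed form w_p(r_p) = r_p (r_0/r_p)^γ Γ(1/2)Γ((γ-1)/2)/Γ(γ/2), which allows r_0 and γ to be fitted directly to measured w_p points. A minimal sketch of such a fit, with placeholder data rather than the VVDS measurements, might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def wp_power_law(rp, r0, g):
    """Projected correlation function implied by xi(r) = (r/r0)^-g:
    w_p(r_p) = r_p * (r0/r_p)^g * Gamma(1/2)*Gamma((g-1)/2) / Gamma(g/2)."""
    return rp * (r0 / rp) ** g * gamma(0.5) * gamma((g - 1) / 2) / gamma(g / 2)

# Placeholder "measurements": a gamma = 1.8, r0 = 4 power law plus 5% noise.
rng = np.random.default_rng(3)
rp = np.logspace(-0.5, 1.2, 10)                                   # h^-1 Mpc
wp_obs = wp_power_law(rp, 4.0, 1.8) * (1 + 0.05 * rng.normal(size=rp.size))
(r0_fit, g_fit), _ = curve_fit(wp_power_law, rp, wp_obs, p0=(5.0, 1.7),
                               bounds=((0.1, 1.1), (20.0, 3.0)))
print(f"r0 = {r0_fit:.2f} h^-1 Mpc, gamma = {g_fit:.2f}")
```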

  5. Tools for automated acoustic monitoring within the R package monitoR

    USGS Publications Warehouse

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    The R package monitoR contains tools for managing an acoustic-monitoring program including survey metadata, template creation and manipulation, automated detection and results management. These tools are scalable for use with small projects as well as larger long-term projects and those with expansive spatial extents. Here, we describe typical workflow when using the tools in monitoR. Typical workflow utilizes a generic sequence of functions, with the option for either binary point matching or spectrogram cross-correlation detectors.
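
    monitoR itself is an R package, so the following Python sketch is only a language-agnostic illustration of the spectrogram cross-correlation idea behind one of its detector types; the function and the toy signal are hypothetical and not part of the package's API.

```python
import numpy as np
from scipy.signal import spectrogram

def xcorr_scores(survey_spec, template_spec):
    """Normalized (Pearson) correlation between a template spectrogram and a
    survey spectrogram at each possible offset along the time axis."""
    nt_t = template_spec.shape[1]
    t = template_spec - template_spec.mean()
    t_norm = np.sqrt(np.sum(t ** 2))
    scores = np.empty(survey_spec.shape[1] - nt_t + 1)
    for k in range(scores.size):
        s = survey_spec[:, k:k + nt_t]
        s = s - s.mean()
        scores[k] = np.sum(s * t) / (np.sqrt(np.sum(s ** 2)) * t_norm)
    return scores

# Toy example: a 0.5 s tonal "call" buried in noise.
rng = np.random.default_rng(10)
fs = 8000
call = np.sin(2 * np.pi * 1500 * np.arange(fs // 2) / fs)
audio = 0.2 * rng.normal(size=5 * fs)
audio[2 * fs:2 * fs + call.size] += call
_, _, survey_spec = spectrogram(audio, fs=fs, nperseg=256)
_, _, template_spec = spectrogram(call, fs=fs, nperseg=256)
hop = 256 - 256 // 8                      # scipy default noverlap is nperseg // 8
print("best match near sample:", np.argmax(xcorr_scores(survey_spec, template_spec)) * hop)
```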

  6. Nonisotropic turbulence: A turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Liu, Kunlun

    2005-11-01

    The probability density function (PDF) and the two-point correlations of a flat-plate turbulent boundary layer subjected to zero pressure gradient have been calculated by direct numerical simulation. It is known that the strong shear near the wall deforms the vortices and develops stretched coherent structures such as streaks and hairpins, which eventually cause the nonisotropy of wall shear flows. The PDF and the two-point correlations of isotropic flows have been studied for a long time; however, our knowledge of the influence of shear on the PDF and two-point correlations is still very limited. This study is intended to investigate such influence using numerical simulation. Results are presented for a case having a Mach number of M = 0.1 and a Reynolds number of 2000, based on displacement thickness. The results indicate that the PDF of the streamwise velocity is lognormal, the PDF of the wall-normal velocity is approximately Cauchy, and the PDF of the spanwise velocity is nearly Gaussian. The mean and variance of these PDFs vary with the distance from the wall. The two-point correlations are homogeneous in the spanwise direction, vary slightly in the streamwise direction, but change considerably in the wall-normal direction. The correlations R_ww and R_vv can be represented as elliptical surfaces, and with a well-chosen normalization they become self-similar.
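
    As an illustration of the kind of two-point correlation discussed above, the following sketch computes the two-point correlation coefficient along a statistically homogeneous (and here periodic) spanwise direction from a synthetic velocity plane; it is not the simulation code of the study.

```python
import numpy as np

def two_point_correlation_spanwise(u):
    """Two-point correlation coefficient R(dz) along the last (spanwise) axis,
    assumed statistically homogeneous and periodic, so averaging over shifts is valid."""
    fluct = u - u.mean(axis=-1, keepdims=True)
    var = np.mean(fluct ** 2)
    nz = u.shape[-1]
    return np.array([np.mean(fluct * np.roll(fluct, -k, axis=-1))
                     for k in range(nz // 2)]) / var

# Synthetic periodic plane standing in for one wall-parallel slice of a DNS field:
# 16 streamwise samples x 128 spanwise points.
rng = np.random.default_rng(4)
z = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * z) + 0.3 * rng.normal(size=(16, 128))
print(np.round(two_point_correlation_spanwise(u)[:5], 3))
```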

  7. A closer look at the concept of regional clocks for Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Weber, Robert; Karabatic, Ana; Thaler, Gottfried; Abart, Christoph; Huber, Katrin

    2010-05-01

    Under the precondition of at least two successfully tracked signals at different carrier frequencies, we may obtain their ionosphere-free linear combination. By introducing approximate values for geometric effects like orbits and tropospheric delay, as well as an initial bias parameter N per individual satellite, we can solve for the satellite clock with respect to the receiver clock. Noting that residual effects like orbit errors, remaining tropospheric delays and a residual bias parameter map into these parameters, this procedure leaves us with a kind of virtual clock differences. These clocks cover regional effects and are therefore clearly correlated with the clocks at nearby stations; we therefore call these clock differences, which are clearly different from clock solutions provided for instance by the IGS, the "regional clocks". When introducing the regional clocks obtained from real-time data of a GNSS reference station network, we are able to process the coordinates of a nearby isolated station via PPP. In terms of PPP convergence time, which is reduced to 30 minutes or less, this procedure is clearly favorable. The accuracy is quite comparable with state-of-the-art PPP procedures. Nevertheless, this approach cannot compete in fixing times with double-difference approaches, but the correlation holds over hundreds of kilometers distance to the master station and the clock differences can easily be obtained, even in real time. This presentation provides preliminary results of the project RA-PPP. RA-PPP is a research project financed by the Federal Ministry for Transport, Innovation and Technology, managed by the Austrian Research Promotion Agency (FFG) in the course of the 6th call of the Austrian Space Application Program (ASAP). RA-PPP stands for Rapid Precise Point Positioning, which denotes the wish for faster and more accurate PPP algorithms. The concept of regional clocks, which will be demonstrated in detail in this presentation, is one out of four concepts to be evaluated in this project.
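
    As a small, hedged illustration of the starting point of this processing chain, the ionosphere-free combination of two observables can be formed as below; the GPS L1/L2 frequencies are assumed here for concreteness and are not a statement about the signals used in the project.

```python
# Minimal sketch (not the project's software) of the first-order ionosphere-free
# linear combination, assuming GPS L1/L2 observables expressed in metres.
F1 = 1575.42e6   # GPS L1 carrier frequency, Hz (assumed)
F2 = 1227.60e6   # GPS L2 carrier frequency, Hz (assumed)

def ionosphere_free(obs_l1, obs_l2):
    """Combine two observables (code pseudorange or carrier phase, in metres)
    so that the first-order ionospheric delay cancels, leaving geometry,
    clocks and biases."""
    a = F1 ** 2 / (F1 ** 2 - F2 ** 2)
    b = F2 ** 2 / (F1 ** 2 - F2 ** 2)
    return a * obs_l1 - b * obs_l2

# Hypothetical pseudoranges in metres for one satellite at one epoch.
print(round(ionosphere_free(22_000_103.45, 22_000_108.12), 3))
```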

  8. Intrinsic alignments of galaxies in the MassiveBlack-II simulation: Analysis of two-point statistics

    DOE PAGES

    Tenneti, Ananth; Singh, Sukhdeep; Mandelbaum, Rachel; ...

    2015-03-11

    The intrinsic alignment of galaxies with the large-scale density field is an important astrophysical contaminant in upcoming weak lensing surveys. We present detailed measurements of the galaxy intrinsic alignments and associated ellipticity-direction (ED) and projected shape (w_g+) correlation functions for galaxies in the cosmological hydrodynamic MassiveBlack-II (MB-II) simulation. We carefully assess the effects on galaxy shapes, on the misalignment of the stellar component with the dark matter shape, and on two-point statistics of iteratively weighted (by mass and luminosity) definitions of the (reduced and unreduced) inertia tensor. We find that iterative procedures must be adopted for a reliable measurement of the reduced tensor, but that luminosity versus mass weighting has only negligible effects. Both ED and w_g+ correlations increase in amplitude with subhalo mass (in the range 10¹⁰–6.0 × 10¹⁴ h⁻¹ M⊙), with a weak redshift dependence (from z = 1 to z = 0.06) at fixed mass. At z ~ 0.3, we predict a w_g+ that is in reasonable agreement with SDSS LRG measurements and that decreases in amplitude by a factor of ~5–18 for galaxies in the LSST survey. We also compared the intrinsic alignment of centrals and satellites, with clear detection of satellite radial alignments within the host halos. Finally, we show that the w_g+ (using subhalos as tracers of the density field) and w_δ+ (using the dark matter density) predictions from the simulations agree with those of the non-linear alignment (NLA) model at scales where the 2-halo term dominates the correlations (and we tabulate the associated NLA fitting parameters). The 1-halo term induces a scale-dependent bias at small scales which is not captured by the NLA model.

  9. Toward frameless stereotaxy: anatomical-vascular correlation and registration

    NASA Astrophysics Data System (ADS)

    Henri, Christopher J.; Cukiert, A.; Collins, D. Louis; Olivier, A.; Peters, Terence M.

    1992-09-01

    We present a method to correlate and register a projection angiogram with volume rendered tomographic data from the same patient. Previously, we have described how this may be accomplished using a stereotactic frame to handle the required coordinate transformations. Here we examine the efficacy of employing anatomically based landmarks as opposed to external fiducials to achieve the same results. The experiments required a neurosurgeon to identify several homologous points in a DSA image and a MRI volume which were subsequently used to compute the coordinate transformations governing the matching procedure. Correlation accuracy was assessed by comparing these results to those employing fiducial markers on a stereotactic frame, and by examining how different levels of noise in the positions of the homologous points affect the resulting coordinate transformations. Further simulations suggest that this method has potential to be used in planning stereotactic procedures without the use of a frame.
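
    One generic way to compute a projection transformation from homologous 3-D/2-D point pairs is the direct linear transform (DLT); the sketch below is such a generic illustration with synthetic points, not the registration procedure described in the paper.

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Direct Linear Transform: estimate a 3x4 projection matrix P (up to scale)
    such that s*[x, y, 1]^T = P*[X, Y, Z, 1]^T, from >= 6 homologous point pairs."""
    rows = []
    for (X, Y, Z), (x, y) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)        # right singular vector of the smallest singular value

# Synthetic check: a simple perspective projection viewed from z = -5.
rng = np.random.default_rng(9)
P_true = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 5.0]])      # x = X/(Z+5), y = Y/(Z+5)
pts3d = rng.uniform(-1, 1, size=(8, 3))
h = np.hstack([pts3d, np.ones((8, 1))]) @ P_true.T
pts2d = h[:, :2] / h[:, 2:3]
P_est = dlt_projection_matrix(pts3d, pts2d)
print(np.round(P_est / P_est[2, 3] * P_true[2, 3], 3))   # matches P_true up to scale
```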

  10. Prediction and Warning of Transported Turbulence in Long-Haul Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Ellrod, Gary P. (Inventor); Spence, Mark D. (Inventor); Shipley, Scott T. (Inventor)

    2017-01-01

    An aviation flight planning system is used to predict and warn of intersections of flight paths with transported meteorological disturbances, such as transported turbulence and related phenomena. Sensed data and transmitted data provide real-time and forecast information on meteorological conditions. Models of transported meteorological disturbances are applied to correlate the sensed data with the received transmitted data. The correlation is used to identify the source characteristics of transported disturbances and to predict their trajectories from source to intersection with the flight path in space and time. The correlated data are provided to a visualization system that projects the coordinates of a point of interest (POI) into a selected point of view (POV) and displays the flight track and the predicted disturbance warnings for the flight crew.

  11. [Spatial point patterns of Antarctic krill fishery in the northern Antarctic Peninsula].

    PubMed

    Yang, Xiao Ming; Li, Yi Xin; Zhu, Guo Ping

    2016-12-01

    As a key species in the Antarctic ecosystem, Antarctic krill (hereafter krill) often show an aggregated spatial distribution, which in turn is reflected in the spatial patterns of krill fishing operations. This study is based on fishing data collected from two Chinese krill fishing vessels: vessel A, a professional krill fishing vessel, and vessel B, a fishing vessel which shifted between the Chilean jack mackerel (Trachurus murphyi) fishing ground and the krill fishing ground. In order to explore the spatial distribution characteristics, and their ecological implications, of two clearly different fishing fleets under high and low nominal catch per unit effort (CPUE), the present study analyzed, from the viewpoint of spatial point patterns, the spatial distribution of the krill fishery in the northern Antarctic Peninsula from three aspects: (1) the two vessels' point pattern characteristics for higher and lower CPUEs at different scales; (2) the correlation of the bivariate point patterns between the points of higher and lower CPUE; and (3) the correlation patterns of CPUE. Based on Ripley's L function and the mark correlation function, the results showed that the point patterns of the higher and lower catches were similar, both showing an aggregated distribution in the study window at all scales. The aggregation intensity of krill fishing was nearly maximal at a spatial scale of 15 km and remained at stably high values at scales of 15-50 km. The aggregation intensity of the krill fishery point patterns could be ordered as: higher CPUE of vessel A > lower CPUE of vessel B > higher CPUE of vessel B > lower CPUE of vessel A. The relationship between the higher and lower CPUEs of vessel A showed positive correlation at spatial scales of 0-75 km and a stochastic relationship beyond 75 km, whereas vessel B showed positive correlation at all spatial scales. The point events of higher and lower CPUEs were synchronized, showing significant correlations at most spatial scales because of the dynamic and complex nature of krill aggregation patterns. The distribution of vessel A's CPUEs was positively correlated at scales of 0-44 km, but negatively correlated at scales of 44-80 km. The distribution of vessel B's CPUEs was negatively correlated at scales of 50-70 km, with no significant correlations found at other scales. The CPUE mark point patterns showed a negative correlation, which indicates that intraspecific competition for space and prey was significant. There were significant differences in spatial point pattern distribution between vessel A, with higher fishing capacity, and vessel B, with lower fishing capacity. The results suggest that the professional krill fishing vessel is suitable for conducting spatial point pattern analyses and scientific fishery surveys.
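
    As a hedged illustration of the Ripley's L statistic used in this kind of point pattern analysis, the following sketch evaluates L(r) - r (positive values indicating aggregation) for a synthetic clustered pattern; edge corrections are omitted and the data are not the fishery positions analysed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def ripley_L(points, r_values, area):
    """Ripley's L function for a 2-D point pattern (no edge correction, for brevity):
    K(r) = area / (n*(n-1)) * number of ordered pairs closer than r, L(r) = sqrt(K/pi)."""
    n = len(points)
    tree = cKDTree(points)
    pairs = tree.count_neighbors(tree, r_values) - n      # remove the n self-pairs
    K = area * pairs / (n * (n - 1))
    return np.sqrt(K / np.pi)

# Clustered toy pattern standing in for fishing positions in a 100 km x 100 km window.
rng = np.random.default_rng(5)
centres = rng.uniform(0, 100, size=(10, 2))
pts = np.vstack([c + rng.normal(scale=3, size=(30, 2)) for c in centres])
r = np.linspace(1, 25, 6)
print(np.round(ripley_L(pts, r, area=100 * 100) - r, 2))   # positive values -> clustering
```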

  12. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    NASA Technical Reports Server (NTRS)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

    The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moments as a special case. The resulting equations for the similarity solutions include two constants, beta and Re_σ, that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that these solutions, in general, will only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensor provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists, which can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  13. Accuracy of Orthomosaic Generated by Different Methods in Example of UAV Platform MUST Q

    NASA Astrophysics Data System (ADS)

    Liba, N.; Berg-Jürgens, J.

    2015-11-01

    The development of photogrammetry has reached a new level due to the use of unmanned aerial vehicles (UAVs). In Estonia, the main areas of UAV use are monitoring overhead power lines for energy companies, monitoring fields in agriculture, and estimating stockpile use in mining. The project was carried out by order of the City of Tartu for future road construction. This research studies the automation of aerial image processing from the UAV platform MUST Q and the reduction of the time spent on ground control points (GCPs). For that purpose, two projects were created with the software Pix4D: the first was processed automatically without GCPs; the second used GCPs, but all processing was still done automatically. As a result, two orthomosaics with a pixel size of 5 cm were composed. The projects were required to meet an accuracy limit of three times the pixel size. The most accurate project was the one that used ground control points for levelling; it remained within the allowed error limit, with an orthomosaic accuracy of 0.132 m. The project without ground control points had an accuracy of 1.417 m.

  14. Quantum Correlation Properties in Composite Parity-Conserved Matrix Product States

    NASA Astrophysics Data System (ADS)

    Zhu, Jing-Min

    2016-09-01

    We present a new approach to constructing long-range quantum correlations in quantum many-body systems. Our proposed composite parity-conserved matrix product state has long-range quantum correlation only between two spin blocks whose block length is larger than 1, whereas any single subsystem has only short-range quantum correlation, and we investigate how the quantum correlation properties of the two spin blocks vary with the environment parameter and the number of spacing spins. We also find that the geometric quantum discords of two nearest-neighbor spin blocks and of two next-nearest-neighbor spin blocks become smaller, while under other conditions the geometric quantum discord becomes larger than that of any subcomponent; that is, the increase or production of long-range quantum correlation comes at the cost of reducing short-range quantum correlation, whereas the corresponding classical correlation and total correlation show no such regularity. For nearest-neighbor and next-nearest-neighbor blocks all the correlations take their maximal values at the same points, while under other conditions, whether for the same or for different numbers of spacing spins, the correlations take their maximal values at different, though very close, points. We believe that our work is helpful for comprehensively and deeply understanding the organization and structure of quantum correlation, especially long-range quantum correlation, in quantum many-body systems, and further for the classification, description, and measurement of quantum correlation in quantum many-body systems.

  15. Sprays and Cartan projective connections

    NASA Astrophysics Data System (ADS)

    Saunders, D. J.

    2004-10-01

    Around 80 years ago, several authors (for instance H. Weyl, T.Y. Thomas, J. Douglas and J.H.C. Whitehead) studied the projective geometry of paths, using the methods of tensor calculus. The principal object of study was a spray, namely a homogeneous second-order differential equation, or more generally a projective equivalence class of sprays. At around the same time, E. Cartan studied the same topic from a different point of view, by imagining a projective space attached to a manifold, or, more generally, attached to a `manifold of elements'; the infinitesimal `glue' may be interpreted in modern language as a Cartan projective connection on a principal bundle. This paper describes the geometrical relationship between these two points of view.

  16. Risky Group Decision-Making Method for Distribution Grid Planning

    NASA Astrophysics Data System (ADS)

    Li, Cunbin; Yuan, Jiahang; Qi, Zhiqiang

    2015-12-01

    With the rapid growth of electricity consumption and of renewable energy, more and more research is devoted to distribution grid planning. To address the drawbacks of existing research, this paper proposes a new risky group decision-making method for distribution grid planning. Firstly, a mixed index system with qualitative and quantitative indices is built. Taking the fuzziness of linguistic evaluations into account, a cloud model is chosen to realize the "quantitative to qualitative" transformation, and interval-number decision matrices are constructed according to the "3En" principle. An m-dimensional interval-number decision vector is regarded as a hypercuboid in the m-dimensional attribute space, and a two-level orthogonal experiment is used to arrange points uniformly and dispersedly within it. The number of points is determined by the number of runs of the two-level orthogonal array, and these points form a distribution point set that represents a decision-making alternative. In order to eliminate the influence of correlation among the indices, the Mahalanobis distance is used to calculate the distance from each solution to the others, so that the dynamic set of solutions is viewed as the reference. Secondly, because the decision-maker's attitude can affect the results, this paper defines a prospect value function based on the signal-to-noise ratio (SNR) from the Mahalanobis-Taguchi system and obtains the comprehensive prospect value of each alternative as well as their ranking. Finally, the validity and reliability of this method are illustrated by examples, which show that the method is more valuable and superior to the others.
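
    As an illustration of the role played by the Mahalanobis distance in removing the effect of correlated indices, the sketch below computes Mahalanobis distances for a set of hypothetical decision vectors; it simplifies the method by measuring each alternative against the mean of the set rather than implementing the full procedure of the paper.

```python
import numpy as np

def mahalanobis_distances(solutions):
    """Mahalanobis distance of each decision vector from the mean of the set,
    which removes the effect of correlation (and scale) among the indices."""
    solutions = np.asarray(solutions, dtype=float)
    diff = solutions - solutions.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(solutions, rowvar=False))
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

# Hypothetical planning alternatives scored on four correlated indices.
rng = np.random.default_rng(6)
base = rng.normal(size=(8, 2))
mix = np.array([[0.9, 0.2], [0.1, 0.8]])
scores = np.hstack([base, base @ mix + 0.05 * rng.normal(size=(8, 2))])
print(np.round(mahalanobis_distances(scores), 2))
```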

  17. Relationship between sensibility and ability to read braille in diabetics.

    PubMed

    Nakada, M; Dellon, A L

    1989-01-01

    Twenty-five vision-impaired diabetics received an evaluation of sensibility. Each subject had received 2 years of instruction in braille reading at the Konan Rehabilitation Center prior to the sensibility testing. The sensibility evaluation consisted of cutaneous pressure threshold measurements with the Semmes-Weinstein monofilament and evaluation of moving and static two-point discrimination with the Disk-Criminator. The ability to read braille was graded by the braille-teaching instructors as good, fair, or unable. The results of the evaluation of sensibility demonstrated that the value of the cutaneous pressure threshold did not correlate with the ability to read braille. Moving and static two-point discrimination were found to correlate highly (P < .001) with the ability to read braille at a level of fair or good. No patient in this study with a moving two-point discrimination value of 4 or more, or a static two-point discrimination value of 5 or more, was able to read braille even at the fair level of ability.

  18. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in cosmology is motivating the search for new solutions to the difficulties associated with long processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the large-scale structure of the Universe, namely the two- and three-point angular correlation functions, whose processing time has grown critically with the size of the data samples. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can restrict the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof of concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi tessellation.
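
    As a toy illustration of the partitioning idea, the sketch below builds a Voronoi tessellation of synthetic galaxy positions and extracts the cell adjacency list, so that short-separation pair searches can be limited to neighbouring cells instead of all N^2 pairs; it is a simplified stand-in, not the implementation described in the paper.

```python
import numpy as np
from scipy.spatial import Voronoi

# Build a Voronoi tessellation of (projected) galaxy positions and extract the
# cell adjacency list: pair counting at small separations can then be limited to
# galaxies in the same or adjacent cells instead of looping over all N^2 pairs.
rng = np.random.default_rng(7)
galaxies = rng.uniform(0, 10, size=(500, 2))         # toy angular positions (degrees)
vor = Voronoi(galaxies)

neighbours = {i: set() for i in range(len(galaxies))}
for i, j in vor.ridge_points:                         # each ridge separates two generator points
    neighbours[i].add(j)
    neighbours[j].add(i)

print("mean number of adjacent cells:", np.mean([len(v) for v in neighbours.values()]))
```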

  19. Nonequilibrium magnetic properties in a two-dimensional kinetic mixed Ising system within the effective-field theory and Glauber-type stochastic dynamics approach.

    PubMed

    Ertaş, Mehmet; Deviren, Bayram; Keskin, Mustafa

    2012-11-01

    Nonequilibrium magnetic properties in a two-dimensional kinetic mixed spin-2 and spin-5/2 Ising system in the presence of a time-varying (sinusoidal) magnetic field are studied within the effective-field theory (EFT) with correlations. The time evolution of the system is described by using Glauber-type stochastic dynamics. The dynamic EFT equations are derived by employing the Glauber transition rates for two interpenetrating square lattices. We investigate the time dependence of the magnetizations for different interaction parameter values in order to find the phases in the system. We also study the thermal behavior of the dynamic magnetizations, the hysteresis loop area, and dynamic correlation. The dynamic phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane and we observe that the system exhibits dynamic tricritical and reentrant behaviors. Moreover, the system also displays a double critical end point (B), a zero-temperature critical point (Z), a critical end point (E), and a triple point (TP). We also performed a comparison with the mean-field prediction in order to point out the effects of correlations and found that some of the dynamic first-order phase lines, which are artifacts of the mean-field approach, disappeared.

  20. A double-correlation tremor-location method

    NASA Astrophysics Data System (ADS)

    Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur

    2017-02-01

    A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records of multiple triplets of seismographs back projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of the moduli is a robust measure of energy radiated from a point source or point sources even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations which are controlled by the station geometry, the finite frequency of the signal, the quality of the used velocity information and noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also in principle be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
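
    For orientation, the sketch below implements the common single-correlation back-projection that the double-correlation method builds on: traces are aligned to each trial source and the zero-lag products of station pairs are stacked; it is a schematic with a synthetic tremor-like signal, not the authors' algorithm.

```python
import numpy as np

def backproject(traces, station_xy, grid_xy, dt, velocity):
    """Schematic single-correlation back-projection: for each trial source, remove
    the predicted travel time from every trace and stack the zero-lag products of
    all station pairs (i.e. the pair cross-correlations at the predicted
    differential travel times). Peaks over the grid mark likely source positions;
    the double-correlation method extends this idea from pairs to triplets."""
    nsta = traces.shape[0]
    stack = np.zeros(len(grid_xy))
    for g, src in enumerate(grid_xy):
        shifts = np.rint(np.linalg.norm(station_xy - src, axis=1) / velocity / dt).astype(int)
        aligned = np.array([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
        for i in range(nsta):
            for j in range(i + 1, nsta):
                stack[g] += np.dot(aligned[i], aligned[j])
    return stack

# Toy setup: a broadband tremor-like source recorded at three stations (km, s).
rng = np.random.default_rng(8)
dt, v = 0.01, 3.0
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_src = np.array([4.0, 6.0])
nsamp = 2000
wavelet = rng.normal(size=nsamp)                      # tremor-like source signal
delays = np.rint(np.linalg.norm(stations - true_src, axis=1) / v / dt).astype(int)
traces = np.array([np.roll(wavelet, d) for d in delays]) + 0.1 * rng.normal(size=(3, nsamp))
grid = np.array([[x, y] for x in range(0, 11, 2) for y in range(0, 11, 2)])
print("best grid point:", grid[np.argmax(backproject(traces, stations, grid, dt, v))])
```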

  1. 76 FR 54732 - Eleven Point Resource Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... recommendations to the Forest Service concerning projects and funding consistent with the title II of the Act. The meeting is open to the public. The purpose of the meeting is review proposed forest management projects so... further information. SUPPLEMENTARY INFORMATION: The following business will be conducted: Two projects...

  2. Focus on Adaptation. Final Report.

    ERIC Educational Resources Information Center

    Focus, 1997

    1997-01-01

    A panel of state staff, Professional Development Center directors, and other experts reviewed current or previous exemplary projects in Pennsylvania and in the U.S. and published project descriptions in a newsletter ("FOCUS" bulletin). Twenty-two special projects were selected as exemplary based on a five-point scale for innovation,…

  3. Selective correlations in finite quantum systems and the Desargues property

    NASA Astrophysics Data System (ADS)

    Lei, C.; Vourdas, A.

    2018-06-01

    The Desargues property is well known in the context of projective geometry. An analogous property is presented in the context of both classical and Quantum Physics. In a classical context, the Desargues property implies that two logical circuits with the same input show in their outputs selective correlations. In general their outputs are uncorrelated, but if the output of one has a particular value, then the output of the other has another particular value. In a quantum context, the Desargues property implies that two experiments each of which involves two successive projective measurements have selective correlations. For a particular set of projectors, if in one experiment the second measurement does not change the output of the first measurement, then the same is true in the other experiment.

  4. POLARBEAR constraints on cosmic birefringence and primordial magnetic fields

    DOE PAGES

    Ade, Peter A. R.; Arnold, Kam; Atlas, Matt; ...

    2015-12-08

    Here, we constrain anisotropic cosmic birefringence using four-point correlations of even-parity E-mode and odd-parity B-mode polarization in the cosmic microwave background measurements made by the POLARization of the Background Radiation (POLARBEAR) experiment in its first season of observations. We find that the anisotropic cosmic birefringence signal from any parity-violating processes is consistent with zero. The Faraday rotation from anisotropic cosmic birefringence can be compared with the equivalent quantity generated by primordial magnetic fields if they existed. The POLARBEAR nondetection translates into a 95% confidence level (C.L.) upper limit of 93 nanogauss (nG) on the amplitude of an equivalent primordial magnetic field, inclusive of systematic uncertainties. This four-point correlation constraint on Faraday rotation is about 15 times tighter than the upper limit of 1380 nG inferred from constraining the contribution of Faraday rotation to two-point correlations of B-modes measured by Planck in 2015. Metric perturbations sourced by primordial magnetic fields would also contribute to the B-mode power spectrum. Using the POLARBEAR measurements of the B-mode power spectrum (two-point correlation), we set a 95% C.L. upper limit of 3.9 nG on primordial magnetic fields assuming a flat prior on the field amplitude. This limit is comparable to what was found in the Planck 2015 two-point correlation analysis with both temperature and polarization. Finally, we perform a set of systematic error tests and find no evidence for contamination. This work marks the first time that anisotropic cosmic birefringence or primordial magnetic fields have been constrained from the ground at subdegree scales.

  5. Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images

    NASA Astrophysics Data System (ADS)

    Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.

    2001-05-01

    We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, in the unfiltered back-projection, the darkest pixel is assumed to be the center of the vessel. The computer replaces this point with the newly computer-calculated point. A second back-projection is calculated around the original point orthogonal to a vector connecting the newly-calculated first point and the user-determined second point. The darkest pixel within the reconstruction is determined. The computer then replaces the second point with the XYZ coordinates of the darkest pixel within this second reconstruction. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full width at half maximum algorithm. On all subsequent vessels, the process works the same way, except that at each point, distances between the current point and all previously determined points along different vessels are determined. If the distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
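
    The darkest-pixel tracking loop described above can be sketched compactly. The toy function below is a hypothetical stand-in: it searches a reconstructed 3D volume directly instead of computing sub-volume unfiltered back-projections from planar images, and all names and parameters are illustrative.

```python
import numpy as np

def track_centerline(volume, seed, direction, n_steps=50, radius=3, step=2.0, avg_window=5):
    """Toy darkest-voxel centerline tracker (assumes the track stays inside the volume)."""
    points = [np.asarray(seed, dtype=float)]
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    for _ in range(n_steps):
        # Moving-average direction over the most recent centerline points.
        recent = points[-avg_window:]
        if len(recent) >= 2 and np.linalg.norm(recent[-1] - recent[0]) > 0:
            direction = (recent[-1] - recent[0]) / np.linalg.norm(recent[-1] - recent[0])
        guess = points[-1] + step * direction
        zi, yi, xi = np.round(guess).astype(int)
        # The darkest voxel in a small cube around the predicted position plays the
        # role of the darkest pixel of the sub-volume back-projection.
        sub = volume[zi - radius:zi + radius + 1,
                     yi - radius:yi + radius + 1,
                     xi - radius:xi + radius + 1]
        if sub.size == 0:
            break
        dz, dy, dx = np.unravel_index(np.argmin(sub), sub.shape)
        points.append(np.array([zi - radius + dz, yi - radius + dy, xi - radius + dx], float))
    return np.array(points)
```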

  6. Characterization of cancer stem cell properties of CD24 and CD26-positive human malignant mesothelioma cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Hiroto; Naito, Motohiko; Ghani, Farhana Ishrat

    2012-03-16

    Highlights: • We focused on CD24 and CD26 for further analysis of CSC properties in MM. • Their expressions were correlated with chemoresistance, cell growth, and invasion. • Their expressions were also correlated with several cancer related genes. • The expression of each marker was correlated with a different CSC property in Meso1. • Phosphorylation of ERK by EGF was regulated by expression of CD26, but not CD24. -- Abstract: Malignant mesothelioma (MM) is an asbestos-related malignancy characterized by rapid growth and poor prognosis. In our previous study, we have demonstrated that several cancer stem cell (CSC) markers correlated with CSC properties in MM cells. Among these markers, we focused on two: CD24, the common CSC marker, and CD26, the additional CSC marker. We further analyzed the CSC properties of CD24- and CD26-positive MM cells. We established RNAi-knockdown cells and found that these markers were significantly correlated with chemoresistance, proliferation, and invasion potentials in vitro. Interestingly, while Meso-1 cells expressed both CD24 and CD26, the presence of each of these two markers was correlated with a different CSC property. In addition, downstream signaling of these markers was explored by microarray analysis, which revealed that their expressions were correlated with several cancer-related genes. Furthermore, phosphorylation of ERK by EGF stimulation was significantly affected by the expression of CD26, but not CD24. These results suggest that CD24 and CD26 differentially regulate the CSC potentials of MM and could be promising targets for CSC-oriented therapy.

  7. Optical correlator method and apparatus for particle image velocimetry processing

    NASA Technical Reports Server (NTRS)

    Farrell, Patrick V. (Inventor)

    1991-01-01

    Young's fringes are produced from a double exposure image of particles in a flowing fluid by passing laser light through the film and projecting the light onto a screen. A video camera receives the image from the screen and controls a spatial light modulator. The spatial modulator has a two-dimensional array of cells, the transmissivity of which is controlled in relation to the brightness of the corresponding pixel of the video camera image of the screen. A collimated beam of laser light is passed through the spatial light modulator to produce a diffraction pattern which is focused onto another video camera, with the output of the camera being digitized and provided to a microcomputer. The diffraction pattern formed when the laser light is passed through the spatial light modulator and is focused to a point corresponds to the two-dimensional Fourier transform of the Young's fringe pattern projected onto the screen. The data obtained fro... This invention was made with U.S. Government support awarded by the Department of the Army (DOD) and NASA grant number(s): DOD #DAAL03-86-K0174 and NASA #NAG3-718. The U.S. Government has certain rights in this invention.

  8. Spin-orbital quantum liquid on the honeycomb lattice

    NASA Astrophysics Data System (ADS)

    Corboz, Philippe

    2013-03-01

    The symmetric Kugel-Khomskii model can be seen as a minimal model describing the interactions between spin and orbital degrees of freedom in transition-metal oxides with orbital degeneracy, and it is equivalent to the SU(4) Heisenberg model of four-color fermionic atoms. We present simulation results for this model on various two-dimensional lattices obtained with infinite projected entangled-pair states (iPEPS), an efficient variational tensor-network ansatz for two-dimensional wave functions in the thermodynamic limit. This approach can be seen as a two-dimensional generalization of matrix product states - the underlying ansatz of the density matrix renormalization group method. We find a rich variety of exotic phases: while on the square and checkerboard lattices the ground state exhibits dimer-Néel order and plaquette order, respectively, quantum fluctuations on the honeycomb lattice destroy any order, giving rise to a spin-orbital liquid. Our results are supported by flavor-wave theory and exact diagonalization. Furthermore, the properties of the spin-orbital liquid state on the honeycomb lattice are accurately accounted for by a projected variational wave function based on the pi-flux state of fermions on the honeycomb lattice at 1/4-filling. In that state, correlations are algebraic because of the presence of a Dirac point at the Fermi level, suggesting that the ground state is an algebraic spin-orbital liquid. This model provides a good starting point to understand the recently discovered spin-orbital liquid behavior of Ba3CuSb2O9. The present results also suggest choosing optical lattices with honeycomb geometry in the search for quantum liquids in ultra-cold four-color fermionic atoms. We acknowledge the financial support from the Swiss National Science Foundation.

  9. The mandibular symphysis as a starting point for the occlusal-level reconstruction of panfacial fractures with bicondylar fractures and interruption of the maxillary and mandibular arches: report of two cases.

    PubMed

    Pau, Mauro; Reinbacher, Knut Ernst; Feichtinger, Matthias; Navysany, Kawe; Kärcher, Hans

    2014-06-01

    Panfacial fractures represent a challenge, even for experienced maxillofacial surgeons, because all references for reconstructing the facial skeleton are missing. Logical reconstructive sequencing based on a clear understanding of the correlation between projection and the widths and lengths of facial subunits should enable the surgeon to achieve correct realignment of the bony framework of the face and to prevent late deformity and functional impairment. Reconstruction is particularly challenging in patients presenting with concomitant fractures at the Le Fort I level and affecting the palate, condyles, and mandibular symphysis. In cases without bony loss and with sufficient dentition, we believe that accurate fixation of the mandibular symphysis can represent the starting point of a reconstructive sequence that allows successful reconstruction at the Le Fort I level. Two patients were treated in our department by reconstruction starting in the occlusal area through repair of the mandibular symphysis. Both patients considered the postoperative facial shape and profile to be satisfactory and comparable to the pre-injury situation. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. Contour-time approach to the Bose-Hubbard model in the strong coupling regime: Studying two-point spatio-temporal correlations at the Hartree-Fock-Bogoliubov level

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, Matthew R. C.; Kennett, Malcolm P.

    2018-05-01

    We develop a formalism that allows the study of correlations in space and time in both the superfluid and Mott insulating phases of the Bose-Hubbard model. Specifically, we obtain a two-particle irreducible effective action within the contour-time formalism that allows for both equilibrium and out-of-equilibrium phenomena. We derive equations of motion for both the superfluid order parameter and two-point correlation functions. To assess the accuracy of this formalism, we study the equilibrium solution of the equations of motion and compare our results to existing strong-coupling methods as well as exact methods where possible. We discuss applications of this formalism to out-of-equilibrium situations.

  11. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280

  12. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
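
    The record above describes a parallel C/Python package; purely to illustrate the statistic it computes (not that code), the brute-force sketch below evaluates the Landy-Szalay estimator for a small 3D catalogue, assuming enough random points that every bin has RR > 0.

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

def landy_szalay_xi(data, randoms, bins):
    """Brute-force xi(r) = (DD - 2DR + RR) / RR for small (N, 3) position arrays."""
    nd, nr = len(data), len(randoms)
    dd, _ = np.histogram(pdist(data), bins=bins)
    rr, _ = np.histogram(pdist(randoms), bins=bins)
    dr, _ = np.histogram(cdist(data, randoms).ravel(), bins=bins)
    # Normalize pair counts by the total number of pairs in each catalogue.
    dd = dd / (nd * (nd - 1) / 2)
    rr = rr / (nr * (nr - 1) / 2)
    dr = dr / (nd * nr)
    return (dd - 2 * dr + rr) / rr

# Example: uncorrelated points should give xi close to 0 in every bin.
rng = np.random.default_rng(0)
xi = landy_szalay_xi(rng.random((500, 3)), rng.random((2000, 3)), np.linspace(0.01, 0.5, 11))
```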

  13. The impact of science notebook writing on ELL and low-SES students' science language development and conceptual understanding

    NASA Astrophysics Data System (ADS)

    Huerta, Margarita

    This quantitative study explored the impact of literacy integration in a science inquiry classroom involving the use of science notebooks on the academic language development and conceptual understanding of students from diverse (i.e., English Language Learners, or ELLs) and low socio-economic status (low-SES) backgrounds. The study derived from a randomized, longitudinal, field-based NSF funded research project (NSF Award No. DRL - 0822343) targeting ELL and non-ELL students from low-SES backgrounds in a large urban school district in Southeast Texas. The study used a scoring rubric (modified and tested for validity and reliability) to analyze fifth-grade school students' science notebook entries. Scores for academic language quality (or, for brevity, language ) were used to compare language growth over time across three time points (i.e., beginning, middle, and end of the school year) and to compare students across categories (ELL, former ELL, non-ELL, and gender) using descriptive statistics and mixed between-within subjects analysis of variance (ANOVA). Scores for conceptual understanding (or, for brevity, concept) were used to compare students across categories (ELL, former ELL, non-ELL, and gender) in three domains using descriptive statistics and ANOVA. A correlational analysis was conducted to explore the relationship, if any, between language scores and concept scores for each group. Students demonstrated statistically significant growth over time in their academic language as reflected by science notebook scores. While ELL students scored lower than former ELL and non-ELL students at the first two time points, they caught up to their peers by the third time point. Similarly, females outperformed males in language scores in the first two time points, but males caught up to females in the third time point. In analyzing conceptual scores, ELLs had statistically significant lower scores than former-ELL and non-ELL students, and females outperformed males in the first two domains. These differences, however, were not statistically significant in the last domain. Last, correlations between language and concept scores were overall, positive, large, and significant across domains and groups. The study presents a rubric useful for quantifying diverse students' science notebook entries, and findings add to the sparse research on the impact of writing in diverse students' language development and conceptual understanding in science.

  14. Accelerating the two-point and three-point galaxy correlation functions using Fourier transforms

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2016-01-01

    Though Fourier transforms (FTs) are a common technique for finding correlation functions, they are not typically used in computations of the anisotropy of the two-point correlation function (2PCF) about the line of sight in wide-angle surveys because the line-of-sight direction is not constant on the Cartesian grid. Here we show how FTs can be used to compute the multipole moments of the anisotropic 2PCF. We also show how FTs can be used to accelerate the 3PCF algorithm of Slepian & Eisenstein. In both cases, these FT methods allow one to avoid the computational cost of pair counting, which scales as the square of the number density of objects in the survey. With the upcoming large data sets of Dark Energy Spectroscopic Instrument, Euclid, and Large Synoptic Survey Telescope, FT techniques will therefore offer an important complement to simple pair or triplet counts.
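
    As a hedged illustration of the FT route to correlation functions, the sketch below estimates the isotropic 2PCF of a periodic overdensity grid from its power spectrum; it does not implement the anisotropic multipole or 3PCF algorithms of Slepian & Eisenstein, and it assumes a cubic grid.

```python
import numpy as np

def xi_from_grid(delta, box_size, nbins=30):
    """Isotropic xi(r) of a periodic overdensity grid via the FFT of |delta_k|^2.

    Assumes an n x n x n grid; empty bins are returned as NaN.
    """
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # Circular autocorrelation of the grid, averaged over all cells.
    xi_grid = np.fft.irfftn(delta_k * np.conj(delta_k), s=delta.shape) / delta.size
    # Periodic separation of each cell from the origin.
    ax = np.fft.fftfreq(n, d=1.0 / n) * (box_size / n)
    rz, ry, rx = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(rx**2 + ry**2 + rz**2).ravel()
    bins = np.linspace(0.0, box_size / 2.0, nbins + 1)
    idx = np.digitize(r, bins) - 1
    xi = np.full(nbins, np.nan)
    flat = xi_grid.ravel()
    for i in range(nbins):
        sel = idx == i
        if sel.any():
            xi[i] = flat[sel].mean()
    return 0.5 * (bins[1:] + bins[:-1]), xi
```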

  15. Blob-hole correlation model for edge turbulence and comparisons with NSTX gas puff imaging data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myra, J. R.; Zweben, S. J.; Russell, D. A.

    We report that gas puff imaging (GPI) observations made in NSTX [Zweben S J, et al., 2017 Phys. Plasmas 24 102509] have revealed two-point spatial correlations of edge and scrape-off layer turbulence in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this paper, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in several respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored and discussed with respect to experimental observations. Additional analysis of the experimental GPI dataset is performed to further test this blob-hole correlation model. A time delay two-point spatial correlation study did not reveal inward propagation of the negative correlation structures that were postulated to correspond to holes in the data nor did it suggest that the negative correlation structures are due to neutral shadowing. However, tracking of the highest and lowest values (extrema) of the normalized GPI fluctuations shows strong evidence for mean inward propagation of minima and outward propagation of maxima, in qualitative agreement with theoretical expectations. Finally, other properties of the experimentally observed extrema are discussed.

  16. Blob-hole correlation model for edge turbulence and comparisons with NSTX gas puff imaging data

    DOE PAGES

    Myra, J. R.; Zweben, S. J.; Russell, D. A.

    2018-05-15

    We report that gas puff imaging (GPI) observations made in NSTX [Zweben S J, et al., 2017 Phys. Plasmas 24 102509] have revealed two-point spatial correlations of edge and scrape-off layer turbulence in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this paper, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in several respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored and discussed with respect to experimental observations. Additional analysis of the experimental GPI dataset is performed to further test this blob-hole correlation model. A time delay two-point spatial correlation study did not reveal inward propagation of the negative correlation structures that were postulated to correspond to holes in the data nor did it suggest that the negative correlation structures are due to neutral shadowing. However, tracking of the highest and lowest values (extrema) of the normalized GPI fluctuations shows strong evidence for mean inward propagation of minima and outward propagation of maxima, in qualitative agreement with theoretical expectations. Finally, other properties of the experimentally observed extrema are discussed.

  17. CCD correlation techniques

    NASA Technical Reports Server (NTRS)

    Hewes, C. R.; Bosshart, P. W.; Eversole, W. L.; Dewit, M.; Buss, D. D.

    1976-01-01

    Two CCD techniques were discussed for performing an N-point sampled data correlation between an input signal and an electronically programmable reference function. The design and experimental performance of an implementation of the direct time correlator utilizing two analog CCDs and MOS multipliers on a single IC were evaluated. The performance of a CCD implementation of the chirp z transform was described, and the design of a new CCD integrated circuit for performing correlation by multiplication in the frequency domain was presented. This chip provides a discrete Fourier transform (DFT) or inverse DFT, multipliers, and complete support circuitry for the CCD CZT. The two correlation techniques are compared.

  18. Revising two-point discrimination assessment in normal aging and in patients with polyneuropathies.

    PubMed

    van Nes, S I; Faber, C G; Hamers, R M T P; Harschnitz, O; Bakkers, M; Hermans, M C E; Meijer, R J; van Doorn, P A; Merkies, I S J

    2008-07-01

    To revise the static and dynamic normative values for the two-point discrimination test and to examine its applicability and validity in patients with a polyneuropathy. Two-point discrimination threshold values were assessed in 427 healthy controls and 99 patients mildly affected by a polyneuropathy. The controls were divided into seven age groups ranging from 20-29, 30-39,..., up to 80 years and older; each group consisted of at least 30 men and 30 women. Two-point discrimination examination took place under standardised conditions on the index finger. Correlation studies were performed between the scores obtained and the values derived from the Weinstein Enhanced Sensory Test (WEST) and the arm grade of the Overall Disability SumScore (ODSS) in the patients' group (validity studies). Finally, the sensitivity to detect patients mildly affected by a polyneuropathy was evaluated for static and dynamic assessments. There was a significant age-dependent increase in the two-point discrimination values. No significant gender difference was found. The dynamic threshold values were lower than the static scores. The two-point discrimination values obtained correlated significantly with the arm grade of the ODSS (static values: r = 0.33, p = 0.04; dynamic values: r = 0.37, p = 0.02) and the scores of the WEST in patients (static values: r = 0.58, p = 0.0001; dynamic values: r = 0.55, p = 0.0002). The sensitivity for the static and dynamic threshold values was 28% and 33%, respectively. This study provides age-related normative two-point discrimination threshold values using a two-point discriminator (an aesthesiometer). This easily applicable instrument could be used as part of a more extensive neurological sensory evaluation.

  19. Adults Are More Likely To Become Eligible For Medicaid During Future Recessions If Their State Expanded Medicaid.

    PubMed

    Jacobs, Paul D; Hill, Steven C; Abdus, Salam

    2017-01-01

    Eligibility for and enrollment in Medicaid can vary with economic recessions, recoveries, and changes in personal income. Understanding how Medicaid responds to such forces is important to budget analysts and policy makers tasked with forecasting Medicaid enrollment. We simulated eligibility for Medicaid for the period 2005-14 in two scenarios: assuming that each state's eligibility rules in 2009, the year before passage of the Affordable Care Act (ACA), were in place during the entire study period; and assuming that the ACA's expanded eligibility rules were in place during the entire period for all states. Then we correlated the results with unemployment rates as a measure of the economy. Each percentage-point increase in the unemployment rate was associated with an increase in the share of people eligible for Medicaid of 0.32 percentage point under the 2009 eligibility rules and 0.77 percentage point under the ACA rules. Our simulations showed that the ACA expansion increased Medicaid's responsiveness to changes in unemployment. For states that have not expanded Medicaid eligibility, our analysis demonstrates that increased responsiveness to periods of high unemployment is one benefit of expansion. Project HOPE—The People-to-People Health Foundation, Inc.

  20. The method for homography estimation between two planes based on lines and points

    NASA Astrophysics Data System (ADS)

    Shemiakina, Julia; Zhukovsky, Alexander; Nikolaev, Dmitry

    2018-04-01

    The paper considers the problem of estimating the transform connecting two images of one planar object. A RANSAC-based method is proposed for calculating the parameters of the projective transform that uses point and line correspondences simultaneously. A series of experiments was performed on synthesized data. The results show that the algorithm's convergence rate is significantly higher when actual lines are used instead of the intersection points of lines. When both lines and feature points are used, the convergence rate is shown not to depend on the ratio of lines to feature points in the input dataset.
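
    A point-only RANSAC sketch of projective-transform (homography) estimation is shown below for orientation; the paper's contribution is to add line correspondences to this loop, which the sketch omits, and all names are illustrative.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform from 4+ point correspondences (src -> dst), both (N, 2)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def ransac_homography(src, dst, n_iter=1000, tol=2.0, seed=0):
    """Point-only RANSAC loop; the paper's method additionally scores line correspondences."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        sample = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[sample], dst[sample])
        proj = np.column_stack([src, np.ones(len(src))]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set (assumed to contain at least 4 points).
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```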

  1. Online Peer Observation: An Exploration of a Cross-Discipline Observation Project

    ERIC Educational Resources Information Center

    Nicolson, Margaret; Harper, Felicity

    2014-01-01

    In this article the authors compare two phases of an ongoing, annual online peer observation project at the Open University. Adopting a non-managerialist approach, the project aims to give teachers a renewed sense of collegiality, allowing them to take responsibility for aspects of their professional development and share practice points. While…

  2. Hierarchical clustering of EMD based interest points for road sign detection

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza

    2014-04-01

    This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the three following stages: interest point detection, clustering of those points, and similarity search. At the first stage, discriminative, rotation- and scale-invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on the stable local features related to brightness and color, which are extracted using a Gabor filter. Then points belonging to each partition are reclustered depending on the dispersion of the points in the initial cluster, using a position feature. This two-step hierarchical clustering yields the possible candidate road signs, or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used for matching the unknown signs with the existing known reference road signs stored in the database. The presented framework provides a novel way to detect a road sign from natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.
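
    The following toy sketch mimics the two-step grouping idea with plain k-means, an assumption made only for illustration; the paper's clustering is adaptive and uses Gabor-derived brightness/colour features rather than generic feature vectors.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def two_step_cluster(features, positions, k_feat=4, k_pos=3):
    """Cluster interest points by appearance first, then re-cluster each group by position."""
    features = np.asarray(features, dtype=float)
    positions = np.asarray(positions, dtype=float)
    _, feat_labels = kmeans2(features, k_feat, minit='++')
    groups = []
    for lbl in np.unique(feat_labels):
        idx = np.where(feat_labels == lbl)[0]
        if len(idx) <= k_pos:          # too few points to split further
            groups.append(idx)
            continue
        _, pos_labels = kmeans2(positions[idx], k_pos, minit='++')
        groups.extend(idx[pos_labels == p] for p in np.unique(pos_labels))
    return groups                      # lists of point indices, one per candidate ROI
```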

  3. Drought and host selection influence microbial community dynamics in the grass root microbiome

    USDA-ARS?s Scientific Manuscript database

    Through 16S rRNA gene profiling across two distinct watering regimes and two developmental time points, we demonstrate that there is a strong correlation between host phylogenetic distance and the microbiome dissimilarity within root tissues, and that drought weakens this correlation by inducing con...

  4. [Tobacco quality analysis of industrial classification of different years using near-infrared (NIR) spectrum].

    PubMed

    Wang, Yi; Xiang, Ma; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui

    2012-11-01

    In this study, tobacco quality analysis of the main industrial classifications across different years was carried out using spectrum projection and correlation methods. The data were near-infrared (NIR) spectra from Hongta Tobacco (Group) Co., Ltd. A total of 5730 tobacco leaf industrial classification samples from Yuxi in Yunnan province, collected from 2007 to 2010 by near-infrared spectroscopy, came from different plant parts and colors and all belong to the HONGDA tobacco variety. When the samples of the same year were randomly divided into analysis and verification sets in a 2:1 ratio, the verification set corresponded with the analysis set under spectrum projection, with correlation coefficients above 0.98. The correlation coefficients between different years under spectrum projection were above 0.97; the highest was between 2008 and 2009 and the lowest between 2007 and 2010. The study also discussed a method to obtain quantitative similarity values for different industrial classification samples. The similarity and consistency values are instructive for the combination and replacement of tobacco leaf blends.

  5. Methodology for quantification of waste generated in Spanish railway construction works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman Baez, Ana de; Villoria Saez, Paola; Rio Merino, Mercedes del

    Highlights: • Two equations for C and D waste estimation in railway construction works are developed. • Mixed C and D waste is the most generated category during railway construction works. • Tunnel construction is essential to quantify the waste generated during the works. • There is a relationship between C and D waste generated and railway functional units. • The methodology proposed can be used to obtain new constants for other areas. - Abstract: In recent years, the European Union (EU) has been focused on the reduction of construction and demolition (C and D) waste. Specifically, in 2006, Spain generated roughly 47 million tons of C and D waste, of which only 13.6% was recycled. This situation has led to the drawing up of many regulations on C and D waste during the past years, forcing EU countries to include new measures for waste prevention and recycling. Among these measures is the obligation to quantify the C and D waste expected to be generated during a construction project. However, limited data are available on civil engineering projects. Therefore, the aim of this research study is to improve C and D waste management in railway projects by developing a model for C and D waste quantification. For this purpose, we develop two equations which estimate in advance the amount, both in weight and volume, of the C and D waste likely to be generated in railway construction projects, including the category of C and D waste generated for the entire project.

  6. Patient characteristics and intervention effect as measured by Voice Handicap Index.

    PubMed

    Hengen, Johanna; Peterson, Malin; McAllister, Anita

    2017-07-01

    To analyze patients with a confirmed voice disorder in order to identify patterns regarding age, gender, and occupation compared to the general public. To explore effects of voice therapy according to the Voice Handicap Index (VHI) score pre- and post-therapy in relation to the number of sessions, age, and gender. Prospective cohort study. This study was conducted as a collaborative project between Linköping University and hospitals in the south-east health care region in Sweden. Six voice clinics participated by asking their patients voluntarily to complete the Swedish version of the VHI at the beginning and end of therapy. The two most prevalent diagnoses were dysphonia (43%) and phonasthenia (25%). Among the working population, the three most common occupational fields were education, health care, and child-care. The majority of the patients were women (74.3%), and the mean age of all patients was 55 years. A significant improvement in VHI scores was found after therapy, with an average decrease of 19 median points in total score and a substantial effect size (0.55). The number of sessions did not significantly correlate with the mean VHI score difference but had a weak correlation to the start and end scores. Increasing age correlated with a higher median VHI score both at the start and end of therapy but did not affect the average decrease between the two measurements.

  7. Parametric analyses of DEMO Divertor using two dimensional transient thermal hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Domalapally, Phani; Di Caro, Marco

    2018-05-01

    Among the options considered for cooling of the plasma facing components of the DEMO reactor, water cooling is a conservative option because of its high heat removal capability. In this work a two-dimensional transient thermal hydraulic code is developed to support the design of the divertor for the projected DEMO reactor with water as a coolant. The mathematical model accounts for transient 2D heat conduction in the divertor section. Temperature-dependent properties are used for more accurate analysis. Correlations for single-phase forced convection, partially developed subcooled nucleate boiling, fully developed subcooled nucleate boiling and film boiling are used to calculate the heat transfer coefficients on the channel side considering the swirl flow, wherein different correlations found in the literature are compared against each other. A correlation for the critical heat flux is used to estimate its limit for given flow conditions. This paper then investigates the results of the parametric analysis performed, whereby flow velocity, diameter of the coolant channel, thickness of the coolant pipe, thickness of the armor material, inlet temperature and operating pressure affect the behavior of the divertor under steady or transient heat fluxes. This code will help in understanding the effect of the basic parameters on the behavior of the divertor, to achieve a better design from a thermal hydraulic point of view.
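
    For the single-phase forced-convection regime mentioned above, one commonly used correlation is Dittus-Boelter; the sketch below evaluates it, though the paper compares several correlations from the literature and does not necessarily use this particular one.

```python
def dittus_boelter_htc(re, pr, k, d_h, heating=True):
    """Heat transfer coefficient h = Nu * k / D_h with Nu = 0.023 Re^0.8 Pr^n.

    n = 0.4 when the fluid is heated (the divertor case), 0.3 when cooled.
    re, pr: Reynolds and Prandtl numbers; k: thermal conductivity [W/(m K)];
    d_h: hydraulic diameter [m]. Returns h in W/(m^2 K).
    """
    n = 0.4 if heating else 0.3
    nu = 0.023 * re**0.8 * pr**n
    return nu * k / d_h
```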

  8. The SmartBioPhone, a point of care vision under development through two European projects: OPTOLABCARD and LABONFOIL.

    PubMed

    Ruano-López, Jesus M; Agirregabiria, Maria; Olabarria, Garbiñe; Verdoy, Dolores; Bang, Dang D; Bu, Minqiang; Wolff, Anders; Voigt, Anja; Dziuban, Jan A; Walczak, Rafał; Berganzo, Javier

    2009-06-07

    This paper describes how sixteen partners from eight different countries across Europe are working together in two EU projects focused on the development of a point of care system. This system uses disposable Lab on a Chip devices (LOCs) that carry out the complete assay, from sample preparation to result interpretation, on raw samples. The LOC is either embedded in a flexible motherboard with the form of a smartcard (Labcard) or in a Skinpatch. The first project, OPTOLABCARD, extended and tested the use of a thick photoresist (SU-8) as a structural material to manufacture LOCs by lamination. This project produced several examples where SU-8 microfluidic circuitry revealed itself as a viable material for several applications, such as the on-chip integration of a Polymerase Chain Reaction (PCR) that includes sample concentration, PCR amplification and optical detection of Salmonella spp. using clinical samples. The ongoing project, LABONFOIL, is using two results of OPTOLABCARD: the sample concentration method and the capability to fabricate flexible and ultra-thin LOCs based on sheets instead of wafers. This departure from the limited and expensive wafer format allows the development of a platform where LOCs are big enough to include all the sample preparation subcomponents at a low price. These LOCs will be used in four point of care applications: environment, food, cancer and drug monitoring. The user will obtain the results of the tests by connecting the Labcard/Skinpatch reader to a very popular interface (a smartphone), creating a new instrument, namely "The SmartBioPhone". All standard smartphone capabilities will be at the disposal of the point of care instrument by a simple click. In order to guarantee the future mass production of these LOCs, the project will develop large dry film equipment where LOCs will be fabricated at a low cost.

  9. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, and 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.
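
    As a simplified stand-in for the space-intersection step, the sketch below triangulates a ground point from conjugate image points using two 3x4 projection matrices (textbook frame-camera DLT); the paper's rigorous model is a push-broom model with interior and exterior orientation parameters, which this does not reproduce.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) space intersection of one ground point from two views.

    P1, P2: 3x4 projection matrices; uv1, uv2: measured image coordinates (u, v).
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # inhomogeneous 3D coordinates
```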

  10. Inverting x,y grid coordinates to obtain latitude and longitude in the van der Grinten projection

    NASA Technical Reports Server (NTRS)

    Rubincam, D. P.

    1980-01-01

    The latitude and longitude of a point on the Earth's surface are found from its x,y grid coordinates in the van der Grinten projection. The latitude is a solution of a cubic equation and the longitude a solution of a quadratic equation. Also, the x,y grid coordinates of a point on the Earth's surface can be found if its latitude and longitude are known by solving two simultaneous quadratic equations.

  11. Quantum canonical ensemble: A projection operator approach

    NASA Astrophysics Data System (ADS)

    Magnus, Wim; Lemmens, Lucien; Brosens, Fons

    2017-09-01

    Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function Z_N and the Helmholtz free energy F_N as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from F_{N+1} - F_N, as illustrated for a two-dimensional fermion gas.
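
    For orientation, the particle-number projection typically takes the following standard form (a sketch only; the paper's own conventions and its correlation-function expressions may differ):

```latex
% Standard particle-number projection (upper signs: non-interacting fermions, lower: bosons)
Z_N = \frac{1}{2\pi}\int_0^{2\pi} d\varphi\, e^{-iN\varphi}\,
      \mathrm{Tr}\!\left[e^{i\varphi \hat N} e^{-\beta \hat H}\right]
    = \frac{1}{2\pi}\int_0^{2\pi} d\varphi\, e^{-iN\varphi}
      \prod_{k}\bigl(1 \pm e^{\,i\varphi - \beta\varepsilon_k}\bigr)^{\pm 1},
\qquad F_N = -k_B T \ln Z_N .
```

    Here ε_k are single-particle energies, and the chemical potential is then estimated from F_{N+1} - F_N, as stated in the abstract.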

  12. Self similarity of two point correlations in wall bounded turbulent flows

    NASA Technical Reports Server (NTRS)

    Hunt, J. C. R.; Moin, P.; Moser, R. D.; Spalart, P. R.

    1987-01-01

    The structure of turbulence at a height y from a wall is affected by the local mean shear at y, by the direct effect of the wall on the eddies, and by the action of other eddies close to or far from the wall. Some researchers believe that a single one of these mechanisms is dominant, while others believe that these effects have to be considered together. It is important to understand the relative importance of these effects in order to develop closure models, for example for the dissipation or for the Reynolds stress equation, and to understand the eddy structure of cross-correlation functions and other measures. The specific objective was to examine the two-point correlation, R_vv, of the normal velocity component v near the wall in a turbulent channel flow and in a turbulent boundary layer. The preliminary results show that even in the inhomogeneous turbulent boundary layer, the two-point correlation function may have self-similar forms. The results also show that the effects of shear and of blocking are equally important in the form of correlation functions for spacing normal to the wall. But for spanwise spacing, it was found that the eddy structure is quite different in these near-wall flows. So any theory for turbulent structure must take both these effects into account.

  13. Noise and time delay induce critical point in a bistable system

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqiang; Nie, Linru; Yu, Lilong; Zhang, Xinyu

    2014-07-01

    We study the relaxation time Tc of a time-delayed bistable system driven by two cross-correlated Gaussian white noises, one multiplicative and the other additive. By means of numerical calculations, the results indicate that: (i) The combination of noise and time delay can induce two critical points of the relaxation time at a certain noise cross-correlation strength λ, under the condition that the multiplicative noise intensity D equals the additive noise intensity α. (ii) For each fixed D or α, there are two symmetrical critical points, located in the regions of positive and negative correlation, respectively. Namely, when λ equals the critical value λc, Tc is independent of the delay time and the curve of Tc versus τ is a horizontal line, but for |λ|>|λc| (or |λ|<|λc|), the relaxation time Tc monotonically increases (or decreases) with increasing delay time. (iii) For D = α, the dependence of λc on D forms two curves symmetric about the axis λc = 0; the critical value λc is close to zero for small D and approaches +1 or -1 for large D.

  14. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
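
    For reference, the basic zero-mean normalized cross-correlation on which such correlation measures build is sketched below; it omits the orientation and multiscale handling that define MOCC.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equal-size image patches."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```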

  15. Descriptive and Computer Aided Drawing Perspective on an Unfolded Polyhedral Projection Surface

    NASA Astrophysics Data System (ADS)

    Dzwierzynska, Jolanta

    2017-10-01

    The aim of this study is to develop a method for the direct and practical mapping of perspective onto an unfolded prismatic polyhedral projection surface. The considered perspective representation is a rectilinear central projection onto a surface composed of several flat elements. In the paper, two descriptive methods of drawing perspective are presented: direct and indirect. The graphical mapping of the effects of the representation is realized directly on the unfolded flat projection surface, owing to the projective and graphical connection between points displayed on the polyhedral background and their counterparts received on the unfolded flat surface. To significantly improve line construction, analytical algorithms are formulated. They draw the perspective image of a line segment passing through two different points given by their coordinates in a spatial coordinate system with axes x, y, z. Compared to other perspective construction methods used in computer vision and computer-aided design, which rely on information about points, our algorithms utilize data about lines, which appear very often in architectural forms. The ability to draw lines in the considered perspective enables drawing an edge perspective image of an architectural object. The application of changeable base elements of perspective, such as the horizon height and the station point location, enables drawing perspective images from different viewing positions. The analytical algorithms for drawing perspective images are formulated in Mathcad software; however, they can be implemented in the majority of computer graphics packages, which can make drawing perspective more efficient and easier. The representation presented in the paper, and the way of its direct mapping onto the flat unfolded projection surface, can find application in the presentation of architectural space in advertising and art.

  16. Conical Perspective Image of an Architectural Object Close to Human Perception

    NASA Astrophysics Data System (ADS)

    Dzwierzynska, Jolanta

    2017-10-01

    The aim of the study is to develop a computer-aided method of constructing a conical perspective of an architectural object that is close to human perception. The conical perspective considered in the paper is a central projection onto a projection surface that is a conical surface of revolution, or a fragment of it, while the centre of projection is a stationary point or a point moving on a circular path. The graphical mapping of the perspective representation is realized directly on an unrolled flat projection surface. The projective relation between a range of points on a line and the perspective image of the same range of points received on a cylindrical projection surface permitted the derivation of formulas for drawing the perspective. Next, analytical algorithms for drawing the perspective image of a straight line passing through any two points were formulated. This enabled drawing a perspective wireframe image of a given 3D object. The use of a moving viewpoint, as well as the application of changeable base elements of perspective as variables in the algorithms, enables drawing the conical perspective from different viewing positions. Owing to this, the perspective drawing method is universal. The algorithms are formulated and tested in Mathcad Professional software, but can be implemented in AutoCAD and in the majority of computer graphics packages, which makes drawing a perspective image more efficient and easier. The presented conical perspective representation, and the convenient method of its mapping directly onto the flat unrolled surface, can find application in numerous advertising and art presentations.

  17. Identification of underground mine workings with the use of global positioning system technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canty, G.A.; Everett, J.W.; Sharp, M.

    1998-12-31

    Identification of underground mine workings for well drilling is a difficult task given the limited resources available and lack of reliable information. Relic mine maps of questionable accuracy and difficulty in correlating the subsurface to the surface make the process of locating wells arduous. With the development of the global positioning system (GPS), specific locations on the earth can be identified with the aid of satellites. This technology can be applied to mine workings identification given a few necessary, precursory details. For an abandoned mine treatment project conducted by the University of Oklahoma, in conjunction with the Oklahoma Conservation Commission, a Trimble ProXL 8-channel GPS receiver was employed to locate specific points on the surface with respect to a mine map. A 1925 mine map was digitized into AutoCAD version 13 software. Surface features identified on the map, such as mine adits, were located and marked in the field using the GPS receiver. These features were then imported into AutoCAD and referenced with the same points drawn on the map. A rubber-sheeting program, Multric, was used to adjust the points so the map features correlated with the surface points. The correlation of these features allowed the map to be geo-referenced with the surface. Specific drilling points were located on the digitized map and assigned a latitude and longitude. The GPS receiver, using real-time differential correction, was used to locate these points in the field. This method was assumed to be relatively accurate, to within 5 to 15 feet.

  18. [Analysis of evaluation process of research projects submitted to the Fondo de Investigación Sanitaria, Spain].

    PubMed

    Prieto Carles, C; Gómez-Gerique, J; Gutiérrez Millet, V; Veiga de Cabo, J; Sanz Martul, E; Mendoza Hernández, J L

    2000-10-07

    It is now widely accepted that strengthening research is both essential and the right way to develop technological innovation, services and patents. However, such strengthening and the corresponding funding must rest on a fine and rigorous evaluation process, used as an assessment tool to which all research projects applying to a public or private call for proposals should be submitted, so that funding decisions are coherent with the investment to be made. To this end, the main aim of this work was to analyse the evaluation process traditionally carried out by the Fondo de Investigación Sanitaria (FIS) and to propose the most appropriate modifications. A sample of 431 research projects from the 1998 call was analysed. The evaluations from FIS and ANEP (National Evaluation and Prospective Agency) were themselves evaluated and scored (evaluation quality) in their main contents by 3 independent evaluators, and the results were compared between the two agencies at the internal (FIS) and external (FIS/ANEP) levels. The FIS evaluation had 20 commissions or areas of knowledge. The internal (FIS) analysis clearly showed that evaluation quality was correlated with the assigned commission (F = 3.71; p < 0.001) and with the duration of the proposed project (F = 3.42; p < 0.05), but not with the evaluator. On the other hand, the quality of the ANEP evaluation showed a correlated dependency on all three of these factors. Overall, the ANEP evaluation was better than the FIS evaluation for three-year projects, but showed no significant differences for one- or two-year projects. In all cases, evaluations with a negative final result (financing denied) showed a higher average quality than positive evaluations. The results obtained advise making some changes in the evaluation structure and reviewing the set of FIS technical commissions, with a view to improving the evaluation process.

  19. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
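
    One simple way to build a topographically corrected null model of the kind described above is to sample points uniformly over the triangulated surface itself, weighting triangles by their 3D area, rather than over its x-y projection. The sketch below does this under the assumption that the terrain is supplied as a triangle mesh; it is illustrative, not the authors' own simulation method.

```python
import numpy as np

def sample_points_on_surface(vertices, faces, n, seed=0):
    """Draw n points uniformly over a triangulated surface (not its x-y projection).

    vertices: (V, 3) array; faces: (F, 3) integer array of vertex indices.
    Triangles are chosen with probability proportional to their 3D area.
    """
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                                   # (F, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area = 0.5 * np.linalg.norm(cross, axis=1)
    chosen = rng.choice(len(faces), size=n, p=area / area.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    r1, r2 = rng.random(n), rng.random(n)
    swap = r1 + r2 > 1
    r1[swap], r2[swap] = 1 - r1[swap], 1 - r2[swap]
    t = tri[chosen]
    return t[:, 0] + r1[:, None] * (t[:, 1] - t[:, 0]) + r2[:, None] * (t[:, 2] - t[:, 0])
```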

  20. Blood glucose measurement in patients with suspected diabetic ketoacidosis: a comparison of Abbott MediSense PCx point-of-care meter values to reference laboratory values.

    PubMed

    Blank, Fidela S J; Miller, Moses; Nichols, James; Smithline, Howard; Crabb, Gillian; Pekow, Penelope

    2009-04-01

    The purpose of this study is to compare blood glucose levels measured by a point of care (POC) device to laboratory measurement using the same venous blood sample from patients with suspected diabetic ketoacidosis (DKA). A descriptive correlational design was used for this IRB-approved quality assurance project. The study site was the 50-bed BMC emergency department (ED), which has an annual census of over 100,000 patient visits. The convenience sample consisted of 54 blood samples from suspected DKA patients with orders for hourly blood draws for glucose measurement. Spearman correlations of the glucose POC values, reference lab values, and differences between the two were evaluated. A chi-square test was used to evaluate the association between acidosis status and FDA acceptability of POC values. Patient age range was 10-86 years; 63% were females; 46% had a final diagnosis of DKA. POC values underestimated glucose levels 93% of the time. There was a high correlation between the lab value and the magnitude of the difference (lab minus POC value), indicating that the higher the true glucose value, the greater the difference between the lab and the POC value. A chi-square test showed no overall association between acidosis and FDA acceptability. The POC values underestimated lab-reported glucose levels in 50 of 54 cases, even with the use of the same venous sample sent to the lab, which makes them highly unreliable for use in monitoring suspected DKA patients.
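
    The core of the statistical comparison can be reproduced in a few lines; the numbers below are made up purely to show the computation and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings (mg/dL) - illustrative only, not the study data.
lab = np.array([250.0, 310.0, 420.0, 180.0, 520.0, 610.0, 275.0, 330.0])
poc = np.array([235.0, 290.0, 395.0, 176.0, 480.0, 560.0, 260.0, 310.0])

diff = lab - poc
under = np.mean(poc < lab)              # fraction of POC underestimates
rho, p = stats.spearmanr(lab, diff)     # does the bias grow with the true value?
print(f"underestimates: {under:.0%}, Spearman rho(lab, lab-POC) = {rho:.2f} (p = {p:.3f})")
```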

  1. Two-Point Orientation Discrimination Versus the Traditional Two-Point Test for Tactile Spatial Acuity Assessment

    PubMed Central

    Tong, Jonathan; Mao, Oliver; Goldreich, Daniel

    2013-01-01

    Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when the two points are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small tip separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. We used precision digital readout calipers to administer two-interval forced-choice versions of both tasks to 24 neurologically healthy adults, on the fingertip, finger base, palm, and forearm. We used Bayesian adaptive testing to estimate the participants’ psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance levels even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of participants. The 95%-correct thresholds on the two tasks were similar, and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small spatial separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend the use of two-point orientation discrimination for neurological assessment. PMID:24062677

  2. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually four to seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to the 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
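
    The idea of mapping 3D object points to 2D image points purely by matrix multiplication can be illustrated with a standard homogeneous-projection fit. The sketch below is a generic direct-linear-transform estimate in Python/NumPy, not the authors' four-matrix multiplane formulation; all function and variable names are placeholders.

        import numpy as np

        def fit_projection_matrix(world_pts, image_pts):
            """Least-squares fit of a 3x4 homogeneous matrix P with (u, v, 1) ~ P @ (X, Y, Z, 1)."""
            A = []
            for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
                A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
                A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
            _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
            return vt[-1].reshape(3, 4)       # right singular vector of the smallest singular value

        def project(P, world_pts):
            """Map 3D calibration points to 2D pixel coordinates by matrix multiplication."""
            homog = np.hstack([world_pts, np.ones((len(world_pts), 1))]) @ P.T
            return homog[:, :2] / homog[:, 2:3]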

  3. Comparing Networks from a Data Analysis Perspective

    NASA Astrophysics Data System (ADS)

    Li, Wei; Yang, Jing-Yu

    To probe network characteristics, the two predominant approaches to network comparison are global property statistics and subgraph enumeration. However, they suffer from limited information and exhaustive computation. Here, we present an approach to compare networks from the perspective of data analysis. The approach first projects each node of the original network as a high-dimensional data point, so that the network is seen as a cloud of data points. The dispersion information of the principal component analysis (PCA) projection of the generated data cloud can then be used to distinguish networks. We applied this node projection method to yeast protein-protein interaction networks and Internet Autonomous System networks, two types of networks with several similar higher-order properties. The method can efficiently distinguish one from the other. Identical results for different datasets from independent sources also indicate that the method is a robust and universal framework.
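
    A minimal sketch of the node-projection idea, assuming the network is given as a dense adjacency matrix whose rows serve as the high-dimensional data points; this illustrates the general approach, not the authors' implementation.

        import numpy as np

        def pca_dispersion(adjacency, n_components=10):
            """Project each node (one row of the adjacency matrix) with PCA and return the
            fraction of variance carried by the leading components (the 'dispersion' profile)."""
            X = adjacency - adjacency.mean(axis=0)        # center the node data cloud
            _, s, _ = np.linalg.svd(X, full_matrices=False)
            var = s**2 / np.sum(s**2)
            return var[:n_components]

        # Two networks can then be compared through their dispersion profiles, e.g.
        # np.linalg.norm(pca_dispersion(A1) - pca_dispersion(A2)).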

  4. Baryonic and mesonic 3-point functions with open spin indices

    NASA Astrophysics Data System (ADS)

    Bali, Gunnar S.; Collins, Sara; Gläßle, Benjamin; Heybrock, Simon; Korcyl, Piotr; Löffler, Marius; Rödl, Rudolf; Schäfer, Andreas

    2018-03-01

    We have implemented a new way of computing three-point correlation functions. It is based on a factorization of the entire correlation function into two parts which are evaluated with open spin (and, to some extent, flavor) indices. This allows us to estimate the two contributions simultaneously for many different initial and final states and momenta, with little computational overhead. We explain this factorization as well as its efficient implementation in a new library which has been written to provide the necessary functionality on modern parallel architectures and on CPUs, including Intel's Xeon Phi series.

  5. Effect of point defects and disorder on structural phase transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toulouse, J.

    1997-06-01

    Since its beginning in 1986, the object of this project has been Structural Phase Transitions (SPTs) in real, as opposed to ideal, materials. The first stage of the study has been centered around the role of point defects in SPTs. Our intent was to use the previous knowledge we had acquired in the study of point defects in non-transforming insulators and apply it to the study of point defects in insulators undergoing phase transitions. In non-transforming insulators, point defects in low concentrations marginally affect the bulk properties of the host. It is nevertheless possible by resonance or relaxation methods to study the point defects themselves via their local motion. In transforming solids, however, close to a phase transition, atomic motions become correlated over very large distances; there, even point defects far removed from one another can undergo correlated motions which may strongly affect the transition behavior of the host. Near a structural transition, the elastic properties will be most strongly affected, so as to either raise or decrease the transition temperature, prevent the transition from taking place altogether, or simply modify its nature and the microstructure or domain structure of the resulting phase. One of the well-known practical examples is calcium-stabilized zirconia, in which the high-temperature cubic phase is stabilized at room temperature with greatly improved mechanical properties.

  6. Correlation Function Approach for Estimating Thermal Conductivity in Highly Porous Fibrous Materials

    NASA Technical Reports Server (NTRS)

    Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.

    2011-01-01

    Heat transport in highly porous fiber networks is analyzed via two-point correlation functions. Fibers are assumed to be long and thin to allow a large number of crossing points per fiber. The network is characterized by three parameters: the fiber aspect ratio, the porosity, and the anisotropy of the structure. We show that the effective thermal conductivity of the system can be estimated from knowledge of the porosity and the correlation lengths of the correlation functions obtained from a fiber structure image. As an application, the effects of the fiber aspect ratio and the network anisotropy on the thermal conductivity are studied.
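
    The two-point correlation function of a binary fiber-structure image, from which the correlation lengths entering the conductivity estimate can be read off, can be computed with an FFT autocorrelation. A minimal NumPy sketch, assuming a periodic 0/1 image; this is an illustration, not the authors' code.

        import numpy as np

        def two_point_correlation(image):
            """Radially averaged two-point probability S2(r) of a binary (0/1) image,
            estimated with a periodic FFT autocorrelation."""
            phase = image.astype(float)
            f = np.fft.fftn(phase)
            autocorr = np.fft.ifftn(f * np.conj(f)).real / phase.size   # S2 on the grid
            ny, nx = image.shape
            y, x = np.indices(image.shape)
            r = np.hypot(np.minimum(x, nx - x), np.minimum(y, ny - y)).astype(int)
            s2 = np.bincount(r.ravel(), weights=autocorr.ravel()) / np.bincount(r.ravel())
            return s2   # s2[0] is the solid volume fraction; its decay length ~ correlation length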

  7. Two-point spectral model for variable density homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Pal, Nairita; Kurien, Susan; Clark, Timothy; Aslangil, Denis; Livescu, Daniel

    2017-11-01

    We present a comparison of a two-point spectral closure model for buoyancy-driven variable-density homogeneous turbulence with Direct Numerical Simulation (DNS) data of the same system. We wish to understand how well a suitable spectral model might capture variable density effects and the transition to turbulence from an initially quiescent state. Following the BHRZ model developed by Besnard et al. (1990), the spectral model calculation computes the time evolution of two-point correlations of the density fluctuations with the momentum and the specific volume. These spatial correlations are expressed as functions of wavenumber k and denoted by a(k) and b(k), quantifying mass flux and turbulent mixing, respectively. We assess the accuracy of the model, relative to a full DNS of the complete hydrodynamical equations, using a and b as metrics. Work at LANL was performed under the auspices of the U.S. DOE Contract No. DE-AC52-06NA25396.

  8. Convex Hull Aided Registration Method (CHARM).

    PubMed

    Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian

    2017-09-01

    Non-rigid registration finds many applications, such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target point sets, respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that lie outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme by minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as control points to achieve non-rigid deformation, in the form of a thin-plate spline, of the entire source point set towards the target one. Experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art methods with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm shows higher computational efficiency compared to these methods.
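
    The RANSAC step that robustly estimates the rigid transform from matched feature pairs can be sketched with a standard Kabsch fit inside a RANSAC loop. This shows the generic technique only, not the CHARM implementation; the threshold and sample size below are illustrative assumptions.

        import numpy as np

        def kabsch(src, dst):
            """Best-fit rotation R and translation t such that dst ~ src @ R.T + t."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0] * (src.shape[1] - 1) + [d]) @ U.T
            return R, cd - cs @ R.T

        def ransac_rigid(src, dst, n_iter=500, thresh=0.05, rng=np.random.default_rng(0)):
            """Robustly estimate the rigid transform between matched point pairs."""
            best_inliers = np.zeros(len(src), dtype=bool)
            for _ in range(n_iter):
                idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
                R, t = kabsch(src[idx], dst[idx])
                err = np.linalg.norm(src @ R.T + t - dst, axis=1)
                inliers = err < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return kabsch(src[best_inliers], dst[best_inliers])     # refit on all inliers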

  9. High lateral resolution exploration using surface waves from noise records

    NASA Astrophysics Data System (ADS)

    Chávez-García, Francisco José; Yokoi, Toshiaki

    2016-04-01

    Determination of the shear-wave velocity structure at shallow depths is a constant necessity in engineering or environmental projects. Given the sensitivity of Rayleigh waves to shear-wave velocity, subsoil structure exploration using surface waves is frequently used. Methods such as the spectral analysis of surface waves (SASW) or multi-channel analysis of surface waves (MASW) determine phase velocity dispersion from surface waves generated by an active source recorded on a line of geophones. Using MASW, it is important that the receiver array be as long as possible to increase the precision at low frequencies. However, this implies that possible lateral variations are discarded. Hayashi and Suzuki (2004) proposed a different way of stacking shot gathers to increase lateral resolution. They combined strategies used in MASW with the common mid-point (CMP) summation currently used in reflection seismology. In their common mid-point with cross-correlation method (CMPCC), they cross-correlate traces sharing CMP locations before determining phase velocity dispersion. Another recent approach to subsoil structure exploration is based on seismic interferometry. It has been shown that cross-correlation of a diffuse field, such as seismic noise, allows the estimation of the Green's Function between two receivers. Thus, a virtual-source seismic section may be constructed from the cross-correlation of seismic noise records obtained in a line of receivers. In this paper, we use the seismic interferometry method to process seismic noise records obtained in seismic refraction lines of 24 geophones, and analyse the results using CMPCC to increase the lateral resolution of the results. Cross-correlation of the noise records allows reconstructing seismic sections with virtual sources at each receiver location. The Rayleigh wave component of the Green's Functions is obtained with a high signal-to-noise ratio. Using CMPCC analysis of the virtual-source seismic lines, we are able to identify lateral variations of phase velocity inside the seismic line, and increase the lateral resolution compared with results of conventional analysis.
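
    The core interferometric step, cross-correlating noise records at two geophones and stacking over time windows to approximate the inter-receiver Green's function, can be sketched as follows. This is illustrative NumPy only; the window length and normalization are assumptions, not the processing parameters used in the study.

        import numpy as np

        def noise_crosscorrelation(trace_a, trace_b, win_len):
            """Stack windowed cross-correlations of two noise records; the stack approximates
            the (surface-wave part of the) Green's function between the two receivers."""
            n_win = min(len(trace_a), len(trace_b)) // win_len
            stack = np.zeros(2 * win_len - 1)
            for k in range(n_win):
                a = trace_a[k * win_len:(k + 1) * win_len]
                b = trace_b[k * win_len:(k + 1) * win_len]
                a = (a - a.mean()) / (a.std() + 1e-12)      # simple per-window normalization
                b = (b - b.mean()) / (b.std() + 1e-12)
                stack += np.correlate(a, b, mode="full")
            return stack / n_win   # lags run from -(win_len-1) to +(win_len-1) samples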

  10. Kinematics of velocity and vorticity correlations in turbulent flow

    NASA Technical Reports Server (NTRS)

    Bernard, P. S.

    1983-01-01

    The kinematic problem of calculating second-order velocity moments from given values of the vorticity covariance is examined. Integral representation formulas for second-order velocity moments in terms of the two-point vorticity correlation tensor are derived. The special relationships existing between velocity moments in isotropic turbulence are expressed in terms of the integral formulas yielding several kinematic constraints on the two-point vorticity correlation tensor in isotropic turbulence. Numerical evaluation of these constraints suggests that a Gaussian curve may be the only form of the longitudinal velocity correlation coefficient which is consistent with the requirement of isotropy. It is shown that if this is the case, then a family of exact solutions to the decay of isotropic turbulence may be obtained which contains Batchelor's final period solution as a special case. In addition, the computed results suggest a method of approximating the integral representation formulas in general turbulent shear flows.

  11. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  12. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  13. Transverse correlations in triphoton entanglement: Geometrical and physical optics

    NASA Astrophysics Data System (ADS)

    Wen, Jianming; Xu, P.; Rubin, Morton H.; Shih, Yanhua

    2007-08-01

    The transverse correlation of triphoton entanglement generated within a single crystal is analyzed. Many of the interesting features of the transverse correlation arise from the spectral function F of the triphoton state produced in the parametric processes. One consequence of transverse effects of entangled states is quantum imaging, which is theoretically studied in photon counting measurements. Klyshko’s two-photon advanced-wave picture is found to be applicable to the multiphoton entanglement with some modifications. We found that in the two-photon coincidence counting measurement using triphoton entanglement, although the Gaussian thin lens equation (GTLE) holds, the image formed in coincidences is blurred and of poor quality. This is because the remaining transverse modes in the untouched beam are traced over. In the triphoton imaging experiments, two cases have been examined. When only one object and one thin lens are placed in the system, the GTLE holds as expected in the triphoton coincidences, and the effective distance between the lens and the imaging plane is the parallel combination of the two distances between the lens and the two detectors, weighted by wavelengths, analogous to the parallel combination of resistors in circuit theory. Only in this case is a point-to-point correspondence for forming an image achieved. However, when two objects or two lenses are inserted in the system, although the GTLEs are satisfied, in general a point-to-point correspondence for imaging cannot be established. Under certain conditions, two blurred images may be observed in the coincidence counts. We have also studied ghost interference-diffraction experiments using double slits as apertures in triphoton entanglement. It was found that when two double slits are used in two optical beams, the interference-diffraction patterns show unusual features compared with the two-photon case. This unusual behavior arises from destructive interference between two amplitudes for the two photons crossing the two double slits.

  14. The application of vector concepts on two skew lines

    NASA Astrophysics Data System (ADS)

    Alghadari, F.; Turmudi; Herman, T.

    2018-01-01

    The purpose of this study is to show how to apply vector concepts to two skew lines in three-dimensional (3D) coordinates and how this application can be used. Many mathematical concepts are functionally related to one another, but the relationship between vectors and 3D geometry is rarely exploited in the classroom. Studies show that female students have more difficulty learning 3D geometry than male students, a difference attributed to spatial intelligence. Connecting vector concepts to 3D geometry can help balance the learning achievement and mathematical ability of male and female students. Distances on a cube, cuboid, or pyramid can be expressed through the rectangular coordinates of points in space, and two points on a line define a vector. Two skew lines have a shortest distance and an angle between them. To calculate the shortest distance, each line is first represented by a direction vector using the position-vector concept; a normal vector to both direction vectors is obtained by the cross product; a vector connecting a point on one line to a point on the other is then formed; and the shortest distance is the scalar orthogonal projection of this connecting vector onto the normal vector. The angle is obtained by taking the dot product of the two direction vectors and applying the inverse cosine. Applications include mathematics learning and the orthographic projection method.
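
    The shortest-distance and angle computation described above reduces to a cross product, a dot product, and a scalar projection. A short NumPy sketch with an illustrative pair of skew lines taken from a unit cube (the specific points and directions are examples, not from the paper):

        import numpy as np

        def skew_line_distance_and_angle(p1, d1, p2, d2):
            """Shortest distance and angle (degrees) between lines p1 + t*d1 and p2 + s*d2."""
            n = np.cross(d1, d2)                                   # normal to both direction vectors
            dist = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)     # scalar projection onto n
            cos_theta = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
            return dist, np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

        # Example: a face diagonal and a non-intersecting top edge of a unit cube.
        p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
        p2, d2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
        print(skew_line_distance_and_angle(p1, d1, p2, d2))        # -> (1.0, 45.0)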

  15. Modeling of blob-hole correlations in GPI edge turbulence data

    NASA Astrophysics Data System (ADS)

    Myra, J. R.; Russell, D. A.; Zweben, S. J.

    2017-10-01

    Gas-puff imaging (GPI) observations made on NSTX have revealed two-point spatial correlation patterns in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this work, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in many respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored. The possibility of using the model to ascertain new information about edge turbulence is discussed. Work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG02-02ER54678.

  16. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  17. Stochastic transformation of points in polygons according to the Voronoi tessellation: microstructural description.

    PubMed

    Di Vito, Alessia; Fanfoni, Massimo; Tomellini, Massimo

    2010-12-01

    Starting from a stochastic two-dimensional process we studied the transformation of points in disks and squares following a protocol according to which at any step the island size increases proportionally to the corresponding Voronoi tessera. Two interaction mechanisms among islands have been dealt with: coalescence and impingement. We studied the evolution of the island density and of the island size distribution functions, in dependence on island collision mechanisms for both Poissonian and correlated spatial distributions of points. The island size distribution functions have been found to be invariant with the fraction of transformed phase for a given stochastic process. The n(Θ) curve describing the island decay has been found to be independent of the shape (apart from high correlation degrees) and interaction mechanism.

  18. The association between gas and galaxies - II. The two-point correlation function

    NASA Astrophysics Data System (ADS)

    Wilman, R. J.; Morris, S. L.; Jannuzi, B. T.; Davé, R.; Shone, A. M.

    2007-02-01

    We measure the two-point correlation function, ξAG, between galaxies and quasar absorption-line systems at z < 1, using the data set of Morris & Jannuzi on 16 lines-of-sight (LOS) with ultraviolet (UV) spectroscopy and galaxy multi-object spectroscopy (Paper I). The measurements are made in 2D redshift space out to π = 20h-1 Mpc (comoving) along the LOS and out to σ = 2h-1 Mpc projected; as a function of HI column density in the range NHI = 1013-1019cm-2, also for CIV absorption systems, and as a function of galaxy spectral type. This extends the absorber-galaxy pair analysis of Paper I. We find that the amplitude of the peak in ξAG at the smallest separations increases slowly as the lower limit on NHI is increased from 1013 to 1016cm-2, and then jumps sharply (albeit with substantial uncertainties) for NHI > 1017cm-2. For CIV absorbers, the peak strength of ξAG is roughly comparable to that of HI absorbers with NHI > 1016.5cm-2, consistent with the finding that the CIV absorbers are associated with strong HI absorbers. We do not reproduce the differences reported by Chen et al. between 1D ξAG measurements using galaxy subsamples of different spectral types. However, the full impact on the measurements of systematic differences in our samples is hard to quantify. We compare the observations with smoothed particle hydrodynamical (SPH) simulations and discover that in the observations ξAG is more concentrated to the smallest separations than in the simulations. The latter also display a `finger of god' elongation of ξAG along the LOS in redshift space, which is absent from our data, but similar to that found by Ryan-Weber for the cross-correlation of quasar absorbers and HI-emission-selected galaxies. The physical origin of these `fingers of god' is unclear, and we thus highlight several possible areas for further investigation.
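
    For orientation, a cross-correlation of this type is typically estimated from pair counts. The sketch below is a generic Davis-Peebles style estimator in 1D separation, included only to illustrate the idea; the paper's actual measurement is made in 2D redshift space with its own estimator and binning.

        import numpy as np

        def xi_ag(absorbers, galaxies, randoms, bins):
            """Davis-Peebles cross-correlation estimator xi_AG = (AG/AR)*(N_R/N_G) - 1,
            here in 1D line-of-sight separation for brevity (positions in comoving Mpc/h)."""
            sep_ag = np.abs(absorbers[:, None] - galaxies[None, :]).ravel()
            sep_ar = np.abs(absorbers[:, None] - randoms[None, :]).ravel()
            ag, _ = np.histogram(sep_ag, bins=bins)     # absorber-galaxy pair counts
            ar, _ = np.histogram(sep_ar, bins=bins)     # absorber-random pair counts
            return ag / np.maximum(ar, 1) * (len(randoms) / len(galaxies)) - 1.0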

  19. Mapping the current–current correlation function near a quantum critical point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prodan, Emil, E-mail: prodan@yu.edu; Bellissard, Jean

    2016-05-15

    The current–current correlation function is a useful concept in the theory of electron transport in homogeneous solids. The finite-temperature conductivity tensor as well as Anderson’s localization length can be computed entirely from this correlation function. Based on the critical behavior of these two physical quantities near the plateau–insulator or plateau–plateau transitions in the integer quantum Hall effect, we derive an asymptotic formula for the current–current correlation function, which enables us to make several theoretical predictions about its generic behavior. For the disordered Hofstadter model, we employ numerical simulations to map the current–current correlation function, obtain its asymptotic form near a critical point and confirm the theoretical predictions.

  20. Spatial correlation of atmospheric wind at scales relevant for large scale wind turbines

    NASA Astrophysics Data System (ADS)

    Bardal, L. M.; Sætran, L. R.

    2016-09-01

    Wind measurements a short distance upstream of a wind turbine can provide input for a feedforward wind turbine controller. Since the turbulent wind field will be different at the point/plane of measurement and the rotor plane the degree of correlation between wind speed at two points in space both in the longitudinal and lateral direction should be evaluated. This study uses a 2D array of mast mounted anemometers to evaluate cross-correlation of longitudinal wind speed. The degree of correlation is found to increase with height and decrease with atmospheric stability. The correlation is furthermore considerably larger for longitudinal separation than for lateral separation. The integral length scale of turbulence is also considered.
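
    A minimal sketch of the basic quantity involved: the zero-lag cross-correlation coefficient between the fluctuating longitudinal wind speeds recorded at two anemometers. This is illustrative only; the study's exact normalization and conditioning on atmospheric stability are not reproduced here.

        import numpy as np

        def wind_cross_correlation(u1, u2):
            """Zero-lag cross-correlation coefficient of the fluctuating parts of two
            simultaneously sampled wind-speed time series."""
            f1, f2 = u1 - u1.mean(), u2 - u2.mean()
            return np.mean(f1 * f2) / (f1.std() * f2.std())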

  1. Correlational Neural Networks.

    PubMed

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
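
    The correlation term that CorrNet adds to the usual reconstruction losses can be written compactly; the NumPy sketch below shows that term only, with h1 and h2 standing for the projections of the two views into the common subspace. It illustrates the objective, not the authors' implementation.

        import numpy as np

        def correlation_loss(h1, h2, eps=1e-8):
            """Negative sum of per-dimension correlations between the common-subspace
            projections h1 and h2 of the two views (to be minimized during training)."""
            a = h1 - h1.mean(axis=0)
            b = h2 - h2.mean(axis=0)
            corr = np.sum(a * b, axis=0) / (np.sqrt(np.sum(a**2, axis=0) * np.sum(b**2, axis=0)) + eps)
            return -np.sum(corr)

        # In CorrNet-style training this term is added to the two reconstruction errors, so the
        # common representation both reconstructs the views and keeps them correlated.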

  2. Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure

    PubMed Central

    Skvortsova, Elena B.; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm2) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as a part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of spatial dynamics of soil microbial activity. PMID:26010779

  3. Universal spatial correlation functions for describing and reconstructing soil microstructure.

    PubMed

    Karsanina, Marina V; Gerke, Kirill M; Skvortsova, Elena B; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of its physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between biologically active soil root zone and atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses such metrics as pore size and pore-size distributions and thin section-derived morphological indicators. However, these descriptors provide only limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin-sections (2.21×2.21 cm2) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as a part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, such as for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of spatial dynamics of soil microbial activity.

  4. Culture-Specific Testing: Part 1.

    ERIC Educational Resources Information Center

    Williams, Robert L., Ed.

    1981-01-01

    In five articles provides a rationale for the development of culturally specific tests, presents research on their use, and discusses clinical uses. Focuses on two Afro-centric projective tests: The Thematic Apperception Test and Themes Concerning Blacks. Criticizes use of traditional projective tests and points out viable alternatives. (JAC)

  5. Integration of imagery and cartographic data through a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1983-01-01

    Several disparate data types are integrated by using control points as the basis for spatially registering the data to a map base. The data are reprojected to match the coordinates of the reference UTM (Universal Transverse Mercator) map projection, as expressed in lines and samples. Control point selection is the most critical aspect of integrating the Thematic Mapper Simulator MSS imagery with the cartographic data. It is noted that control points chosen from the imagery are subject to error from mislocated points, either points that did not correlate well to the reference map or minor pixel offsets caused by interactive cursoring errors. Errors are also introduced in map control points when points are improperly located and digitized, leading to inaccurate latitude and longitude coordinates. Nonsystematic aircraft platform variations, such as yaw, pitch, and roll, affect the spatial fidelity of the imagery in comparison with the quadrangles. Features in adjacent flight paths do not always correspond properly owing to the systematic panorama effect and alteration of flightline direction, as well as platform variations.
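
    The registration step implied above, fitting a mapping from image (line, sample) coordinates to UTM map coordinates from control points and flagging poorly fitting points, can be sketched with a least-squares affine model. This is a generic illustration under an affine assumption, not the reprojection actually used in the study.

        import numpy as np

        def fit_affine(image_xy, map_xy):
            """Least-squares affine transform mapping (line, sample) to UTM (easting, northing)."""
            A = np.hstack([image_xy, np.ones((len(image_xy), 1))])
            coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)    # 3x2 matrix of affine parameters
            residuals = A @ coeffs - map_xy
            return coeffs, np.linalg.norm(residuals, axis=1)       # large residuals flag bad control points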

  6. Two-point correlators revisited: fast and slow scales in multifield models of inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghersi, José T. Gálvez; Frolov, Andrei V., E-mail: joseg@sfu.ca, E-mail: frolov@sfu.ca

    2017-05-01

    We study the structure of two-point correlators of the inflationary field fluctuations in order to improve the accuracy and efficiency of the existing methods to calculate primordial spectra. We present a description motivated by the separation of the fast and slow evolving components of the spectrum which is based on Cholesky decomposition of the field correlator matrix. Our purpose is to rewrite all the relevant equations of motion in terms of slowly varying quantities. This is important in order to consider the contribution from high-frequency modes to the spectrum without affecting computational performance. The slow-roll approximation is not required to reproduce the main distinctive features in the power spectrum for each specific model of inflation.

  7. Developing Telecommunication Linkages for Microcomputer-Aided Instruction. TDC Research Report No. 1.

    ERIC Educational Resources Information Center

    Blinn, Charles R.; And Others

    A project undertaken at the University of Minnesota evaluated two microcomputer teletraining systems (audiographic conferencing) to determine the effectiveness of this technology for point-to-point and multipoint distance education. System design requirements included broadcast keystrokes, error checking, master-slave linkages, simultaneous voice…

  8. Detecting correlation changes in multivariate time series: A comparison of four non-parametric change point detection methods.

    PubMed

    Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva

    2017-06-01

    Change point detection in multivariate time series is a complex task since, next to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving-window approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in the case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
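
    The general moving-window intuition behind correlation change detection can be illustrated with a simple statistic: estimate the correlation matrix in the windows before and after each time index and flag indices where the two differ strongly. The sketch below shows this generic idea only; it is not DeCon, E-divisive, Multirank, or KCP, and the window length is an arbitrary assumption.

        import numpy as np

        def correlation_change_score(X, win=50):
            """For a (time x variables) array X, return a score per time index measuring how
            much the correlation matrix differs between the preceding and following window."""
            T = len(X)
            scores = np.full(T, np.nan)
            for t in range(win, T - win):
                c_before = np.corrcoef(X[t - win:t], rowvar=False)
                c_after = np.corrcoef(X[t:t + win], rowvar=False)
                scores[t] = np.linalg.norm(c_before - c_after)   # Frobenius norm of the change
            return scores   # peaks suggest candidate correlation change points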

  9. Correlation of the ionisation response at selected points of IC sensitive regions with SEE sensitivity parameters under pulsed laser irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordienko, A V; Mavritskii, O B; Egorov, A N

    2014-12-31

    The statistics of the ionisation response amplitude measured at selected points and their surroundings within sensitive regions of integrated circuits (ICs) under focused femtosecond laser irradiation is obtained for samples chosen from large batches of two types of ICs. A correlation between these data and the results of full-chip scanning is found for each type. The criteria for express validation of IC single-event effect (SEE) hardness based on ionisation response measurements at selected points are discussed. (laser applications and other topics in quantum electronics)

  10. 2-Point microstructure archetypes for improved elastic properties

    NASA Astrophysics Data System (ADS)

    Adams, Brent L.; Gao, Xiang

    2004-01-01

    Rectangular models of material microstructure are described by their 1- and 2-point (spatial) correlation statistics of placement of local state. In the procedure described here the local state space is described in discrete form; and the focus is on placement of local state within a finite number of cells comprising rectangular models. It is illustrated that effective elastic properties (generalized Hashin Shtrikman bounds) can be obtained that are linear in components of the correlation statistics. Within this framework the concept of an eigen-microstructure within the microstructure hull is useful. Given the practical innumerability of the microstructure hull, however, we introduce a method for generating a sequence of archetypes of eigen-microstructure, from the 2-point correlation statistics of local state, assuming that the 1-point statistics are stationary. The method is illustrated by obtaining an archetype for an imaginary two-phase material where the objective is to maximize the combination C_{xxxx}^{*} + C_{xyxy}^{*}

  11. Innovations in the Analysis of Chandra-ACIS Observations

    NASA Astrophysics Data System (ADS)

    Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.

    2010-05-01

    As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.

  12. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel optimal two-impulse trajectories which may originate at the earth or a point in space and fuel optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. The Davidon's Variance Method is used both in the accelerated gradient projection method and the outer loop of a 3-impulse trajectory optimization problem.

  13. Weyl semimetals in optical lattices: moving and merging of Weyl points, and hidden symmetry at Weyl points

    PubMed Central

    Hou, Jing-Min; Chen, Wei

    2016-01-01

    We propose to realize Weyl semimetals in a cubic optical lattice. We find that there exist three distinct Weyl semimetal phases in the cubic optical lattice for different parameter ranges. One of them has two pairs of Weyl points and the other two have one pair of Weyl points in the Brillouin zone. For a slab geometry with (010) surfaces, the Fermi arcs connecting the projections of Weyl points with opposite topological charges on the surface Brillouin zone are presented. By adjusting the parameters, the Weyl points can move in the Brillouin zone. Interestingly, for two pairs of Weyl points, as one pair of them meets and annihilates, the original two Fermi arcs connect into one. As the remaining Weyl points annihilate further, the Fermi arc vanishes and a gap is opened. Furthermore, we find that there always exists a hidden symmetry at the Weyl points, regardless of where they are located in the Brillouin zone. The hidden symmetry has an antiunitary operator whose square is −1. PMID:27644114

  14. Multi-point estimation of total energy expenditure: a comparison between zinc-reduction and platinum-equilibration methodologies.

    PubMed

    Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V

    2003-12-15

    Reducing water to hydrogen gas with zinc or uranium metal to determine the D/H ratio is both tedious and time-consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained with the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrate that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the multi-point technique. Furthermore, D equilibration time was significantly reduced.

  15. Scalar rate correlation at a turbulent liquid free surface - A two-regime correlation for high Schmidt numbers

    NASA Technical Reports Server (NTRS)

    Khoo, Boo-Cheong; Sonin, Ain A.

    1992-01-01

    An experimental correlation is derived for gas absorption at a turbulent, shear-free liquid interface. The correlation is expressed in terms of the liquid-side turbulence intensity, liquid-side macroscale, and the properties of the diffusing gas and solvent. The transfer coefficient increases linearly with rms velocity up to a point where the eddy Reynolds number reaches a critical (Schmidt number dependent) value. At higher velocities, there is a more rapid linear rise. The slope of the lower Reynolds number region is proportional to the square root of the diffusivity; at Reynolds numbers much higher than that of the break point, the slope becomes independent of diffusivity.

  16. Scale-dependent cyclone-anticyclone asymmetry in a forced rotating turbulence experiment

    NASA Astrophysics Data System (ADS)

    Gallet, B.; Campagne, A.; Cortet, P.-P.; Moisy, F.

    2014-03-01

    We characterize the statistical and geometrical properties of the cyclone-anticyclone asymmetry in a statistically steady forced rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously inject velocity fluctuations towards the center of a tank mounted on a rotating platform. We first characterize the cyclone-anticyclone asymmetry from conventional single-point vorticity statistics. We propose a phenomenological model to explain the emergence of the asymmetry in the experiment, from which we predict scaling laws for the root-mean-square velocity in good agreement with the experimental data. We further quantify the cyclone-anticyclone asymmetry using a set of third-order two-point velocity correlations. We focus on the correlations which are nonzero only if the cyclone-anticyclone symmetry is broken. They offer two advantages over single-point vorticity statistics: first, they are defined from velocity measurements only, so an accurate resolution of the Kolmogorov scale is not required; second, they provide information on the scale-dependence of the cyclone-anticyclone asymmetry. We compute these correlation functions analytically for a random distribution of independent identical vortices. These model correlations describe well the experimental ones, indicating that the cyclone-anticyclone asymmetry is dominated by the large-scale long-lived cyclones.

  17. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by illuminating a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.

  18. Two-point correlation function for Dirichlet L-functions

    NASA Astrophysics Data System (ADS)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.

  19. Dynamical correlation functions of the quadratic coupling spin-Boson model

    NASA Astrophysics Data System (ADS)

    Zheng, Da-Chuan; Tong, Ning-Hua

    2017-06-01

    The spin-boson model with quadratic coupling is studied using the bosonic numerical renormalization group method. We focus on the dynamical auto-correlation functions C_O(ω), with the operator O taken as σ_x, σ_z, and X, respectively. In the weak-coupling regime α < α_c, these functions show power-law ω-dependence in the small-frequency limit, with the powers 1+2s, 1+2s, and s, respectively. At the critical point α = α_c of the boson-unstable quantum phase transition, the critical exponents y_O of these correlation functions are obtained as y_{σx} = y_{σz} = 1−2s and y_X = −s, respectively. Here s is the bath index and X is the boson displacement operator. Close to the spin flip point, the high-frequency peak of C_{σx}(ω) is broadened significantly and the line shape changes qualitatively, showing enhanced dephasing at the spin flip point. Project supported by the National Key Basic Research Program of China (Grant No. 2012CB921704), the National Natural Science Foundation of China (Grant No. 11374362), the Fundamental Research Funds for the Central Universities, China, and the Research Funds of Renmin University of China (Grant No. 15XNLQ03).

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, L.F.

    Calculations for the two-point correlation functions in the scaling limit for two statistical models are presented. In Part I, the Ising model with a linear defect is studied for T < T_c and T > T_c. The transfer matrix method of Onsager and Kaufman is used. The energy-density correlation is given by functions related to the modified Bessel functions. The dispersion expansions for the spin-spin correlation functions are derived. The dominant behavior for large separations at T not equal to T_c is extracted. It is shown that these expansions lead to systems of Fredholm integral equations. In Part II, the electric correlation function of the eight-vertex model for T < T_c is studied. The eight-vertex model decouples into two independent Ising models when the four-spin coupling vanishes. To first order in the four-spin coupling, the electric correlation function is related to a three-point function of the Ising model. This relation is systematically investigated and the full dispersion expansion (to first order in the four-spin coupling) is obtained. The result is a new kind of structure which, unlike those of many solvable models, is apparently not expressible in terms of linear integral equations.

  1. Optical projectors simulate human eyes to establish operator's field of view

    NASA Technical Reports Server (NTRS)

    Beam, R. A.

    1966-01-01

    Device projects visual pattern limits of the field of view of an operator as his eyes are directed at a given point on a control panel. The device, which consists of two projectors, provides instant evaluation of visual ability at a point on a panel.

  2. Localised burst reconstruction from space-time PODs in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Garcia-Gutierrez, Adrian; Jimenez, Javier

    2017-11-01

    The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long-enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Re_τ = 1880, minimal for y/h ≲ 0.25, PODs are computed from the two-point spectral-density tensor Φ(k_x, k_z, y, y′, ω). They are Fourier components in x, z and time, and depend on y and on the temporal frequency ω, or, equivalently, on the convection velocity c = ω/k_x. Although the latter depends on y, a spatially and temporally localised 'burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.

  3. Broadband Tomography System: Direct Time-Space Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    Biagi, E.; Capineri, Lorenzo; Castellini, Guido; Masotti, Leonardo F.; Rocchi, Santina

    1989-10-01

    In this paper a new ultrasound tomographic imaging algorithm is presented. A complete laboratory system was built to test the algorithm under experimental conditions. The proposed system is based on a physical model consisting of a bidimensional distribution of single scattering elements. Multiple scattering is neglected, so the Born approximation is assumed. This tomographic technique only requires two orthogonal scanning sections. For each rotational position of the object, data are collected by means of the complete data set method in transmission mode. After numeric envelope detection, the received signals are back-projected into the space domain through a scalar function. The reconstruction of each scattering element is accomplished by correlating the ultrasound time of flight and attenuation with the points' locus given by the possible positions of the scattering element. The points' locus is represented by an ellipse with the foci located at the transmitter and receiver positions. In the image matrix the ellipses' contributions are coherently summed at the position of the scattering element. Computer simulations of cylindrically shaped objects demonstrate the performance of the reconstruction algorithm. Preliminary experimental results show the features of the laboratory system. On the basis of these results, an experimental procedure to test the reliability and repeatability of ultrasonic measurements on the human carotid vessel is proposed.

  4. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
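
    As a point of reference, the projection the abstract refers to can be written ξ_0(r) = (1/2π²) ∫ dk k² P(k) j_0(kr). The sketch below evaluates this integral by brute-force quadrature with a toy power spectrum (the spectrum, wavenumber sampling, and separations are assumptions for illustration only); this direct route over the oscillating Bessel function is exactly what 2-FAST is designed to avoid, but it makes the quantity being computed explicit.

    import numpy as np
    from scipy.special import spherical_jn

    def xi_ell(r, k, pk, ell=0):
        """Brute-force quadrature of xi_ell(r) = (1/2 pi^2) * Int dk k^2 P(k) j_ell(k r)."""
        integrand = k**2 * pk * spherical_jn(ell, np.outer(r, k))
        return np.trapz(integrand, k, axis=1) / (2.0 * np.pi**2)

    k = np.logspace(-4, 1, 4000)            # wavenumbers in h/Mpc (assumed sampling)
    pk = 1e4 * k / (1.0 + (k / 0.02)**3)    # toy power spectrum, NOT a real cosmology
    r = np.linspace(10.0, 200.0, 96)        # separations in Mpc/h (assumed)
    xi0 = xi_ell(r, k, pk, ell=0)           # monopole of the configuration-space 2PCF
    print(xi0[:5])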

  5. Correlation between capillary oxygen saturation and small intestinal wall thickness in the equine colic patient

    PubMed Central

    Mirle, Elisabeth; Wogatzki, Anna; Kunzmann, Robert; Schoenfelder, Axel M; Litzke, Lutz F

    2017-01-01

    The surgical evaluation of haemorrhagic infarcted intestine and the decision for or against bowel resection require a lot of experience and are subjective. The aim of this prospective, clinical study was to examine the correlation between oxygen saturation and small intestinal wall (IW) thickness, using two objective methods. In 22 colicky horses, the blood flow, oxygen saturation and relative amount of haemoglobin were measured intraoperatively via laser Doppler and white light spectroscopy (O2C, oxygen to see, LEA Medizintechnik) at six measuring points (MPs) in small and large intestines. Furthermore, the IW thickness was measured ultrasonographically. Nine of 22 horses had an increased small IW thickness greater than 4 mm (Freeman 2002, Scharner and others 2002, le Jeune and Whitcomb 2014) at measuring point 1 (MP1) (strangulated segment), four horses had a thickened bowel wall at measuring point 3 (MP3) (poststenotic) and one at measuring point 2 (MP2). The oxygen saturation was 0 at MP1 in six horses, at MP3 in two horses and at MP2 (prestenotic) in one. Oxygen saturation and small IW thickness were independent of each other at MP1 and MP2. At MP3, the two parameters were negatively correlated. In summary, it is not possible to draw conclusions about oxygen saturation based on IW thickness. PMID:28761667

  6. Correlation between capillary oxygen saturation and small intestinal wall thickness in the equine colic patient.

    PubMed

    Mirle, Elisabeth; Wogatzki, Anna; Kunzmann, Robert; Schoenfelder, Axel M; Litzke, Lutz F

    2017-01-01

    The surgical evaluation of haemorrhagic infarcted intestine and the decision for or against bowel resection require a lot of experience and are subjective. The aim of this prospective, clinical study was to examine the correlation between oxygen saturation and small intestinal wall (IW) thickness, using two objective methods. In 22 colicky horses, the blood flow, oxygen saturation and relative amount of haemoglobin were measured intraoperatively via laser Doppler and white light spectroscopy (O2C, oxygen to see, LEA Medizintechnik) at six measuring points (MPs) in small and large intestines. Furthermore, the IW thickness was measured ultrasonographically. Nine of 22 horses had an increased small IW thickness greater than 4 mm (Freeman 2002, Scharner and others 2002, le Jeune and Whitcomb 2014) at measuring point 1 (MP1) (strangulated segment), four horses had a thickened bowel wall at measuring point 3 (MP3) (poststenotic) and one at measuring point 2 (MP2). The oxygen saturation was 0 at MP1 in six horses, at MP3 in two horses and at MP2 (prestenotic) in one. Oxygen saturation and small IW thickness were independent of each other at MP1 and MP2. At MP3, the two parameters were negatively correlated. In summary, it is not possible to draw conclusions about oxygen saturation based on IW thickness.

  7. Using cross-correlations of random wavefields for surface waves tomography and structural health monitoring.

    NASA Astrophysics Data System (ADS)

    Sabra, K.

    2006-12-01

    The random nature of noise and scattered fields tends to suggest limited utility. Indeed, seismic or acoustic fields from random sources or scatterers are often considered to be incoherent, but there is some coherence between two sensors that receive signals from the same individual source or scatterer. An estimate of the Green's function (or impulse response) between two points can be obtained from the cross-correlation of random wavefields recorded at these two points. Recent theoretical and experimental studies in ultrasonics, underwater acoustics, structural monitoring and seismology have investigated this technique in various environments and frequency ranges. These results provide a means for passive imaging using only the random wavefields, without the use of active sources. The coherent wavefronts emerge from a correlation process that accumulates contributions over time from random sources whose propagation paths pass through both receivers. Results will be presented from experiments using ambient noise cross-correlations for the following applications: 1) passive surface waves tomography from ocean microseisms and 2) structural health monitoring of marine and airborne structures embedded in turbulent flow.
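
    A minimal numerical sketch of the central idea, that averaging cross-correlations of diffuse noise recorded at two points builds up the travel time (and hence an estimate of the Green's function) between them, is given below; the synthetic geometry, segment lengths, and noise model are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    nseg, nt = 200, 2048        # number of noise segments and samples per segment (assumed)
    lag_true = 25               # inter-receiver delay in samples (assumed)

    acc = np.zeros(2 * nt - 1)  # accumulated cross-correlation over all segments
    for _ in range(nseg):
        src = rng.standard_normal(nt)                                   # random noise source
        rec_a = src + 0.2 * rng.standard_normal(nt)                     # receiver A record
        rec_b = np.roll(src, lag_true) + 0.2 * rng.standard_normal(nt)  # receiver B record
        acc += np.correlate(rec_b, rec_a, mode="full")

    lags = np.arange(-nt + 1, nt)
    print("recovered delay (samples):", lags[np.argmax(acc)])  # emerges at ~lag_true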

  8. Statistical Study of Turbulence: Spectral Functions and Correlation Coefficients

    NASA Technical Reports Server (NTRS)

    Frenkiel, Francois N.

    1958-01-01

    In reading the publications on turbulence of different authors, one often runs the risk of confusing the various correlation coefficients and turbulence spectra. We have made a point of defining, by appropriate concepts, the differences which exist between these functions. Besides, we introduce in the symbols a few new characteristics of turbulence. In the first chapter, we study some relations between the correlation coefficients and the different turbulence spectra. Certain relations are given by means of demonstrations which could be called intuitive rather than mathematical. In this way we demonstrate that the correlation coefficients between the simultaneous turbulent velocities at two points are identical, whether studied in Lagrange's or in Euler's systems. We then consider new spectra of turbulence, obtained by study of the simultaneous velocities along a straight line of given direction. We determine some relations between these spectra and the correlation coefficients. Examining the relation between the spectrum of the turbulence measured at a fixed point and the longitudinal-correlation curve given by G. I. Taylor, we find that this equation is exact only when the coefficient is very small.

  9. Observed Type II supernova colours from the Carnegie Supernova Project-I

    NASA Astrophysics Data System (ADS)

    de Jaeger, T.; Anderson, J. P.; Galbany, L.; González-Gaitán, S.; Hamuy, M.; Phillips, M. M.; Stritzinger, M. D.; Contreras, C.; Folatelli, G.; Gutiérrez, C. P.; Hsiao, E. Y.; Morrell, N.; Suntzeff, N. B.; Dessart, L.; Filippenko, A. V.

    2018-06-01

    We present a study of observed Type II supernova (SN II) colours using optical/near-infrared photometric data from the Carnegie Supernovae Project-I. We analyse four colours (B - V, u - g, g - r, and g - Y) and find that SN II colour curves can be described by two linear regimes during the photospheric phase. The first (s1, colour) is steeper and has a median duration of ˜40 d. The second, shallower slope (s2, colour) lasts until the end of the `plateau' (˜80 d). The two slopes correlate in the sense that steeper initial colour curves also imply steeper colour curves at later phases. As suggested by recent studies, SNe II form a continuous population of objects from the colour point of view as well. We investigate correlations between the observed colours and a range of photometric and spectroscopic parameters including the absolute magnitude, the V-band light-curve slopes, and metal-line strengths. We find that less luminous SNe II appear redder, a trend that we argue is not driven by uncorrected host-galaxy reddening. While there is significant dispersion, we find evidence that redder SNe II (mainly at early epochs) display stronger metal-line equivalent widths. Host-galaxy reddening does not appear to be a dominant parameter, neither driving observed trends nor dominating the dispersion in observed colours. Intrinsic SN II colours are most probably dominated by photospheric temperature differences, with progenitor metallicity possibly playing a minor role. Such temperature differences could be related to differences in progenitor radius, together with the presence or absence of circumstellar material close to the progenitor stars.

  10. Individual set-point and gain of emmetropization in chickens.

    PubMed

    Tepelus, Tudor Cosmin; Schaeffel, Frank

    2010-01-01

    During the developmental process of emmetropization evidence shows that visual feedback guides the eye as it approaches a refractive state close to zero, or slightly hyperopic. How this "set-point" is internally defined, in the presence of continuous shifts of the focal plane with different viewing distances and accommodation, remains unclear. Minimizing defocus blur over time should produce similar end-point refractions in different individuals. However, we found that individual chickens display considerable variability in their set-point refractive states, despite that they all had the same visual experience. This variability is not random since the refractions in both eyes were highly correlated - even though it is known that they can emmetropize independently. Furthermore, if chicks underwent a period of experimentally induced ametropia, they returned to their individual set-point refractions during recovery (correlation of the refractions before treatment versus after recovery: n=19 chicks, 38 eyes, left eyes: slope 1.01, R=0.860; right eyes: slope 0.85, R=0.610, p<0.001, linear regression). Also, the induced deprivation myopia was correlated in both eyes (n=18 chicks, 36 eyes, p<0.01, orthogonal regression). If chicks were treated with spectacle lenses, the compensatory changes in refraction were, on average, appropriate but individual chicks displayed variable responses. Again, the refractions of both eyes remained correlated (negative lenses, n=18 chicks, 36 eyes, slope 0.89, R=0.504, p<0.01, positive lenses: n=21 chicks, 42 eyes, slope 1.14, R=0.791, p<0.001). The amount of deprivation myopia that developed in two successive treatment cycles, with an intermittent period of recovery, was not correlated; only vitreous chamber growth was almost significantly correlated in both cycles (n=7 chicks, 14 eyes; p<0.05). The amounts of ametropia and vitreous chamber changes induced in two successive cycles of treatment, first with lenses and then with diffusers, were also not correlated, suggesting that the "gains of lens compensation" are different from those in deprivation myopia. In summary, (1) there appears to be an endogenous, possibly genetic, definition of the set-point of emmetropization in each individual, which is similar in both eyes, (2) visual conditions that induce ametropia produce variable changes in refractions, with high correlations between both eyes, (3) overall, the "gain of emmetropization" appears only weakly controlled by endogenous factors.

  11. Converging Redundant Sensor Network Information for Improved Building Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dale Tiller; D. Phil; Gregor Henze

    2007-09-30

    This project investigated the development and application of sensor networks to enhance building energy management and security. Commercial, industrial and residential buildings often incorporate systems used to determine occupancy, but current sensor technology and control algorithms limit the effectiveness of these systems. For example, most of these systems rely on single monitoring points to detect occupancy, when more than one monitoring point could improve system performance. Phase I of the project focused on instrumentation and data collection. During the initial project phase, a new occupancy detection system was developed, commissioned and installed in a sample of private offices and open-plan office workstations. Data acquisition systems were developed and deployed to collect data on space occupancy profiles. Phase II of the project demonstrated that a network of several sensors provides a more accurate measure of occupancy than is possible using systems based on single monitoring points. This phase also established that analysis algorithms could be applied to the sensor network data stream to improve the accuracy of system performance in energy management and security applications. In Phase III of the project, the sensor network from Phase I was complemented by a control strategy developed based on the results from the first two project phases: this controller was implemented in a small sample of work areas, and applied to lighting control. Two additional technologies were developed in the course of completing the project. A prototype web-based display that portrays the current status of each detector in a sensor network monitoring building occupancy was designed and implemented. A new capability that enables occupancy sensors in a sensor network to dynamically set the 'time delay' interval based on ongoing occupant behavior in the space was also designed and implemented.

  12. Performance Based Logistics... What’s Stopping Us

    DTIC Science & Technology

    2016-03-01

    performance-based life cycle product support, where outcomes are acquired through performance-based arrangements that deliver Warfighter requirements and...correlates to the acquisition life cycle framework: spend the time and effort to identify and lock in the PBL requirements; conduct an analysis to...PDASD[L&MR]) on PBL strategies. The study, Project Proof Point: A Study to Determine the Impact of Performance Based Logistics (PBL) on Life Cycle

  13. Comparison of projection neurons in the pontine nuclei and the nucleus reticularis tegmenti pontis of the rat.

    PubMed

    Schwarz, C; Thier, P

    1996-12-16

    Dendritic features of identified projection neurons in two precerebellar nuclei, the pontine nuclei (PN) and the nucleus reticularis tegmenti pontis (NRTP) were established by using a combination of retrograde tracing (injection of fluorogold or rhodamine labelled latex micro-spheres into the cerebellum) with subsequent intracellular filling (lucifer yellow) in fixed slices of pontine brainstem. A multivariate analysis revealed that parameters selected to characterize the dendritic tree such as size of dendritic field, number of branching points, and length of terminal dendrites did not deviate significantly between different regions of the PN and the NRTP. On the other hand, projection neurons in ventral regions of the PN were characterized by an irregular coverage of their distal dendrites by appendages while those in the dorsal PN and the NRTP were virtually devoid of them. The NRTP, dorsal, and medial PN tended to display larger somata and more primary dendrites than ventral regions of the PN. These differences, however, do not allow the differentiation of projection neurons within the PN from those in the NRTP. They rather reflect a dorso-ventral gradient ignoring the border between the nuclei. Accordingly, a cluster analysis did not differentiate distinct types of projection neurons within the total sample. In both nuclei, multiple linear regression analysis revealed that the size of dendritic fields was strongly correlated with the length of terminal dendrites while it did not depend on other parameters of the dendritic field. Thus, larger dendritic fields seem not to be accompanied by a higher complexity but rather may be used to extend the reach of a projection neuron within the arrangement of afferent terminals. We suggest that these similarities within dendritic properties in PN and NRTP projection neurons reflect similar processing of afferent information in both precerebellar nuclei.

  14. Correlations between topography and intraflow width behavior in Martian and terrestrial lava flows

    NASA Astrophysics Data System (ADS)

    Peitersen, Matthew N.; Crown, David A.

    2000-02-01

    Local correlations between topography and width behavior within lava flows at Puu Oo, Mount Etna, Glass Mountain, Cerro Bayo, Alba Patera, Tyrrhena Patera, Elysium Mons, and Olympus Mons were investigated. For each flow, width and slope data were both referenced via downflow distance as a sequence of points; the data were then divided into collections of adjacent three-point features and two-point segments. Four discrete types of analyses were conducted: (1) Three-point analysis examined positional correlations between width and slope features, (2) two-point analysis did the same for flow segments, (3) mean slope analysis included segment slope comparisons, and (4) sudden width behavior analysis measured abruptness of width changes. The distribution of types of correlations compared to random combinations of features and segments does not suggest a significant correlation between flow widths and local underlying slopes and indicates that for these flows at least, other factors have more influence on changes in width than changes in underlying topography. Mean slopes underlying narrowing, widening, and constant flow width segments were calculated. An inverse correlation between slope and width was found only at Mount Etna, where slopes underlying narrowing segments were greater than those underlying widening in 62% of the examined flows. For the majority of flows at Mount Etna, Puu Oo, and Olympus Mons, slopes were actually greatest under constant width segments; this may imply a topographically dependent resistance to width changes. The rate of change of width was also examined. Sudden width changes are relatively common at Puu Oo, Mount Etna, Elysium Mons, and Tyrrhena Patera and relatively rare at Glass Mountain, Cerro Bayo, Olympus Mons, and Alba Patera. After correction for mapping scale, Puu Oo, Mount Etna, Olympus Mons, and Alba Patera appear to fall on the same trend; Glass Mountain exhibits unusually small amounts of sudden width behavior, and Tyrrhena Patera exhibits a relatively large number of sudden width behavior occurrences.

  15. Structures data collection for The National Map using volunteered geographic information

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Korris, Erin M.; Walter, Jennifer L.; Matthews, Greg D.

    2012-01-01

    The U.S. Geological Survey (USGS) has historically sponsored volunteered data collection projects to enhance its topographic paper and digital map products. This report describes one phase of an ongoing project to encourage volunteers to contribute data to The National Map using online editing tools. The USGS recruited students studying geographic information systems (GIS) at the University of Colorado Denver and the University of Denver in the spring of 2011 to add data on structures - manmade features such as schools, hospitals, and libraries - to four quadrangles covering metropolitan Denver. The USGS customized a version of the online Potlatch editor created by the OpenStreetMap project and populated it with 30 structure types drawn from the Geographic Names Information System (GNIS), a USGS database of geographic features. The students corrected the location and attributes of these points and added information on structures that were missing. There were two rounds of quality control. Student volunteers reviewed each point, and an in-house review of each point by the USGS followed. Nine-hundred and thirty-eight structure points were initially downloaded from the USGS database. Editing and quality control resulted in 1,214 structure points that were subsequently added to The National Map. A post-project analysis of the data shows that after student edit and peer review, 92 percent of the points contributed by volunteers met National Map Accuracy Standards for horizontal accuracy. Lessons from this project will be applied to later phases. These include: simplifying editing tasks and the user interfaces, stressing to volunteers the importance of adding structures that are missing, and emphasizing the importance of conforming to editorial guidelines for formatting names and addresses of structures. The next phase of the project will encompass the entire State of Colorado and will allow any citizen to contribute structures data. Volunteers will benefit from this project by engaging with their local geography and contributing to a national resource of topographic information that remains in the public domain for anyone to download.

  16. Working Around Cosmic Variance: Remote Quadrupole Measurements of the CMB

    NASA Astrophysics Data System (ADS)

    Adil, Arsalan; Bunn, Emory

    2018-01-01

    Anisotropies in the CMB maps continue to revolutionize our understanding of the Cosmos. However, the statistical interpretation of these anisotropies is tainted with a posteriori statistics. The problem is particularly emphasized for lower order multipoles, i.e. in the cosmic variance regime of the power spectrum. Naturally, the solution lies in acquiring a new data set – a rather difficult task given the sample size of the Universe. The CMB temperature, in theory, depends on: the direction of photon propagation, the time at which the photons are observed, and the observer’s location in space. In existing CMB data, only the first parameter varies. However, as first pointed out by Kamionkowski and Loeb, a solution lies in making the so-called “Remote Quadrupole Measurements” by analyzing the secondary polarization produced by incoming CMB photons via the Sunyaev-Zel’dovich (SZ) effect. These observations allow us to measure the projected CMB quadrupole at the location and look-back time of a galaxy cluster. At low redshifts, the remote quadrupole is strongly correlated with the CMB anisotropy from our last scattering surface. We provide here a formalism for computing the covariance and correlation matrices for both the two-point correlation function on the last scattering surface of a galaxy cluster and the cross correlation of the remote quadrupole with the local CMB. We then calculate these matrices based on a fiducial model and a non-standard model that suppresses power at large angles for ~10^4 clusters up to z=2. We anticipate making a priori predictions of the differences between our expectations for the standard and non-standard models. Such an analysis is timely in the wake of the CMB S4 era, which will provide us with an extensive SZ cluster catalogue.

  17. Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables.

    PubMed

    Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub

    2016-12-02

    We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.

  18. Natural occupation numbers in two-electron quantum rings.

    PubMed

    Tognetti, Vincent; Loos, Pierre-François

    2016-02-07

    Natural orbitals (NOs) are central constituents for evaluating correlation energies through efficient approximations. Here, we report the closed-form expression of the NOs of two-electron quantum rings, which are prototypical finite-extension systems and new starting points for the development of exchange-correlation functionals in density functional theory. We also show that the natural occupation numbers for these two-electron paradigms are in general non-vanishing and follow the same power law decay as atomic and molecular two-electron systems.

  19. Natural occupation numbers in two-electron quantum rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tognetti, Vincent, E-mail: vincent.tognetti@univ-rouen.fr; Loos, Pierre-François

    2016-02-07

    Natural orbitals (NOs) are central constituents for evaluating correlation energies through efficient approximations. Here, we report the closed-form expression of the NOs of two-electron quantum rings, which are prototypical finite-extension systems and new starting points for the development of exchange-correlation functionals in density functional theory. We also show that the natural occupation numbers for these two-electron paradigms are in general non-vanishing and follow the same power law decay as atomic and molecular two-electron systems.

  20. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
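
    The sketch below uses ordinary (non-iterative) linear discriminant analysis from scikit-learn as a stand-in for the LDA-ITER procedure described above; the trajectories are synthetic (frames × features) arrays, so the data, sizes, and labels are assumptions for illustration only.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    traj_a = rng.normal(0.0, 1.0, size=(500, 30))   # condition A (e.g., wild-type run)
    traj_b = rng.normal(0.3, 1.0, size=(500, 30))   # condition B (e.g., point-mutant run)

    X = np.vstack([traj_a, traj_b])
    y = np.array([0] * len(traj_a) + [1] * len(traj_b))

    # One discriminant axis separating the two simulations; LDA-ITER would refine this
    # projection iteratively so that the projected trajectories do not overlap.
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    proj = lda.transform(X)
    print("class means along the discriminant axis:",
          float(proj[y == 0].mean()), float(proj[y == 1].mean()))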

  1. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

  2. Controlling the loss of quantum correlations via quantum memory channels

    NASA Astrophysics Data System (ADS)

    Duran, Durgun; Verçin, Abdullah

    2018-07-01

    A generic behavior of quantum correlations during any quantum process taking place in a noisy environment is that they are non-increasing. We have shown that mitigation of these decreases, providing relative enhancements in correlations, is possible by means of quantum memory channels which model correlated environmental quantum noises. For two-qubit systems subject to mixtures of two-use actions of different decoherence channels, we point out that improvement in correlations can be achieved in such a way that the input-output fidelity is also as high as possible. These make it possible to create the optimal conditions for realizing any quantum communication task in a noisy environment.

  3. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
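
    A software sketch of the computation the architecture accelerates is shown below: a projective (homography) model is estimated from matched points with RANSAC, using a generic direct-linear-transform solver in floating point; the solver, thresholds, and iteration count are assumptions and do not reflect the paper's fixed-point hardware design.

    import numpy as np

    def fit_homography(src, dst):
        """Direct linear transform from >= 4 correspondences (N x 2 arrays)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)                     # null-space vector as 3x3 homography

    def project(h, pts):
        p = np.column_stack([pts, np.ones(len(pts))]) @ h.T
        return p[:, :2] / p[:, 2:3]

    def ransac_homography(src, dst, n_iter=500, thresh=2.0, seed=0):
        rng = np.random.default_rng(seed)
        best_h, best_inliers = None, np.zeros(len(src), dtype=bool)
        for _ in range(n_iter):
            idx = rng.choice(len(src), 4, replace=False)  # minimal sample of 4 matches
            h = fit_homography(src[idx], dst[idx])
            err = np.linalg.norm(project(h, src) - dst, axis=1)
            inliers = err < thresh                        # refine false matches
            if inliers.sum() > best_inliers.sum():
                best_h = fit_homography(src[inliers], dst[inliers])
                best_inliers = inliers
        return best_h, best_inliers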

  4. Correlation and 3D-tracking of objects by pointing sensors

    DOEpatents

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.

  5. On non-primitively divergent vertices of Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Huber, Markus Q.

    2017-11-01

    Two correlation functions of Yang-Mills beyond the primitively divergent ones, the two-ghost-two-gluon and the four-ghost vertices, are calculated and their influence on lower vertices is examined. Their full (transverse) tensor structure is taken into account. As input, a solution of the full two-point equations - including two-loop terms - is used that respects the resummed perturbative ultraviolet behavior. A clear hierarchy is found with regard to the color structure that reduces the number of relevant dressing functions. The impact of the two-ghost-two-gluon vertex on the three-gluon vertex is negligible, which is explained by the fact that all non-small dressing functions drop out due to their color factors. Only in the ghost-gluon vertex a small net effect below 2% is seen. The four-ghost vertex is found to be extremely small in general. Since these two four-point functions do not enter into the propagator equations, these findings establish their small overall effect on lower correlation functions.

  6. Test-retest reliability of the irrational performance beliefs inventory.

    PubMed

    Turner, M J; Slater, M J; Dixon, J; Miller, A

    2018-02-01

    The irrational performance beliefs inventory (iPBI) was developed to measure irrational beliefs within performance domains such as sport, academia, business, and the military. Past research indicates that the iPBI has good construct, concurrent, and predictive validity, but the test-retest reliability of the iPBI has not yet been examined. Therefore, in the present study the iPBI was administered to university sport and exercise students (n = 160) and academy soccer athletes (n = 75) at three time points. Time point two occurred 7 days after time point one, and time point three occurred 21 days after time point two. In addition, social desirability was also measured. Repeated-measures MANCOVAs, intra-class coefficients, and Pearson's (r) correlations demonstrate that the iPBI has good test-retest reliability, with iPBI scores remaining stable across the three time points. Pearson's correlation coefficients revealed no relationships between the iPBI and social desirability, indicating that the iPBI is not highly susceptible to response bias. The results are discussed with reference to the continued usage and development of the iPBI, and future research recommendations relating to the investigation of irrational performance beliefs are proposed.

  7. Preparing for Workplace Numeracy: A Modelling Perspective

    ERIC Educational Resources Information Center

    Wake, Geoff

    2015-01-01

    The starting point of this article is the question, "how might we inform an epistemology of numeracy from the point of view of better preparing young people for workplace competence?" To inform thinking illustrative data from two projects that researched into mathematics in workplace activity and the teaching and learning of modelling in…

  8. Ensemble Space-Time Correlation of Plasma Turbulence in the Solar Wind.

    PubMed

    Matthaeus, W H; Weygand, J M; Dasso, S

    2016-06-17

    Single-point measurements of turbulence cannot distinguish variations in space and time. We employ an ensemble of one- and two-point measurements in the solar wind to estimate the space-time correlation function in the comoving plasma frame. The method is illustrated using near-Earth spacecraft observations, employing ACE, Geotail, IMP-8, and Wind data sets. New results include an evaluation of both correlation time and correlation length from a single method, and a new assessment of the accuracy of the familiar frozen-in flow approximation. This novel view of the space-time structure of turbulence may prove essential in exploratory space missions such as Solar Probe Plus and Solar Orbiter, for which the frozen-in flow hypothesis may not be a useful approximation.

  9. Phase transition in 2-d system of quadrupoles on square lattice with anisotropic field

    NASA Astrophysics Data System (ADS)

    Sallabi, A. K.; Alkhttab, M.

    2014-12-01

    The Monte Carlo method is used to study a simple model of two-dimensional interacting quadrupoles on an ionic square lattice with anisotropic strength provided by the ionic lattice. Order parameter, susceptibility, and correlation function data show that this system forms an ordered structure with p(2×1) symmetry at low temperature. The p(2×1) structure undergoes an order-disorder phase transition into a disordered (1×1) phase at 8.3 K. The two-point correlation function shows an exponential dependence on distance both above and below the transition temperature. At Tc the two-point correlation function shows a power-law dependence on distance, C(r) ~ r^(-η). The value of the exponent η at Tc shows a small deviation from the Ising value and indicates that this system falls into the same universality class as the XY model with cubic anisotropy. This model can be applied to prototypical physisorbed quadrupole systems such as N2 on NaCl(100).
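
    As an illustration of how the exponent η in C(r) ~ r^(−η) at Tc can be extracted, the short sketch below performs a log-log linear fit on synthetic correlation data; the data and the value of η used here are assumptions for illustration, not the paper's results.

    import numpy as np

    r = np.arange(1, 30, dtype=float)          # distances on the lattice (assumed)
    eta_true = 0.28                            # synthetic exponent, illustration only
    rng = np.random.default_rng(4)
    c = r ** (-eta_true) * (1.0 + 0.02 * rng.standard_normal(r.size))

    slope, _ = np.polyfit(np.log(r), np.log(c), 1)   # log C = -eta * log r + const
    print("fitted eta:", -slope)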

  10. Colour-dressed hexagon tessellations for correlation functions and non-planar corrections

    NASA Astrophysics Data System (ADS)

    Eden, Burkhard; Jiang, Yunfeng; le Plat, Dennis; Sfondrini, Alessandro

    2018-02-01

    We continue the study of four-point correlation functions by the hexagon tessellation approach initiated in [38] and [39]. We consider planar tree-level correlation functions in N=4 supersymmetric Yang-Mills theory involving two non-protected operators. We find that, in order to reproduce the field theory result, it is necessary to include SU( N) colour factors in the hexagon formalism; moreover, we find that the hexagon approach as it stands is naturally tailored to the single-trace part of correlation functions, and does not account for multi-trace admixtures. We discuss how to compute correlators involving double-trace operators, as well as more general 1 /N effects; in particular we compute the whole next-to-leading order in the large- N expansion of tree-level BMN two-point functions by tessellating a torus with punctures. Finally, we turn to the issue of "wrapping", Lüscher-like corrections. We show that SU( N) colour-dressing reproduces an earlier empirical rule for incorporating single-magnon wrapping, and we provide a direct interpretation of such wrapping processes in terms of N=2 supersymmetric Feynman diagrams.

  11. The Oxfordshire Community Stroke Project classification: correlation with imaging, associated complications, and prediction of outcome in acute ischemic stroke.

    PubMed

    Pittock, Sean J; Meldrum, Dara; Hardiman, Orla; Thornton, John; Brennan, Paul; Moroney, Joan T

    2003-01-01

    This preliminary study investigates the risk factor profile, post stroke complications, and outcome for four OCSP (Oxfordshire Community Stroke Project Classification) subtypes. One hundred seventeen consecutive ischemic stroke patients were clinically classified into 1 of 4 subtypes: total anterior (TACI), partial anterior (PACI), lacunar (LACI), and posterior (POCI) circulation infarcts. Study evaluations were performed at admission, 2 weeks, and 6 months. There was a good correlation between clinical classification and radiological diagnosis if a negative CT head was considered consistent with a lacunar infarction. No significant difference in risk factor profile was observed between subtypes. The TACI group had significantly higher mortality (P < .001), morbidity (P < .001, as per disability scales), length of hospital stay (P < .001), and complications (respiratory tract infection and seizures [P < .01]) as compared to the other three groups which were all similar at the different time points. The only significant difference found was the higher rate of stroke recurrence within the first 6 months in the POCI group (P < .001). The OCSP classification identifies two major groups (TACI and other 3 groups combined) who behave differently with respect to post stroke outcome. Further study with larger numbers of patients and thus greater power will be required to allow better discrimination of OCSP subtypes in respect of risk factors, complications, and outcomes if the OCSP is to be used to stratify patients in clinical trials.

  12. Characterization of topological phases of dimerized Kitaev chain via edge correlation functions

    NASA Astrophysics Data System (ADS)

    Wang, Yucheng; Miao, Jian-Jian; Jin, Hui-Ke; Chen, Shu

    2017-11-01

    We study analytically topological properties of a noninteracting modified dimerized Kitaev chain and an exactly solvable interacting dimerized Kitaev chain under open boundary conditions by analyzing two introduced edge correlation functions. The interacting dimerized Kitaev chain at the symmetry point Δ =t and the chemical potential μ =0 can be exactly solved by applying two Jordan-Wigner transformations and a spin rotation, which permits us to calculate the edge correlation functions analytically. We demonstrate that the two edge correlation functions can be used to characterize the trivial, Su-Schrieffer-Heeger-like topological and topological superconductor phases of both the noninteracting and interacting systems and give their phase diagrams.

  13. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking seismic-wave sources of large/mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized with the maximum amplitude of the P phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared sum of the GF. The normalized waveforms or the cross-correlation functions are then stacked over all the stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and the real data of the Mw 8.3 2015 Illapel, Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of earthquakes.
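
    A minimal sketch of the standard BP stacking step described above is given below: each station record is shifted by the theoretical travel time from a candidate source point and the shifted records are summed, with the stacked energy peaking at the radiating point. The geometry, constant velocity, and synthetic wavelet are assumptions, and the potency-rate normalization proposed in the abstract (dividing by a Green's-function amplitude) is not included.

    import numpy as np

    fs, nt, v = 20.0, 1200, 6.0                       # sampling (Hz), samples, km/s (assumed)
    stations = np.array([[100.0, 0.0], [0.0, 120.0], [-90.0, 40.0], [50.0, -110.0]])  # km
    grid = np.stack(np.meshgrid(np.linspace(-20, 20, 41),
                                np.linspace(-20, 20, 41)), axis=-1).reshape(-1, 2)
    true_src, t0 = np.array([5.0, -3.0]), 10.0        # true epicentre (km), origin time (s)

    # synthetic records: a short wavelet arriving at distance/velocity after t0
    t = np.arange(nt) / fs
    waveforms = np.array([np.exp(-((t - t0 - np.linalg.norm(s - true_src) / v) / 0.3) ** 2)
                          for s in stations])

    # stack: for each grid point, align records on the predicted arrival and sum energy
    stack = np.zeros(len(grid))
    for i, g in enumerate(grid):
        shifts = (np.linalg.norm(stations - g, axis=1) / v * fs).astype(int)
        aligned = np.array([np.roll(w, -s) for w, s in zip(waveforms, shifts)])
        stack[i] = np.max(aligned.sum(axis=0) ** 2)

    print("best grid point:", grid[np.argmax(stack)])  # close to true_src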

  14. What can we learn from the dynamics of entanglement and quantum discord in the Tavis-Cummings model?

    NASA Astrophysics Data System (ADS)

    Restrepo, Juliana; Rodriguez, Boris A.

    We revisit the problem of the dynamics of quantum correlations in the exact Tavis-Cummings model. We show that many of the dynamical features of quantum discord attributed to dissipation are already present in the exact framework and are due to the well known non-linearities in the model and to the choice of initial conditions. Through a comprehensive analysis, supported by explicit analytical calculations, we find that the dynamics of entanglement and quantum discord are far from being trivial or intuitive. In this context, we find states that are indistinguishable from the point of view of entanglement and distinguishable from the point of view of quantum discord, states where the two quantifiers give opposite information and states where they give roughly the same information about correlations at a certain time. Depending on the initial conditions, this model exhibits a fascinating range of phenomena that can be used for experimental purposes such as: Robust states against change of manifold or dissipation, tunable entanglement states and states with a counterintuitive sudden birth as the number of photons increase. We furthermore propose an experiment called quantum discord gates where discord is zero or non-zero depending on the number of photons. This work was supported by the Vicerrectoria de Investigacion of the Universidad Antonio Narino, Colombia under Project Number 20141031 and by the Departamento Administrativo de Ciencia, Tecnologia e Innovacion (COLCIENCIAS) of Colombia under Grant Number.

  15. Modeling of turbulent transport as a volume process

    NASA Technical Reports Server (NTRS)

    Jennings, Mark J.; Morel, Thomas

    1987-01-01

    An alternative type of modeling was proposed for the turbulent transport terms in Reynolds-averaged equations. One particular implementation of the model was considered, based on the two-point velocity correlations. The model was found to reproduce the trends but not the magnitude of the nonisotropic behavior of the turbulent transport. Some interesting insights were developed concerning the shape of the contracted two-point correlation volume. This volume is strongly deformed by mean shear from the spherical shape found in unstrained flows. Of particular interest is the finding that the shape is sharply waisted, indicating preferential lines of communication, which should have a direct effect on turbulent transfer and on other processes.

  16. Nucleon Axial and Electromagnetic Form Factors

    NASA Astrophysics Data System (ADS)

    Jang, Yong-Chull; Bhattacharya, Tanmoy; Gupta, Rajan; Lin, Huey-Wen; Yoon, Boram

    2018-03-01

    We present results for the isovector axial, induced pseudoscalar, electric, and magnetic form factors of the nucleon. The calculations were done using 2 + 1 + 1-flavor HISQ ensembles generated by the MILC collaboration with lattice spacings a ≈ 0.12, 0.09, 0.06 fm and pion masses Mπ ≈ 310, 220, 130 MeV. Excited-state contamination is controlled by using four-state fits to two-point correlators and by comparing two- versus three-state fits in three-point correlators. The Q2 behavior is analyzed using the model-independent z-expansion and the dipole ansatz. Final results for the charge radii and magnetic moment are obtained using a simultaneous fit in Mπ, lattice spacing a, and finite volume.

  17. Modulational Instability of Cylindrical and Spherical NLS Equations. Statistical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grecu, A. T.; Grecu, D.; Visinescu, Anca

    2010-01-21

    The modulational (Benjamin-Feir) instability for cylindrical and spherical NLS equations (c/s NLS equations) is studied using a statistical approach (SAMI). A kinetic equation for a two-point correlation function is written and analyzed using the Wigner-Moyal transform. The linear stability of the Fourier transform of the two-point correlation function is studied and an implicit integral form for the dispersion relation is found. This is solved for different expressions of the initial spectrum (delta-spectrum, Lorentzian, Gaussian), and in the case of a Lorentzian spectrum the total growth of the instability is calculated. The similarities and differences with the usual one-dimensional NLS equation are emphasized.

  18. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation-window projections instead of the window's two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended to be used for initial velocity field calculation, with further correction using more accurate techniques.
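
    A minimal sketch of the quantity being accelerated is given below: the zero-normalised cross-correlation coefficient of two interrogation windows, followed by the one-dimensional 'projection' idea described above, in which each window is replaced by its row and column sums before correlating; the window sizes and the synthetic image are assumptions for illustration.

    import numpy as np

    def zncc(a, b):
        """Zero-normalised cross-correlation coefficient of two equal-size arrays."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    rng = np.random.default_rng(2)
    frame = rng.random((128, 128))
    win_a = frame[40:72, 40:72]                 # interrogation window in frame 1
    win_b = frame[43:75, 40:72]                 # same pattern shifted by 3 px in frame 2

    print("full 2-D ZNCC     :", zncc(win_a, win_b))
    # projection variant: correlate the row/column sums instead of the 2-D fields
    print("projection ZNCC(x):", zncc(win_a.sum(axis=0), win_b.sum(axis=0)))
    print("projection ZNCC(y):", zncc(win_a.sum(axis=1), win_b.sum(axis=1)))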

  19. Projection matrix acquisition for cone-beam computed tomography iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao

    2017-02-01

    The projection matrix is an essential and time-consuming part of iterative reconstruction in computed tomography (CT). In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is considered as consisting of three orthogonal sets of equally spaced, parallel planes rather than of individual voxels. After finding the intersections of the rays with the surfaces of the voxels, the coordinates of the intersection points are compared with the voxel vertices to obtain the indices of the voxels that each ray traverses. Without considering the slope of the ray relative to each voxel, only the positions of two points need to be compared. Finally, computer simulation is used to verify the effectiveness of the algorithm.
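
    A sketch of the geometric step the abstract describes, collecting the parametric positions where a source-to-detector ray crosses three orthogonal sets of equally spaced planes (in the spirit of Siddon-type ray tracing), is given below; the grid size, spacing, and ray endpoints are assumptions, and the paper's specific two-point index comparison is not reproduced.

    import numpy as np

    def plane_crossings(p0, p1, n_planes, spacing, origin=0.0):
        """Parametric values t in (0, 1) where the segment p0->p1 crosses each plane set."""
        t_all = []
        for axis in range(3):
            planes = origin + spacing * np.arange(n_planes[axis] + 1)   # plane coordinates
            d = p1[axis] - p0[axis]
            if abs(d) > 1e-12:
                t = (planes - p0[axis]) / d
                t_all.append(t[(t > 0.0) & (t < 1.0)])
        return np.unique(np.concatenate(t_all))

    p_src = np.array([-10.0, 12.0, 3.0])        # X-ray source position (assumed)
    p_det = np.array([74.0, 50.0, 60.0])        # detector pixel position (assumed)
    t = plane_crossings(p_src, p_det, n_planes=(64, 64, 64), spacing=1.0)

    # midpoints between successive crossings identify the traversed voxels (unit spacing)
    mids = p_src + 0.5 * (t[:-1] + t[1:])[:, None] * (p_det - p_src)
    voxel_idx = np.floor(mids).astype(int)
    inside = np.all((voxel_idx >= 0) & (voxel_idx < 64), axis=1)  # keep voxels in the grid
    print(len(t), "crossings;", int(inside.sum()), "voxels traversed inside the grid")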

  20. Role of the charge state of interface defects in electronic inhomogeneity evolution with gate voltage in graphene

    NASA Astrophysics Data System (ADS)

    Singh, Anil Kumar; Gupta, Anjan K.

    2018-05-01

    Evolution of electronic inhomogeneities with back-gate voltage in graphene on SiO2 was studied using room temperature scanning tunneling microscopy and spectroscopy. Reversal of contrast in some places in the conductance maps and sharp changes in cross correlations between topographic and conductance maps, when graphene Fermi energy approaches its Dirac point, are attributed to the change in charge state of interface defects. The spatial correlations in the conductance maps, described by two length scales, and their growth during approach to Dirac point, show a qualitative agreement with the predictions of the screening theory of graphene. Thus a sharp change in the two length scales close to the Dirac point, seen in our experiments, is interpreted in terms of the change in charge state of some of the interface defects. A systematic understanding and control of the charge state of defects can help in memory applications of graphene.

  1. An experimental investigation of the separating/reattaching flow over a backstep

    NASA Technical Reports Server (NTRS)

    Jovic, Srboljub

    1993-01-01

    This progress report covers the grant period from March until the end of January 1993. Extensive data reduction and analysis of single and two-point measurements for a backward-facing experiment were performed. Pertinent results are presented in two conference papers which are appended to this report. The titles of the papers are as follows: (1) 'Two-point correlation measurements in a recovering turbulent boundary layer'; and (2) 'An experimental study on the recovery of a turbulent boundary layer downstream of the reattachment'.

  2. Cluster Analysis and Gaussian Mixture Estimation of Correlated Time-Series by Means of Multi-dimensional Scaling

    NASA Astrophysics Data System (ADS)

    Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

    We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we use a mixture of Gaussians to fit the data set with several Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might be naturally assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, those around 11 March 2011.
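
    A hedged sketch of this pipeline is given below: a cross-correlation matrix is converted to distances, embedded in two dimensions with metric MDS, and the number of Gaussian mixture components is chosen by minimising the AIC. The synthetic returns and the distance map d_ij = sqrt(2(1 − ρ_ij)) are assumptions for illustration, not the paper's data or exact choices.

    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    returns = rng.standard_normal((250, 40))            # 250 days x 40 synthetic stocks
    rho = np.corrcoef(returns, rowvar=False)            # cross-correlation matrix
    dist = np.sqrt(2.0 * (1.0 - rho))                   # a common correlation-to-distance map

    # two-dimensional scatter of stocks from the precomputed distance matrix
    xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

    # fit mixtures of 1..5 Gaussians and keep the AIC-minimising one
    aics = {k: GaussianMixture(n_components=k, random_state=0).fit(xy).aic(xy)
            for k in range(1, 6)}
    print("AIC-optimal number of clusters:", min(aics, key=aics.get))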

  3. Neural field theory of perceptual echo and implications for estimating brain connectivity

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.

    2018-04-01

    Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.

  4. Gaussian statistics of the cosmic microwave background: Correlation of temperature extrema in the COBE DMR two-year sky maps

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.

    1995-01-01

    We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a_lm are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.

  5. Classification and identification of reading and math disabilities: the special case of comorbidity.

    PubMed

    Branum-Martin, Lee; Fletcher, Jack M; Stuebing, Karla K

    2013-01-01

    Much of learning disabilities research relies on categorical classification frameworks that use psychometric tests and cut points to identify children with reading or math difficulties. However, there is increasing evidence that the attributes of reading and math learning disabilities are dimensional, representing correlated continua of severity. We discuss issues related to categorical and dimensional approaches to reading and math disabilities, and their comorbid associations, highlighting problems with the use of cut points and correlated assessments. Two simulations are provided in which the correlational structure of a set of cognitive and achievement data is simulated from a single population with no categorical structures. The simulations produce profiles remarkably similar to reported profile differences, suggesting that the patterns are a product of the cut point and the correlational structure of the data. If dimensional approaches better fit the attributes of learning disability, new conceptualizations of, and better methods for, identification and intervention may emerge, especially for comorbid associations of reading and math difficulties.

  6. Medical student psychological distress and academic performance.

    PubMed

    Dendle, Claire; Baulch, Julie; Pellicano, Rebecca; Hay, Margaret; Lichtwark, Irene; Ayoub, Sally; Clarke, David M; Morand, Eric F; Kumar, Arunaz; Leech, Michelle; Horne, Kylie

    2018-01-21

    The impact of medical student psychological distress on academic performance has not been systematically examined. This study provided an opportunity to closely examine the potential impacts of workplace- and study-related stress factors on students' psychological distress and their academic performance during their first clinical year. This one-year prospective cohort study was performed at a tertiary hospital-based medical school in Melbourne, Australia. Students completed a questionnaire at three time points during the year. The questionnaire included the validated Kessler psychological distress scale (K10) and the General Health Questionnaire-28 (GHQ-28), as well as items about sources of workplace stress. Academic outcome scores were aggregated and correlated with questionnaire results. One hundred and twenty-six students participated; 126 (94.7%), 102 (76.7%), and 99 (74.4%) at time points one, two, and three, respectively. 33.1% reported psychological distress at time point one, increasing to 47.4% at time point three. There was no correlation between the K10 scores and academic performance. There was a weak negative correlation between the GHQ-28 at time point three and academic performance. Keeping up to date with knowledge, the need to do well, and fear of negative feedback were the most common workplace stress factors. Poor correlation was noted between psychological distress and academic performance.

  7. Inside and outside: Boxes Inspired by Joseph Cornell

    ERIC Educational Resources Information Center

    Winters, Laurel

    2009-01-01

    In this article, the author describes an art project inspired by the work of Joseph Cornell. The project called for designing both the outside and the inside of a cigar box according to the student's theme. Thus, students needed to consider the viewer's vantage point with the box both closed and open, general design elements, two-dimensional and…

  8. Using Capstones to Develop Research Skills and Graduate Capabilities: A Case Study from Physiology

    ERIC Educational Resources Information Center

    Julien, Brianna L.; Lexis, Louise; Schuijers, Johannes; Samiric, Tom; McDonald, Stuart

    2012-01-01

    In 2011, the Department of Human Biosciences introduced two physiology capstone subjects as part of the Design for Learning Project at La Trobe University. Consistent with the project, the aims of these subjects were to provide an effective culmination point for the Bachelor of Health Science course and to offer students orientation to…

  9. The New Hampshire High School Career Education Model. Final Report.

    ERIC Educational Resources Information Center

    Keene State Coll., NH.

    The purpose of this project was to improve the quality and demonstrate the most effective methods and techniques of career education in four high schools in the state of New Hampshire. The focus was to effect change at two points: the first was the academic curriculum, where committees in each of the project schools reviewed their existing…

  10. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require projector calibration parameters or reference planes placed at many known positions. Introducing the projector calibration can reduce the accuracy of the reconstruction result, and setting the reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method without the projector's parameters is proposed, and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of our proposed approach, and the measurement accuracy can reach about 0.0454 mm.
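
    A hedged sketch of the geometric core of such a beam-based reconstruction: a projector light beam is fixed by its two mapped intersection points with the reference planes, the reflected (viewing) ray by the camera centre and a back-projected pixel direction, and the surface point is taken at the closest approach of the two 3D lines. The coordinates below are illustrative values, not the paper's calibration.

      # Sketch: 3D point from a projector beam (through two reference planes) and a camera ray.
      import numpy as np

      def closest_point_between_lines(p1, d1, p2, d2):
          # lines p + t*d; solve for the feet of the mutual perpendicular, return their midpoint
          w0 = p1 - p2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b
          t1 = (b * e - c * d) / denom
          t2 = (a * e - b * d) / denom
          return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

      # Projector beam through corresponding (mapped) subpixel points on the two reference planes.
      q_plane0 = np.array([10.0, 5.0, 0.0])       # point on reference plane z = 0 (illustrative)
      q_plane1 = np.array([11.0, 5.2, 50.0])      # mapped point on reference plane z = 50
      beam_dir = q_plane1 - q_plane0

      # Reflected (viewing) ray from a pinhole camera model.
      cam_center = np.array([0.0, 0.0, 300.0])
      pixel_dir = np.array([0.04, 0.02, -1.0])    # back-projected pixel direction (illustrative)

      print(closest_point_between_lines(q_plane0, beam_dir, cam_center, pixel_dir))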

  11. SU-E-T-72: A Retrospective Correlation Analysis On Dose-Volume Control Points and Treatment Outcomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, A; Nohadani, O; Refaat, T

    2015-06-15

    Purpose: To quantify the correlation between dose-volume control points and treatment outcomes. Specifically, two outcomes are analyzed: occurrence of radiation-induced dysphagia and target complications. The results inform the treatment planning process when competing dose-volume criteria require relaxation. Methods: 32 patients, treated with whole-field sequential intensity modulated radiation therapy during the 2009–2010 period, are considered for this study. Acute dysphagia, categorized into 3 grades, is observed in all patients. 3 patients are observed in grade 1, 17 patients in grade 2, and 12 patients in grade 3. Ordinal logistic regression is employed to establish correlations between grades of dysphagia and dose to the cervico-thoracic esophagus. In particular, minimum (Dmin), mean (Dmean), and maximum (Dmax) dose control points are analyzed. Additionally, target complication, which includes local-regional recurrence and/or distant metastasis, is observed in 4 patients. Binary logistic regression is used to quantify the correlation between target complication and four dose control points. Namely, the ICRU-recommended dose control points D2, D50, D95, and D98 are analyzed. Results: For correlation with dysphagia, Dmin on the cervico-thoracic esophagus is statistically significant (p-value = 0.005). Additionally, Dmean on the cervico-thoracic esophagus is also significant in association with dysphagia (p-value = 0.012). However, no correlation was observed between Dmax and dysphagia (p-value = 0.263). For target complications, D50 on the target is a statistically significant dose control point (p-value = 0.032). No correlations were observed between treatment complications and D2 (p-value = 0.866), D95 (p-value = 0.750), and D98 (p-value = 0.710) on the target. Conclusion: Significant correlations are observed between radiation-induced dysphagia and Dmean (and Dmin) to the cervico-thoracic esophagus. Additionally, a correlation between target complications and the median dose to the target (D50) is observed. Quantification of these correlations can inform treatment planners when competing objectives require relaxation of target D50 or Dmean (or Dmin) to the cervico-thoracic esophagus.
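
    A hedged sketch of the binary-logistic part of such an analysis, using statsmodels on synthetic placeholder data (the dose values and outcomes below are not the study data); the ordinal regression on dysphagia grades would follow the same pattern with an ordered-outcome model.

      # Sketch: binary logistic regression of a treatment outcome on a dose control point.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 32
      d50 = rng.normal(60.0, 5.0, size=n)                # synthetic target D50 values (Gy)
      logit_p = -27.0 + 0.45 * d50                       # assumed dose-response for the toy data
      outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # 1 = complication observed

      X = sm.add_constant(d50)                           # intercept + dose control point
      fit = sm.Logit(outcome, X).fit(disp=0)
      print(fit.params)                                  # fitted intercept and D50 coefficient
      print(fit.pvalues)                                 # p-values quantifying the correlation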

  12. Regional moisture balance control of landslide motion: implications for landslide forecasting in a changing climate

    USGS Publications Warehouse

    Coe, Jeffrey A.

    2012-01-01

    I correlated 12 years of annual movement of 18 points on a large, continuously moving, deep-seated landslide with a regional moisture balance index (moisture balance drought index, MBDI). I used MBDI values calculated from a combination of historical precipitation and air temperature data from A.D. 1895 to 2010, and downscaled climate projections using the Intergovernmental Panel on Climate Change A2 emissions scenario for 2011–2099. At the landslide, temperature is projected to increase ~0.5 °C/10 yr between 2011 and 2099, while precipitation decreases at a rate of ~2 mm/10 yr. Landslide movement correlated with the MBDI with integration periods of 12 and 48 months. The correlation between movement and MBDI suggests that the MBDI functions as a proxy for groundwater pore pressures and landslide mobility. I used the correlation to forecast decreasing landslide movement between 2011 and 2099, with the head of the landslide expected to stop moving in the mid-21st century. The MBDI, or a similar moisture balance index that accounts for evapotranspiration, has considerable potential as a tool for forecasting the magnitude of ongoing deep-seated landslide movement, and for assessing the onset or likelihood of regional, deep-seated landslide activity.

  13. Feel like you belong: on the bidirectional link between emotional fit and group identification in task groups

    PubMed Central

    Delvaux, Ellen; Meeussen, Loes; Mesquita, Batja

    2015-01-01

    Three studies investigated the association between members’ group identification and the emotional fit with their group. In the first study, a cross-sectional study in a large organization, we replicated earlier research by showing that group identification and emotional fit are positively associated, using a broader range of emotions and using profile correlations to measure group members’ emotional fit. In addition, in two longitudinal studies, where groups of students were followed at several time points during their collaboration on a project, we tested the directionality of the relationship between group identification and emotional fit. The results showed a bidirectional, positive link between group identification and emotional fit, such that group identification and emotional fit either mutually reinforce or mutually dampen each other over time. We discuss how these findings increase insights in group functioning and how they may be used to change group processes for better or worse. PMID:26300806

  14. A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR.

    PubMed

    Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei

    2016-02-26

    Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) has recently been playing an increasingly significant role in remote sensing applications owing to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm was proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing in traditional back projection algorithms (BPAs). A two-dimensional point target spectrum model was first presented, and the bulk range cell migration correction (RCMC) was consequently derived to reduce range cell migration (RCM) and provide coarse focusing. As the bulk RCMC seriously changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation was introduced to compensate residual phase errors. Simulation results were presented based on a general geometric topology with non-parallel trajectories and unequal velocities for both transmitter and receiver platforms, showing satisfactory performance of the proposed method.

  15. A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR

    PubMed Central

    Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei

    2016-01-01

    Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) has recently been playing an increasingly significant role in remote sensing applications owing to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm was proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing in traditional back projection algorithms (BPAs). A two-dimensional point target spectrum model was first presented, and the bulk range cell migration correction (RCMC) was consequently derived to reduce range cell migration (RCM) and provide coarse focusing. As the bulk RCMC seriously changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation was introduced to compensate residual phase errors. Simulation results were presented based on a general geometric topology with non-parallel trajectories and unequal velocities for both transmitter and receiver platforms, showing satisfactory performance of the proposed method. PMID:26927117

  16. Atlas of point correlations at 30 mb and between 500 and 30 mb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loon, H. van; Shea, D.J.

    1992-12-01

    The National Center for Atmospheric Research has issued a technical note (Shea et al. 1992) with point correlations (teleconnections) on the 30-mb surface and between 500 and 30 mb. The correlations are for the two-month means January-February, March-April, July-August, and November-December, which were chosen because of these characteristics of the intraseasonal change in the stratosphere: (a) At 30 mb the annual cooling at higher latitudes often ends in December, which therefore is, on average, the coldest month at these latitudes. (b) Major midwinter warmings, during which the polar low is replaced by a high, nearly all occur in January-February. (c) The final (spring) warming of the lower stratosphere takes place in March-April. The correlations are based on two well-known datasets: the monthly mean temperatures and geopotential heights at 30 mb, derived from the daily historical maps from the Stratospheric Research Group, and monthly mean geopotential heights at 500 mb from the National Meteorological Center, Washington, D.C. The 500- and 30-mb heights span the years 1957-1988, and the 30-mb temperatures, the years 1964-1988. The teleconnection maps cover the region between 15°N and the North Pole. Correlations were also computed for the two halves of the period to spot any differences between them, but there were only minor or no differences. Examples of the point correlations are described below to indicate the type of material available in the technical note. The 5% local significance level for a sample of 31 is r = 0.36, and for n = 16 it is r = 0.52. The January 30-mb mean map should be used as a reference for the correlations. The technical note is available free of charge from NCAR, Information and Education Outreach Program, P.O. Box 3000, Boulder, CO 80307.

  17. Advanced MRI in Acute Military TBI

    DTIC Science & Technology

    2015-11-01

    advanced MRI methods, DTI and resting-state fMRI correlation analysis, in military TBI patients acutely after injury and correlate findings with TBI... Introduction: The objective of the project was to test two advanced MRI methods, DTI and resting-state fMRI correlation analysis, in... of Concussion Exam (MACE) (44) were reviewed. This brief cognitive test assesses orientation, immediate verbal memory, concentration, and short...

  18. Coping strategies among patients with newly diagnosed amyotrophic lateral sclerosis.

    PubMed

    Jakobsson Larsson, Birgitta; Nordin, Karin; Askmark, Håkan; Nygren, Ingela

    2014-11-01

    To prospectively identify different coping strategies among newly diagnosed amyotrophic lateral sclerosis patients and whether they change over time and to determine whether physical function, psychological well-being, age and gender correlated with the use of different coping strategies. Amyotrophic lateral sclerosis is a fatal disease with impact on both physical function and psychological well-being. Different coping strategies are used to manage symptoms and disease progression, but knowledge about coping in newly diagnosed amyotrophic lateral sclerosis patients is scarce. This was a prospective study with a longitudinal and descriptive design. A total of 33 patients were included and evaluation was made at two time points, one to three months and six months after diagnosis. Patients were asked to complete the Motor Neuron Disease Coping Scale and the Hospital Anxiety and Depression Scale. Physical function was estimated using the revised Amyotrophic Lateral Sclerosis Functional Rating Scale. The most commonly used strategies were support and independence. Avoidance/venting and information seeking were seldom used at both time points. The use of information seeking decreased between the two time points. Men did not differ from women, but patients ≤64 years used positive action more often than older patients. Amyotrophic Lateral Sclerosis Functional Rating Scale was positively correlated with positive action at time point 1, but not at time point 2. Patients' psychological well-being was correlated with the use of different coping strategies. Support and independence were the most used coping strategies, and the use of different strategies changed over time. Psychological well-being was correlated with different coping strategies in newly diagnosed amyotrophic lateral sclerosis patients. The knowledge about coping strategies in early stage of the disease may help the nurses to improve and develop the care and support for these patients. © 2014 John Wiley & Sons Ltd.

  19. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    NASA Astrophysics Data System (ADS)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
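
    For reference, a sketch of a standard 5-parameter HOD of the kind GRAND-HOD generalizes, written in the common Zheng et al. (2007)-style form with illustrative parameter values; this is not code taken from the GRAND-HOD package itself.

      # Sketch: mean central and satellite occupations of a standard 5-parameter HOD.
      import numpy as np
      from scipy.special import erf

      def mean_ncen(M, logMmin=13.0, sigma_logM=0.4):
          # expected number of central galaxies in a halo of mass M [Msun/h]
          return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))

      def mean_nsat(M, logM0=13.2, logM1=14.1, alpha=1.0, **cen_kwargs):
          # expected number of satellites, modulated by the central occupation
          M0, M1 = 10.0 ** logM0, 10.0 ** logM1
          nsat = (np.clip(M - M0, 0.0, None) / M1) ** alpha
          return mean_ncen(M, **cen_kwargs) * nsat

      halo_mass = np.logspace(12, 15, 7)                 # halo masses in Msun/h
      print(np.round(mean_ncen(halo_mass), 3))
      print(np.round(mean_nsat(halo_mass), 3))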

  20. Complexes and saddle point structures, vibrational frequencies and relative energies of intermediates for CH2Br + HBr ↔ CH3Br + Br

    NASA Astrophysics Data System (ADS)

    Espinosa-Garcia, J.

    Ab initio molecular orbital theory was used to study parts of the reaction between the CH2Br radical and the HBr molecule, and two possibilities were analysed: attack on the hydrogen and attack on the bromine of the HBr molecule. Optimized geometries and harmonic vibrational frequencies were calculated at the second-order Møller-Plesset perturbation theory level, and comparison with available experimental data was favourable. Then single-point calculations were performed at several higher levels of calculation. In the attack on the hydrogen of HBr, two stationary points were located on the direct hydrogen abstraction reaction path: a very weak hydrogen-bonded complex of reactants, C···HBr, close to the reactants, followed by the saddle point (SP). The effects of the level of calculation (method + basis set), spin projection, zero-point energy, thermal corrections (298 K), spin-orbit coupling and basis set superposition error (BSSE) on the energy changes were analysed. Taking the reaction enthalpy (298 K) as reference, agreement with experiment was obtained only when high correlation energy and large basis sets were used. It was concluded that at room temperature (i.e., with zero-point energy and thermal corrections), when the BSSE was included, the complex disappears and the activation enthalpy (298 K) ranges from 0.8 kcal mol^-1 to 1.4 kcal mol^-1 above the reactants, depending on the level of calculation. It was also concluded that this result is the balance of a complicated interplay of many factors, which are affected by uncertainties in the theoretical calculations. Finally, another possible complex (X complex), which involves the alkyl radical being attracted to the halogen end of HBr (C···BrH), was also explored. It was concluded that this X complex does not exist at room temperature.

  1. Investigation of Solar Wind Correlations and Solar Wind Modifications Near Earth by Multi-Spacecraft Observations: IMP 8, WIND and INTERBALL-1

    NASA Technical Reports Server (NTRS)

    Paularena, Karolen I.; Richardson, John D.; Zastenker, Georgy N.

    2002-01-01

    The foundation of this Project is the use of the opportunity available during the ISTP (International Solar-Terrestrial Physics) era to compare solar wind measurements obtained simultaneously by three spacecraft - IMP 8, WIND and INTERBALL-1 - at widely separated points. Using these data allows us to study three important topics: (1) the size and dynamics of near-Earth mid-scale (with dimension about 1-10 million km) and small-scale (with dimension about 10-100 thousand km) solar wind structures; (2) the reliability of the common assumption that solar wind conditions at the upstream Lagrangian (L1) point accurately predict the conditions affecting Earth's magnetosphere; (3) modification of the solar wind plasma and magnetic field in the regions near the Earth's magnetosphere, the foreshock and the magnetosheath. Our Project was dedicated to these problems. Our research has made substantial contributions to the field and has led others to undertake similar work.

  2. Parametrization of semiempirical models against ab initio crystal data: evaluation of lattice energies of nitrate salts.

    PubMed

    Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav

    2005-09-01

    A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated on the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point charge estimates of E(elec) computed on available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g from point charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point charge approximation.

  3. COSMOS-e'-soft Higgsotic attractors

    NASA Astrophysics Data System (ADS)

    Choudhury, Sayantan

    2017-07-01

    In this work, we have developed an elegant algorithm to study the cosmological consequences of a huge class of quantum field theories (e.g. superstring theory, supergravity, extra dimensional theory, modified gravity, etc.), which are equivalently described by soft attractors in the effective field theory framework. In this description we have restricted our analysis to two scalar fields - dilaton and Higgsotic fields minimally coupled with Einstein gravity, which can be generalized for any arbitrary number of scalar field contents with generalized non-canonical and non-minimal interactions. We have explicitly used R^2 gravity, from which we have studied the attractor and non-attractor phases by exactly computing two-point, three-point and four-point correlation functions from scalar fluctuations using the In-In (Schwinger-Keldysh) and the δN formalisms. We have also presented theoretical bounds on the amplitude, tilt and running of the primordial power spectrum, and on various shapes (equilateral, squeezed, folded kite or counter-collinear) of the amplitude as obtained from three- and four-point scalar functions, which are consistent with observed data. Also the results from two-point tensor fluctuations and the field excursion formula are explicitly presented for the attractor and non-attractor phases. Further, reheating constraints, the scale-dependent behavior of the couplings and the dynamical solution for the dilaton and Higgsotic fields are also presented. New sets of consistency relations between two-, three- and four-point observables are also presented, which show significant deviations from canonical slow-roll models. Additionally, three possible theoretical proposals have been presented to overcome the tachyonic instability at the time of late-time acceleration. Finally, we have also provided the bulk interpretation of the three- and four-point scalar correlation functions for completeness.

  4. Dynamics of quantum correlation and coherence for two atoms coupled with a bath of fluctuating massless scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhiming, E-mail: 465609785@qq.com; Situ, Haozhen, E-mail: situhaozhen@gmail.com

    In this article, the dynamics of quantum correlation and coherence for two atoms interacting with a bath of a fluctuating massless scalar field in the Minkowski vacuum is investigated. We first derive the master equation that describes the system evolution with an initial Bell-diagonal state. Then we discuss the system evolution for three cases of different initial states: a non-zero-correlation separable state, a maximally entangled state and a zero-correlation state. For the non-zero-correlation initial separable state, quantum correlation and coherence can be protected from vacuum fluctuations during long time evolution when the separation between the two atoms is relatively small. For the maximally entangled initial state, quantum correlation and coherence overall decrease with evolution time. However, for the zero-correlation initial state, quantum correlation and coherence are first generated and then drop with evolution time; when the separation is sufficiently small, they can survive the vacuum fluctuations. For all three cases, quantum correlation and coherence first undergo a decline and then fluctuate to relatively stable values with increasing distance between the two atoms. In particular, for the zero-correlation initial state, quantum correlation and coherence exhibit periodic revivals at fixed zero points, and the revival amplitude declines gradually with increasing separation of the two atoms.

  5. Interplay of Structure and Dynamics in Biomaterials

    NASA Astrophysics Data System (ADS)

    Vodnala, Preeti

    Study of structure and dynamic behavior is essential to understand molecular motions in biological systems. In this work, two biomaterials were studied to address membrane properties and protein diffusion. For the first project, we studied the structure of liposomes, artificial vesicles that are used for drug encapsulation and administration of pharmaceuticals or cellular nutrients. Small-angle x-ray scattering (SAXS) was used to determine the structural properties of different liposomes composed of an egg-PC and cholesterol bilayer. We examined the location of cholesterol by labelling the cholesterol with bromine and revealed that cholesterol is located on one side of the leaflet, adjusting itself to the curvature of the liposome. In the second project, we studied the dynamics of concentrated suspensions of alpha crystallin, one of the most abundant proteins in the human eye lens, using X-ray photon correlation spectroscopy (XPCS). An improved understanding of the dynamics could point the way towards treatments for presbyopia and cataract. The dynamics were measured at a volume fraction close to the critical volume fraction for the glass transition, where the intermediate scattering function, ƒ(q,T), could be well fitted using a double exponential decay. The measured relaxation is in reasonable agreement with published molecular dynamics simulations for the relaxation times of hard-sphere colloids.

  6. Dynamic and static structure studies of colloidal suspensions with XPCS, SAXS and XNFS

    NASA Astrophysics Data System (ADS)

    Lu, Xinhui

    In the first project, I studied the onset of structural arrest and glass formation in a suspension of silica nanoparticles in a water-lutidine binary mixture near its consolute point using X-ray Photon Correlation Spectroscopy (XPCS) and Small Angle X-ray Scattering (SAXS). I obtained the temperature evolution of the static and dynamic structure, revealing that glass transitions occur both on cooling and on heating, and an unusual logarithmic relaxation within the intermediate liquid between the two glasses, as predicted by mode-coupling theory. In another project, I implemented and exploited the recently introduced, coherence-based technique of X-ray Near-Field Speckle (XNFS) to characterize the structure and dynamics of micrometer-sized particles. In XNFS, the measured speckles originate from the interference between the incident and scattered beams, and enable truly ultra-small-angle x-ray scattering measurements with a simple setup. We built a micrometer-resolution XNFS detector with a high numerical aperture microscope objective and demonstrated its capability of studying static structures and dynamics at longer length scales than traditional far-field x-ray techniques by measuring dilute silica and polystyrene samples. We also discussed the limitations of this technique.

  7. Forensic facial reconstruction: Nasal projection in Brazilian adults.

    PubMed

    Tedeschi-Oliveira, Silvia Virginia; Beaini, Thiago Leite; Melani, Rodolfo Francisco Haltenhoff

    2016-09-01

    The nose has a marked cognitive influence on facial image; however, it loses its shape during cadaveric decomposition. The known methods of estimating nasal projection using Facial Reconstruction are lacking in practicality and reproducibility. We attempted to relate the points Rhinion, Pronasale and Prosthion by studying the angle formed by straight lines that connect them. Two examiners measured this angle with the help of analysis and image-processing software, Image J, directly from cephalometric radiographs. The sample consisted of 300 males, aged between 24 and 77 years, and 300 females, aged 24 to 69 years. The proposed angle ranged from 80° to 100° in both sexes and all ages. It was considered possible to use a 90° angle from projections of the Rhinion and Prosthion points in order to determine the Pronasale position, as well as to estimate the nasal projection of Brazilian adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Ising tricriticality in the extended Hubbard model with bond dimerization

    NASA Astrophysics Data System (ADS)

    Fehske, Holger; Ejima, Satoshi; Lange, Florian; Essler, Fabian H. L.

    We explore the quantum phase transition between Peierls and charge-density-wave insulating states in the one-dimensional, half-filled, extended Hubbard model with explicit bond dimerization. We show that the critical line of the continuous Ising transition terminates at a tricritical point, belonging to the universality class of the tricritical Ising model with central charge c=7/10. Above this point, the quantum phase transition becomes first order. Employing a numerical matrix-product-state based (infinite) density-matrix renormalization group method we determine the ground-state phase diagram, the spin and two-particle charge excitations gaps, and the entanglement properties of the model with high precision. Performing a bosonization analysis we can derive a field description of the transition region in terms of a triple sine-Gordon model. This allows us to derive field theory predictions for the power-law (exponential) decay of the density-density (spin-spin) and bond-order-wave correlation functions, which are found to be in excellent agreement with our numerical results. This work was supported by Deutsche Forschungsgemeinschaft (Germany), SFB 652, project B5, and by the EPSRC under Grant No. EP/N01930X/1 (FHLE).

  9. Quantitative Computerized Two-Point Correlation Analysis of Lung CT Scans Correlates With Pulmonary Function in Pulmonary Sarcoidosis

    PubMed Central

    Erdal, Barbaros Selnur; Yildiz, Vedat; King, Mark A.; Patterson, Andrew T.; Knopp, Michael V.; Clymer, Bradley D.

    2012-01-01

    Background: Chest CT scans are commonly used to clinically assess disease severity in patients presenting with pulmonary sarcoidosis. Despite their ability to reliably detect subtle changes in lung disease, the utility of chest CT scans for guiding therapy is limited by the fact that image interpretation by radiologists is qualitative and highly variable. We sought to create a computerized CT image analysis tool that would provide quantitative and clinically relevant information. Methods: We established that a two-point correlation analysis approach reduced the background signal attendant to normal lung structures, such as blood vessels, airways, and lymphatics while highlighting diseased tissue. This approach was applied to multiple lung fields to generate an overall lung texture score (LTS) representing the quantity of diseased lung parenchyma. Using deidentified lung CT scan and pulmonary function test (PFT) data from The Ohio State University Medical Center’s Information Warehouse, we analyzed 71 consecutive CT scans from patients with sarcoidosis for whom simultaneous matching PFTs were available to determine whether the LTS correlated with standard PFT results. Results: We found a high correlation between LTS and FVC, total lung capacity, and diffusing capacity of the lung for carbon monoxide (P < .0001 for all comparisons). Moreover, LTS was equivalent to PFTs for the detection of active lung disease. The image analysis protocol was conducted quickly (< 1 min per study) on a standard laptop computer connected to a publicly available National Institutes of Health ImageJ toolkit. Conclusions: The two-point image analysis tool is highly practical and appears to reliably assess lung disease severity. We predict that this tool will be useful for clinical and research applications. PMID:22628487
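
    A generic two-point (auto)correlation of an image patch can be computed through the Wiener-Khinchin relation; the sketch below illustrates that operation on a synthetic array and is not the authors' specific lung-texture-score pipeline.

      # Sketch: radially averaged two-point autocorrelation of an image patch via FFT.
      import numpy as np

      rng = np.random.default_rng(4)
      patch = rng.normal(size=(128, 128))                # placeholder for a lung-field region

      f = patch - patch.mean()
      power = np.abs(np.fft.fft2(f)) ** 2                # Wiener-Khinchin: |FFT|^2 -> autocorrelation
      acorr = np.fft.ifft2(power).real / f.size
      acorr = np.fft.fftshift(acorr) / f.var()           # normalize so the zero-lag value is 1

      # Radial average gives the correlation as a function of two-point separation r (pixels).
      y, x = np.indices(acorr.shape)
      r = np.hypot(x - acorr.shape[1] // 2, y - acorr.shape[0] // 2).astype(int)
      radial = np.bincount(r.ravel(), weights=acorr.ravel()) / np.bincount(r.ravel())
      print(np.round(radial[:5], 3))                     # correlation at separations 0..4 pixels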

  10. Strong correlations in gravity and biophysics

    NASA Astrophysics Data System (ADS)

    Krotov, Dmitry

    The unifying theme of this dissertation is the use of correlations. In the first part (chapter 2), we investigate correlations in quantum field theories in de Sitter space. In the second part (chapters 3,4,5), we use correlations to investigate a theoretical proposal that real (observed in nature) transcriptional networks of biological organisms are operating at a critical point in their phase diagram. In chapter 2 we study the infrared dependence of correlators in various external backgrounds. Using the Schwinger-Keldysh formalism we calculate loop corrections to the correlators in the case of the Poincare patch and the complete de Sitter space. In the case of the Poincare patch, the loop correction modifies the behavior of the correlator at large distances. In the case of the complete de Sitter space, the loop correction has a strong dependence on the infrared cutoff in the past. It grows linearly with time, suggesting that at some point the correlations become strong and break the symmetry of the classical background. In chapter 3 we derive the signatures of critical behavior in a model organism, the embryo of Drosophila melanogaster. They are: strong correlations in the fluctuations of different genes, a slowing of dynamics, long range correlations in space, and departures from a Gaussian distribution of these fluctuations. We argue that these signatures are observed experimentally. In chapter 4 we construct an effective theory for the zero mode in this system. This theory is different from the standard Landau-Ginsburg description. It contains gauge fields (the result of the broken translational symmetry inside the cell), which produce observable contributions to the two-point function of the order parameter. We show that the behavior of the two-point function for the network of N genes is described by the action of a relativistic particle moving on the surface of the N - 1 dimensional sphere. We derive a theoretical bound on the decay of the correlations and compare it with experimental data. How difficult is it to tune a network to criticality? In chapter 5 we construct the space of all possible networks within a simple thermodynamic model of biological enhancers. We demonstrate that there is a reasonable number of models within this framework that accurately capture the mean expression profiles of the gap genes that are observed experimentally.

  11. Prevalence and Correlates of Sipping Alcohol in a Prospective Middle School Sample

    PubMed Central

    Jackson, Kristina M.; Colby, Suzanne M.; Barnett, Nancy P.; Abar, Caitlin C.

    2015-01-01

    Research documents an association between early use of alcohol and adverse outcomes. Most studies on drinking initiation exclude sipping or confound sips with consumption of a full drink. Yet, even a few sips of alcohol can constitute a meaningful experience for naïve drinkers. Prior research with this project indicated that sipping prior to middle school predicted subsequent adverse outcomes (at high-school entry), even controlling for child externalizing and sensation seeking and parent alcohol use. The present study extends our prior work by examining the correlates of early sipping and sipping onset. The sample was comprised of 1,023 6th, 7th, and 8th graders (52% female; 24% non-White, 12% Hispanic). Participants completed web-based surveys on five occasions over the course of two years. The prevalence of sipping at Wave 1 was 37%, with 29% of never-sippers initiating sipping within two years. Sipping was associated with stronger alcohol-related cognitions and low school engagement as well as contextual influences in the peer, sibling, and parent domains. Sipping onset among never-sippers was prospectively predicted by sensation seeking and problem behavior as well as parental and sibling influences. Importantly, mere availability of alcohol was a strong correlate both concurrently and prospectively. Further analyses demonstrated that youth who sipped alcohol with parental permission had a lower profile of risk and healthier relationships with parents as compared to youth who reported unsanctioned sipping. Findings point to the importance of considering fine-grained early drinking behavior and call for further attention to sipping in research on initiation of alcohol use. PMID:25938631

  12. Charged fixed point in the Ginzburg-Landau superconductor and the role of the Ginzburg parameter κ

    NASA Astrophysics Data System (ADS)

    Kleinert, Hagen; Nogueira, Flavio S.

    2003-02-01

    We present a semi-perturbative approach which yields an infrared-stable fixed point in the Ginzburg-Landau model for N=2, where N/2 is the number of complex components. The calculations are done in d=3 dimensions and below Tc, where the renormalization group functions can be expressed directly as functions of the Ginzburg parameter κ, which is the ratio between the two fundamental scales of the problem, the penetration depth λ and the correlation length ξ. We find a charged fixed point for κ > 1/√2, that is, in the type II regime, where Δκ ≡ κ - 1/√2 is shown to be a natural expansion parameter. This parameter controls a momentum space instability in the two-point correlation function of the order field. This instability appears at a non-zero wave-vector p0 whose magnitude scales like ∼ Δκ^β¯, with a critical exponent β¯ = 1/2 in the one-loop approximation, a behavior known from magnetic systems with a Lifshitz point in the phase diagram. This momentum space instability is argued to be the origin of the negative η-exponent of the order field.

  13. Competition of mesoscales and crossover to theta-point tricriticality in near-critical polymer solutions.

    PubMed

    Anisimov, M A; Kostko, A F; Sengers, J V; Yudin, I K

    2005-10-22

    The approach to asymptotic critical behavior in polymer solutions is governed by a competition between the correlation length of critical fluctuations diverging at the critical point of phase separation and an additional mesoscopic length scale, the radius of gyration. In this paper we present a theory for crossover between two universal regimes: a regime with Ising (fluctuation-induced) asymptotic critical behavior, where the correlation length prevails, and a mean-field tricritical regime with theta-point behavior controlled by the mesoscopic polymer chain. The theory yields a universal scaled description of existing experimental phase-equilibria data and is in excellent agreement with our light-scattering experiments on polystyrene solutions in cyclohexane with polymer molecular weights ranging from 2 × 10^5 up to 11.4 × 10^6. The experiments demonstrate unambiguously that crossover to theta-point tricriticality is controlled by a competition of the two mesoscales. The critical amplitudes deduced from our experiments depend on the polymer molecular weight as predicted by de Gennes [Phys. Lett. 26A, 313 (1968)]. Experimental evidence for the presence of logarithmic corrections to mean-field tricritical theta-point behavior in the molecular-weight dependence of the critical parameters is also presented.

  14. Measuring contemporary crustal motions; NASA’s Crustal Dynamics Project

    USGS Publications Warehouse

    Frey, H. V.; Bosworth, J. M.

    1988-01-01

    In this article we describe briefly the two space geodetic techniques and how they are used by the Crustal Dynamics Project, show some of the very exciting results that have emerged at the halfway point in the project's life, describe the availability and utilization of the data being collected, and consider what the future may hold when measurement accuracies eventually exceed even those now available and when other international groups become more heavily involved.   

  15. Asymptotic behaviour of two-point functions in multi-species models

    NASA Astrophysics Data System (ADS)

    Kozlowski, Karol K.; Ragoucy, Eric

    2016-05-01

    We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU (3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  16. Why Psychology Cannot be an Empirical Science.

    PubMed

    Smedslund, Jan

    2016-06-01

    The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project.

  17. Experimental results for correlation-based wavefront sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A; Palmer, D W; LaFortune, K N

    2005-07-01

    Correlation wave-front sensing can improve Adaptive Optics (AO) system performance in two key areas. For point-source-based AO systems, Correlation is more accurate, more robust to changing conditions, and provides lower noise than a centroiding algorithm. Experimental results from the Lick AO system and the SSHCL laser AO system confirm this. For remote imaging, Correlation enables the use of extended objects for wave-front sensing. Results from short horizontal-path experiments will show algorithm properties and requirements.
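
    The core operation of a correlation wave-front sensor is estimating the shift of each subaperture image relative to a reference from the peak of their cross-correlation. A minimal sketch on synthetic images (whole-pixel peak only, no sub-pixel interpolation) follows; it is not the Lick or SSHCL implementation.

      # Sketch: image-shift estimate from the peak of an FFT-based cross-correlation.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(5)
      ref = gaussian_filter(rng.normal(size=(64, 64)), 2)      # reference extended-scene image
      img = np.roll(ref, (3, -2), axis=(0, 1))                 # same scene, cyclically displaced by (3, -2)

      xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
      peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
      # map the peak index to a signed shift (the FFT correlation wraps around)
      est = [p if p < s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
      print('estimated shift (rows, cols):', est)              # expect about [3, -2]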

  18. Non-invasive evaluation of stable renal allograft function using point shear-wave elastography.

    PubMed

    Kim, Bom Jun; Kim, Chan Kyo; Park, Jung Jae

    2018-01-01

    To investigate the feasibility of point shear-wave elastography (SWE) in evaluating patients with stable renal allograft function who underwent protocol biopsies. 95 patients with stable renal allograft function who underwent ultrasound-guided biopsies at predefined time points (10 days or 1 year after transplantation) were enrolled. Ultrasound and point SWE examinations were performed immediately before protocol biopsies. Patients were categorized into two groups: subclinical rejection (SCR) and non-SCR. Tissue elasticity (kPa) on SWE was measured in the cortex of all renal allografts. SCR was pathologically confirmed in 34 patients. Tissue elasticity of the SCR group (31.0 kPa) was significantly greater than that of the non-SCR group (24.5 kPa) (p = 0.016), while the resistive index did not show a significant difference between the two groups (p = 0.112). Tissue elasticity in renal allografts demonstrated a significant, moderate negative correlation with estimated glomerular filtration rate (correlation coefficient = -0.604, p < 0.001). Tissue elasticity was not an independent factor for SCR prediction on multivariate analysis. As a non-invasive tool, point SWE appears feasible in distinguishing between patients with and without SCR in renal allografts with stable function. Moreover, it may demonstrate the functional state of renal allografts. Advances in knowledge: On point SWE, SCR has greater tissue elasticity than non-SCR.

  19. Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data

    NASA Astrophysics Data System (ADS)

    Lurcock, P. C.; Channell, J. E.; Lee, D.

    2012-12-01

    The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
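
    A minimal dynamic-time-warping sketch of the "cost matrix / least-cost path" idea mentioned above, applied to two short synthetic series; it is not the Match or simulated-annealing code used for the actual RPI and oxygen isotope records.

      # Sketch: dynamic time warping via a cumulative cost matrix and a backtracked least-cost path.
      import numpy as np

      def dtw_path(a, b):
          n, m = len(a), len(b)
          cost = np.abs(a[:, None] - b[None, :])                  # pointwise distance matrix
          acc = np.full((n + 1, m + 1), np.inf)                   # cumulative (least) cost matrix
          acc[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
          # backtrack the least-cost path, which maps indices of one series onto the other
          path, i, j = [], n, m
          while (i, j) != (1, 1):
              path.append((i - 1, j - 1))
              steps = {(i - 1, j): acc[i - 1, j], (i, j - 1): acc[i, j - 1], (i - 1, j - 1): acc[i - 1, j - 1]}
              i, j = min(steps, key=steps.get)
          path.append((0, 0))
          return acc[n, m], path[::-1]

      x = np.sin(np.linspace(0, 3 * np.pi, 40))                   # synthetic "record" 1
      y = np.sin(np.linspace(0, 3 * np.pi, 60) - 0.4)             # stretched, shifted "record" 2
      total_cost, path = dtw_path(x, y)
      print(total_cost, path[:5])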

  20. Recursive Techniques for Computing Gluon Scattering in Anti-de-Sitter Space

    NASA Astrophysics Data System (ADS)

    Shyaka, Claude; Kharel, Savan

    2016-03-01

    The anti-de Sitter/conformal field theory correspondence is a relationship between two kinds of physical theories. On one side of the duality is a special type of quantum (conformal) field theory known as Yang-Mills theory. These quantum field theories are known to be equivalent to theories of gravity in Anti-de Sitter (AdS) space. The physical observables in the theory are the correlation functions that live on the boundary of AdS space. In general, correlation functions are computed in configuration space, and the expressions are extremely complicated. Using a momentum basis and recursive techniques developed by Raju, we extend tree-level calculations to four- and five-point correlation functions in Yang-Mills theory in Anti-de Sitter space. In addition, we show that for certain external helicities, the correlation functions have a simple analytic structure. Finally, we discuss how one can generalize these results to n-point functions. Hendrix College Odyssey Grant.

  1. 7 CFR 1980.451 - Filing and processing applications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... categories, and their point scores, are: (A) Project will contribute to the overall economic stability of the... points). (B) Project will contribute to the overall economic stability of the project area and will...'s economy (20 points). (C) Project will contribute to the overall economic stability of the project...

  2. A novel infrared small moving target detection method based on tracking interest points under complicated background

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying

    2014-07-01

    Infrared moving target detection is an important part of infrared technology. We introduce a novel infrared small moving target detection method based on tracking interest points under complicated backgrounds. Firstly, Difference of Gaussians (DOG) filters are used to detect a group of interest points (including the moving targets). Secondly, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points for several frames, and then the correlations between interest points in the first frame and the last frame are obtained. Finally, a new clustering method, named R-means, is proposed to divide these interest points into two groups according to the correlations: one group contains target points and the other background points. In the experiments, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method with that of five other sophisticated methods. The results show that the proposed method provides better discrimination between targets and clutter and has a lower false alarm rate than the existing moving target detection methods.
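
    A sketch of the first stage above (Difference-of-Gaussians interest-point detection) on a synthetic frame, assuming scipy; the Gaussian scales and threshold are illustrative, and the HVS-inspired tracking and R-means stages are not reproduced here.

      # Sketch: Difference-of-Gaussians interest-point detection on a single infrared frame.
      import numpy as np
      from scipy.ndimage import gaussian_filter, maximum_filter

      rng = np.random.default_rng(6)
      frame = rng.normal(0.0, 0.05, size=(200, 200))              # cluttered background (synthetic)
      frame[120, 80] += 1.0                                       # a dim point-like target
      frame = gaussian_filter(frame, 1.0)                         # sensor blur

      dog = gaussian_filter(frame, 1.0) - gaussian_filter(frame, 3.0)   # band-pass DoG response
      local_max = (dog == maximum_filter(dog, size=7))
      interest = np.argwhere(local_max & (dog > 3.0 * dog.std()))       # thresholded local maxima
      print(interest)                                             # should include a point near (120, 80)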

  3. Patterns of Student Growth in Reasoning about Multivariate Correlational Problems.

    ERIC Educational Resources Information Center

    Ross, John A.; Cousins, J. Bradley

    Previous studies of the development of correlational reasoning have focused on the interpretation of relatively simple data sets contained in 2 X 2 tables. In contrast, this study examined age trends in subjects' responses to problems involving more than two continuous variables. The research is part of a multi-year project to conceptualize…

  4. Satellite Power Systems (SPS) concept definition study. Volume 2: SPS system requirements

    NASA Technical Reports Server (NTRS)

    Hanley, G.

    1978-01-01

    Collected data reflected the level of definition resulting from the evaluation of a broad spectrum of SPS (satellite power systems) concepts. As the various concepts matured, these requirements were updated to reflect the requirements identified for the projected satellite system/subsystem point design(s). The study established several candidate concepts which were presented to provide a basis for the selection of one or two approaches that would be given a more comprehensive examination. The two selected concepts were expanded and constitute the selected system point designs. The identified system/subsystem requirements were emphasized and information on the selected point design was provided.

  5. Interim Reflections on the Corporate University and SME Academy Business Development Innovation and Its Diffusion

    ERIC Educational Resources Information Center

    Dealtry, Richard

    2008-01-01

    Purpose: The purpose of this paper is to reflect on and inform about learning points from ECUANET, a two-year duration best practice action research and transnational networking project as it approaches its final stage. Design/methodology/approach: The paper explicates the key positive and obfuscating dynamics that the project team have had to,…

  6. Comment on “Band gaps structure and semi-Dirac point of two-dimensional function photonic crystals” by Si-Qi Zhang et al.

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Feng

    2018-01-01

    Not Available Project supported by the Special Grade of the Financial Support from the China Postdoctoral Science Foundation (Grant No. 2016T90455), the China Postdoctoral Science Foundation (Grant No. 2015M581790), and the Chinese Jiangsu Planned Projects for Postdoctoral Research Funds, China (Grant No. 1501016A).

  7. The Hunters Point-Bayview SEED Project: A Diagnostic Review of Reading Achievement in the First Three Grades.

    ERIC Educational Resources Information Center

    Counelis, James Steve

    A diagnostic review of reading achievement in the first three grades of the South East Education Development (SEED) project is presented. Comparisons are made with the 1969-1970 SEED data, which is considered baseline. The findings indicated that: (1) no significant difference existed in the pooled attendance for each grade between two successive…

  8. Meta-heuristic algorithm to solve two-sided assembly line balancing problems

    NASA Astrophysics Data System (ADS)

    Wirawan, A. D.; Maruf, A.

    2016-02-01

    A two-sided assembly line is a set of sequential workstations where task operations can be performed on both sides of the line. This type of line is commonly used for the assembly of large-sized products: cars, buses, and trucks. This paper proposes a decoding algorithm with Teaching-Learning-Based Optimization (TLBO), a recently developed nature-inspired search method, to solve the two-sided assembly line balancing problem (TALBP). The algorithm aims to minimize the number of mated workstations for a given cycle time without violating the synchronization constraints. The correlation between the input parameters and the emergence point of the objective function value is tested using scenarios generated by design of experiments. A two-sided assembly line operated by a multinational manufacturing company in Indonesia is considered as the case study of this paper. The results of the proposed algorithm show a reduction in the number of workstations and indicate a negative correlation between the emergence point of the objective function value and the population size used.
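
    A compact sketch of the plain continuous TLBO search (teacher phase followed by learner phase, as in Rao et al.'s original formulation) on a generic test function; the decoding of solutions into two-sided assembly line task assignments, which is the paper's actual contribution, is not reproduced here.

      # Sketch: basic Teaching-Learning-Based Optimization (TLBO) minimizing a test function.
      import numpy as np

      def objective(x):                 # stand-in objective; TALBP would score a decoded line design
          return np.sum(x ** 2, axis=-1)

      rng = np.random.default_rng(7)
      pop = rng.uniform(-5.0, 5.0, size=(30, 10))                 # 30 learners, 10 design variables

      for _ in range(200):
          # teacher phase: move learners toward the best solution relative to the class mean
          fit = objective(pop)
          teacher = pop[np.argmin(fit)]
          tf = rng.integers(1, 3, size=(len(pop), 1))             # teaching factor, 1 or 2
          cand = pop + rng.random(pop.shape) * (teacher - tf * pop.mean(axis=0))
          better = objective(cand) < fit
          pop[better] = cand[better]

          # learner phase: each learner moves relative to a randomly chosen partner
          fit = objective(pop)
          partner = rng.permutation(len(pop))
          is_better = fit < fit[partner]                          # learner outperforms its partner
          step = np.where(is_better[:, None], pop - pop[partner], pop[partner] - pop)
          cand = pop + rng.random(pop.shape) * step
          better = objective(cand) < fit
          pop[better] = cand[better]

      print('best objective value found:', objective(pop).min())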

  9. Applications of multiscale change point detections to monthly stream flow and rainfall in Xijiang River in southern China, part I: correlation and variance

    NASA Astrophysics Data System (ADS)

    Zhu, Yuxiang; Jiang, Jianmin; Huang, Changxing; Chen, Yongqin David; Zhang, Qiang

    2018-04-01

    This article, as part I, introduces three algorithms and applies them to both the monthly stream flow and rainfall series in Xijiang River, southern China. The three algorithms are (1) normalization of the probability distribution, (2) a scanning U test for change points in the correlation between two time series, and (3) a scanning F-test for change points in variances. The normalization algorithm adopts the quantile method to transform data from a non-normal into the normal probability distribution. The scanning U test and F-test have three common features: they graft the classical statistics onto the wavelet algorithm, add corrections for independence to each statistical criterion at a given confidence level, and provide nearly objective and automatic detection across multiple time scales. In addition, coherency analyses between the two series are also carried out for changes in variance. The application results show that the changes of the monthly discharge are still controlled by natural precipitation variations in Xijiang's fluvial system. Human activities have disturbed the ecological balance, perhaps to a certain extent and over shorter spells, but have not violated the natural relationships of correlation and variance changes so far.
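    As an illustration of the general idea of scanning for a change point in the correlation between two series (not the authors' wavelet-based U test), the following sketch compares Fisher z-transformed correlations before and after every candidate split point; the synthetic series and all names are assumptions for the example.

      import numpy as np

      def fisher_z(r):
          """Fisher z-transform of a correlation coefficient."""
          return 0.5 * np.log((1.0 + r) / (1.0 - r))

      def correlation_change_statistic(x, y, split):
          """Z statistic comparing the correlation of (x, y) before and after a candidate change point."""
          r1 = np.corrcoef(x[:split], y[:split])[0, 1]
          r2 = np.corrcoef(x[split:], y[split:])[0, 1]
          n1, n2 = split, len(x) - split
          se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
          return (fisher_z(r1) - fisher_z(r2)) / se

      # Scan all candidate split points and keep the largest |Z|.
      rng = np.random.default_rng(0)
      x = rng.normal(size=600)
      y = np.where(np.arange(600) < 300, 0.8 * x, 0.1 * x) + 0.5 * rng.normal(size=600)
      splits = np.arange(30, 570)
      z = np.array([correlation_change_statistic(x, y, s) for s in splits])
      print("most likely change point:", splits[np.argmax(np.abs(z))], " |Z| =", round(np.abs(z).max(), 2))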

  10. Applications of a Sequence of Points in Teaching Linear Algebra, Numerical Methods and Discrete Mathematics

    ERIC Educational Resources Information Center

    Shi, Yixun

    2009-01-01

    Based on a sequence of points and a particular linear transformation generalized from this sequence, two recent papers (E. Mauch and Y. Shi, "Using a sequence of number pairs as an example in teaching mathematics". Math. Comput. Educ., 39 (2005), pp. 198-205; Y. Shi, "Case study projects for college mathematics courses based on a particular…

  11. Comparisons of chewing rhythm, craniomandibular morphology, body mass and height between mothers and their biological daughters.

    PubMed

    Cho, Catherine; Louie, Ke'ale; Maawadh, Ahmed; Gerstner, Geoffrey E

    2015-11-01

    To study and compare the relationships between mean chewing cycle duration, selected cephalometric variables representing mandibular length, face height, etc., measured in women and in their teenage or young-adult biological daughters. Daughters were recruited from local high schools and the University of Michigan School of Dentistry. Selection criteria included healthy females with full dentition, 1st molar occlusion, no active orthodontics, no medical conditions nor medication use that could interfere with normal masticatory motor function. Mothers had to be biologically related to their daughters. All data were obtained in the School of Dentistry. Measurements obtained from lateral cephalograms included: two "jaw length" measures, condylion-gnathion and gonion-gnathion, and four measures of facial profile including lower anterior face height, and angles sella-nasion-A point (SNA), sella-nasion-B point (SNB) and A point-nasion-B point (ANB). Mean cycle duration was calculated from 60 continuous chewing cycles, where a cycle was defined as the time between two successive maximum jaw openings in the vertical dimension. Other variables included subject height and weight. Linear and logistic regression analyses were used to evaluate the mother-daughter relationships and to study the relationships between cephalometric variables and chewing cycle duration. Height, weight, Co-Gn and Go-Gn were significantly correlated between mother-daughter pairs; however, mean cycle duration was not (r(2)=0.015). Mean cycle duration was positively correlated with ANB and height in mothers, but negatively correlated with Co-Gn in daughters. Chewing rate is not correlated between mothers and daughters in humans. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Thermal comfort in naturally ventilated buildings in Maceio, Brazil

    NASA Astrophysics Data System (ADS)

    Djamila, Harimi

    2017-11-01

    This article presents the results of a thermal comfort survey carried out in classrooms over two different seasons in Maceio, Brazil. The secondary data were collected from a thermal comfort field study conducted in naturally ventilated classrooms. Objective and subjective parameters were explored to evaluate thermal comfort conditions. The potential effect of air movement on subjects' votes under neutrality was evaluated. Overall, the indoor climate of the surveyed location was classified as warm and humid. Conflicting results were found when analyzing the effect of air movement on subjects' votes. The mean air temperature for subjects feeling hot was found to be lower than that for those feeling warm. A reasonable approach to reconcile these two unexpected results was suggested. A correlation matrix between selected thermal comfort variables was developed. Globe temperature recorded the highest correlation with subjects' responses on the ASHRAE seven-point scale. The correlation was significant at the 0.01 level. On the other hand, the correlation between air movement and subjects' responses on the ASHRAE seven-point scale was weak but significant. Further field studies on the current topic were recommended.

  13. Two Point Space-Time Correlation of Density Fluctuations Measured in High Velocity Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2006-01-01

    Two-point space-time correlations of air density fluctuations in unheated, fully-expanded free jets at Mach numbers M(sub j) = 0.95, 1.4, and 1.8 were measured using a Rayleigh scattering based diagnostic technique. The molecular scattered light from two small probe volumes of 1.03 mm length was measured for a completely non-intrusive means of determining the turbulent density fluctuations. The time series of density fluctuations were analyzed to estimate the integral length scale L in a moving frame of reference and the convective Mach number M(sub c) at different narrow Strouhal frequency (St) bands. It was observed that M(sub c) and the normalized moving frame length scale L*St/D, where D is the jet diameter, increased with Strouhal frequency before leveling off at the highest resolved frequency. Significant differences were observed between data obtained from the lip shear layer and the centerline of the jet. The wave number frequency transform of the correlation data demonstrated progressive increase in the radiative part of turbulence fluctuations with increasing jet Mach number.
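    The core measurement here, a normalized two-point space-time correlation between signals from two separated probes and the convection speed inferred from the delay of its peak, can be sketched as below; the sampling rate, probe separation and synthetic signals are assumptions for illustration only.

      import numpy as np

      def space_time_correlation(a, b, max_lag):
          """Normalized cross-correlation R_ab(k) = <a(t) b(t+k)> for lags -max_lag..max_lag samples."""
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          lags = np.arange(-max_lag, max_lag + 1)
          r = np.array([np.mean(a[max(0, -k):len(a) - max(0, k)] *
                                b[max(0, k):len(b) - max(0, -k)]) for k in lags])
          return lags, r

      # Synthetic example: probe 2 sees a delayed, noisy copy of probe 1.
      rng = np.random.default_rng(1)
      fs, delay, separation = 10_000.0, 25, 1.0e-3   # sample rate (Hz), delay (samples), separation (m); assumed
      s = rng.normal(size=20_000)
      probe1 = s + 0.3 * rng.normal(size=s.size)
      probe2 = np.roll(s, delay) + 0.3 * rng.normal(size=s.size)
      lags, r = space_time_correlation(probe1, probe2, 100)
      tau = lags[np.argmax(r)] / fs
      print("peak delay: %.2e s, implied convection speed: %.2f m/s" % (tau, separation / tau))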

  14. Microstructural Quantification, Property Prediction, and Stochastic Reconstruction of Heterogeneous Materials Using Limited X-Ray Tomography Data

    NASA Astrophysics Data System (ADS)

    Li, Hechao

    An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation process before microstructural quantification can be conducted, which can be quite time consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information, in the form of spatial correlation functions, from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to integrate the complementary data into an effective optimization procedure, indicating its efficiency in using limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape also provides information about the complexity and convergence behavior of the reconstruction for a given microstructure and projection number.
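    A minimal sketch of the kind of two-point statistic involved, the autocorrelation S2(r) of a two-phase map computed via FFT with periodic boundaries assumed, is given below; it illustrates the general idea only and is not the thesis' specific probability-map procedure.

      import numpy as np

      def two_point_correlation(phase_map):
          """Radially averaged two-point function S2(r) of a 2D phase map via FFT autocorrelation.

          Periodic boundaries are assumed; S2(r) is the probability that two points a distance r
          apart both lie in the phase of interest."""
          ny, nx = phase_map.shape
          f = np.fft.fftn(phase_map)
          s2_map = np.fft.ifftn(f * np.conj(f)).real / phase_map.size
          y, x = np.indices((ny, nx))
          r = np.hypot(np.minimum(x, nx - x), np.minimum(y, ny - y)).astype(int)
          radial = np.bincount(r.ravel(), weights=s2_map.ravel()) / np.bincount(r.ravel())
          return radial[: nx // 2]

      # Example: an uncorrelated two-phase medium with 30% volume fraction.
      rng = np.random.default_rng(2)
      prob_map = (rng.random((256, 256)) < 0.3).astype(float)
      s2 = two_point_correlation(prob_map)
      print("S2(0) ~ volume fraction: %.3f   S2(large r) ~ fraction^2: %.3f" % (s2[0], s2[60]))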

  15. Instanton effects on CP-violating gluonic correlators

    NASA Astrophysics Data System (ADS)

    Mori, Shingo; Frison, Julien; Kitano, Ryuichiro; Matsufuru, Hideo; Yamada, Norikazu

    2018-03-01

    In order to better understand the role played by instantons behind nonperturbative dynamics, we investigate the instanton contributions to the gluonic two point correlation functions in the SU(2) YM theory. Pseudoscalar-scalar gluonic correlation functions are calculated on the lattice at various temperatures and compared with the instanton calculus. We discuss how the instanton effects emerge or disappear with temperature and try to provide the interpretation behind it.

  16. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
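    For reference, the real-space pair statistic discussed here can be estimated for any periodic point configuration along the lines of the sketch below, a direct pair-count estimator of g(r); the box size, binning and ideal-gas test data are assumptions, and the authors' coarse-grained analysis is not reproduced.

      import numpy as np

      def pair_correlation(points, box, r_max, n_bins=100):
          """Radial pair correlation g(r) for points in a cubic periodic box of side `box`."""
          n = len(points)
          rho = n / box**3
          edges = np.linspace(0.0, r_max, n_bins + 1)
          counts = np.zeros(n_bins)
          for i in range(n - 1):
              d = points[i + 1:] - points[i]
              d -= box * np.round(d / box)           # minimum-image convention
              r = np.linalg.norm(d, axis=1)
              counts += np.histogram(r[r < r_max], bins=edges)[0]
          shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
          g = 2.0 * counts / (n * rho * shell_vol)   # factor 2 because each pair is counted once
          centres = 0.5 * (edges[1:] + edges[:-1])
          return centres, g

      # Ideal-gas check: g(r) should fluctuate around 1.
      rng = np.random.default_rng(3)
      pts = rng.random((2000, 3)) * 10.0
      r, g = pair_correlation(pts, box=10.0, r_max=3.0)
      print("mean g(r) over 1 < r < 3:", round(g[r > 1.0].mean(), 2))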

  17. Northrop Grumman TR202 LOX/LH2 Deep Throttling Engine Technology Project Status

    NASA Technical Reports Server (NTRS)

    Gromski, Jason; Majamaki, Annik; Chianese, Silvio; Weinstock, Vladimir; Kim, Tony S.

    2010-01-01

    NASA's Propulsion and Cryogenic Advanced Development (PCAD) project is currently developing enabling propulsion technologies in support of future lander missions. To meet lander requirements, several technical challenges need to be overcome, one of which is the ability for the descent engine(s) to operate over a deep throttle range with cryogenic propellants. To address this need, PCAD has enlisted Northrop Grumman Aerospace Systems (NGAS) in a technology development effort associated with the TR202 engine. The TR202 is a LOX/LH2 expander cycle engine driven by independent turbopump assemblies and featuring a variable area pintle injector similar to the injector used on the TR200 Apollo Lunar Module Descent Engine (LMDE). Since the Apollo missions, NGAS has continued to mature deep throttling pintle injector technology. The TR202 program has completed two series of pintle injector testing. The first series of testing used ablative thrust chambers and demonstrated igniter operation as well as stable performance at discrete points throughout the designed 10:1 throttle range. The second series was conducted with calorimeter chambers and demonstrated injector performance at discrete points throughout the throttle range as well as chamber heat flow adequate to power an expander cycle design across the throttle range. This paper provides an overview of the TR202 program, describing the different phases and key milestones. It describes how test data was correlated to the engine conceptual design. The test data obtained has created a valuable database for deep throttling cryogenic pintle technology, a technology that is readily scalable in thrust level.

  18. 100-point scale evaluating job satisfaction and the results of the 12-item General Health Questionnaire in occupational workers.

    PubMed

    Kawada, Tomoyuki; Yamada, Natsuki

    2012-01-01

    Job satisfaction is an important factor in the occupational lives of workers. In this study, the relationship between one-dimensional scale of job satisfaction and psychological wellbeing was evaluated. A total of 1,742 workers (1,191 men and 551 women) participated. 100-point scale evaluating job satisfaction (0 [extremely dissatisfied] to 100 [extremely satisfied]) and the General Health Questionnaire, 12-item version (GHQ-12) evaluating psychological wellbeing were used. A multiple regression analysis was then used, controlling for gender and age. The change in the GHQ-12 and job satisfaction scores after a two-year interval was also evaluated. The mean age for the subjects was 42.2 years for the men and 36.2 years for the women. The GHQ-12 and job satisfaction scores were significantly correlated in each generation. The partial correlation coefficients between the changes in the two variables, controlling for age, were -0.395 for men and -0.435 for women (p< 0.001). A multiple regression analysis revealed that the 100-point job satisfaction score was associated with the GHQ-12 results (p< 0.001). The adjusted multiple correlation coefficient was 0.275. The 100-point scale, which is a simple and easy tool for evaluating job satisfaction, was significantly associated with psychological wellbeing as judged using the GHQ-12.

  19. Comparing global-scale topographic and climatic metrics to long-term erosion rates using ArcSwath, an efficient new ArcGIS tool for swath profile analysis

    NASA Astrophysics Data System (ADS)

    Blomqvist, Niclas; Whipp, David

    2016-04-01

    The topography of the Earth's surface is the result of the interaction of tectonics, erosion and climate. Thus, topography should contain a record of these processes that can be extracted by topographic analysis. The question considered in this study is whether the spatial variations in erosion that have sculpted the modern topography are representative of the long-term erosion rates in mountainous regions. We compare long-term erosion rates derived from low-temperature thermochronometry to erosional proxies calculated from topographic and climatic data analysis. The study has been performed on a global scale including six orogens: the Himalaya, Andes, Taiwan, Olympic Mountains, Southern Alps in New Zealand and European Alps. The data were analyzed using a new swath profile analysis tool for ArcGIS called ArcSwath (https://github.com/HUGG/ArcSwath) to determine the correlations between the long-term erosion rates and modern elevations, slope angles, relief in 2.5-km- and 5-km-diameter circles, erosion potential, normalized channel steepness index ksn, and annual rainfall. ArcSwath uses a Python script that has been incorporated into an ArcMap 10.2 add-in tool, extracting swath profiles in about ten seconds compared to earlier workflows that could take more than an hour. In ArcMap, UTM-projected point or raster files can be used for creating swath profiles. Point data are projected onto the swath, and the statistical parameters (minimum, mean and maximum of the values across the swath) are calculated for the raster data. Both can be immediately plotted using the Python matplotlib library, or plotted externally using the csv file produced by ArcSwath. When raster and point data are plotted together, it is easier to make comparisons and see correlations between the selected data. An unambiguous correlation between the topographic or climatic metrics and long-term erosion rates was not found. Fitting linear regression lines to the topographic/climatic metric data and the long-term erosion rates shows that 86 of 288 plots (30%) have "good" R2 values (> 0.35) and 135 of 288 (47%) have an "acceptable" R2 value (> 0.2). The "good" and "acceptable" thresholds were selected on the basis of visual fit to the regression line. The majority of the plots with a "good" correlation value have positive correlations, while 11/86 plots have negative slopes for the regression lines. Interestingly, two topographic profile shapes were clear in the swath profiles: concave-up (e.g., the central-western Himalaya and the northern Bolivian Andes) and concave-down or straight (e.g., the eastern Himalayas and the southern Bolivian Andes). On the orogen scale, the concave-up shape is often related to relatively high precipitation and erosion rates on the slopes of steep topography. The concave-down/straight profiles seem to occur in association with low rainfall and/or erosion rates. Though we cannot say with confidence, the lack of a clear correlation between long-term erosion rates and climate or topography may be due to the difference in their respective timescales, as climate can vary over timescales shorter than 10^5-10^7 years. In that case, variations between fluvial and glacial erosion may have overprinted the erosional effects of one another.

  20. Galaxy clustering dependence on the [O II] emission line luminosity in the local Universe

    NASA Astrophysics Data System (ADS)

    Favole, Ginevra; Rodríguez-Torres, Sergio A.; Comparat, Johan; Prada, Francisco; Guo, Hong; Klypin, Anatoly; Montero-Dorta, Antonio D.

    2017-11-01

    We study the dependence of galaxy clustering on the [O II] emission line luminosity in the SDSS DR7 Main galaxy sample at mean redshift z ∼ 0.1. We select volume-limited samples of galaxies with different [O II] luminosity thresholds and measure their projected, monopole and quadrupole two-point correlation functions. We model these observations using the 1 h-1 Gpc MultiDark-Planck cosmological simulation and generate light cones with the SUrvey GenerAtoR algorithm. To interpret our results, we adopt a modified (Sub)Halo Abundance Matching scheme that accounts for the stellar mass incompleteness of the emission line galaxies. The satellite fraction constitutes an extra parameter in this model and allows us to optimize the clustering fit on both small and intermediate scales (i.e. rp ≲ 30 h-1 Mpc), with no need for any velocity bias correction. We find that, in the local Universe, the [O II] luminosity correlates with all the clustering statistics explored and with the galaxy bias. This latter quantity correlates more strongly with the SDSS r-band magnitude than with the [O II] luminosity. In conclusion, we propose a straightforward method to produce reliable clustering models, entirely built on the simulation products, which provides robust predictions of the typical ELG host halo masses and satellite fraction values. The SDSS galaxy data, MultiDark mock catalogues and clustering results are made publicly available.
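    The projected correlation function used in studies like this one is obtained by integrating the anisotropic two-point correlation function along the line of sight; a minimal numerical sketch of that step, with a toy power-law correlation function and an assumed pi_max of 80 h^-1 Mpc standing in for a measured grid, is shown below.

      import numpy as np

      def projected_correlation(xi_rp_pi, pi_edges):
          """w_p(r_p) = 2 * integral_0^pi_max of xi(r_p, pi) d(pi), as a sum over line-of-sight bins."""
          d_pi = np.diff(pi_edges)                       # bin widths along the line of sight
          return 2.0 * np.sum(xi_rp_pi * d_pi[None, :], axis=1)

      # Toy example: a power-law xi(r) = (r / 5)^(-1.8) tabulated on an (r_p, pi) grid.
      rp = np.logspace(-1, 1.5, 25)                      # h^-1 Mpc
      pi_edges = np.linspace(0.0, 80.0, 81)              # pi_max = 80 h^-1 Mpc (assumed)
      pi_mid = 0.5 * (pi_edges[1:] + pi_edges[:-1])
      r = np.sqrt(rp[:, None]**2 + pi_mid[None, :]**2)
      xi = (r / 5.0) ** -1.8
      wp = projected_correlation(xi, pi_edges)
      print("w_p at r_p = %.2f h^-1 Mpc: %.2f h^-1 Mpc" % (rp[10], wp[10]))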

  1. Jet Aeroacoustics: Noise Generation Mechanism and Prediction

    NASA Technical Reports Server (NTRS)

    Tam, Christopher

    1998-01-01

    This report covers the third year research effort of the project. The research work focussed on the fine scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. Starting with the Reynolds Averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model, so the theory is self-contained. Extensive comparisons between the computed noise spectrum of the theory and experimental measurements have been carried out. The parameters include jet Mach numbers from 0.3 to 2.0 and temperature ratios from 1.0 to 4.8. Excellent agreement is found in the spectrum shape, noise intensity and directivity. It is envisaged that the theory would supersede all semi-empirical and totally empirical jet noise prediction methods in current use.

  2. 76 FR 44309 - Applications for New Awards; Charter Schools Program Grants to Non-State Educational Agencies for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-25

    ... application meets Competitive Preference Priority 1, up to an additional two points to an application depending on how well the application meets Competitive Preference Priority 2, and up to an additional two...: Applicants approved for funding under this competition must attend an in-person, two-day meeting for project...

  3. Developing Autobiographical Accounts as a Starting Point in Research

    ERIC Educational Resources Information Center

    Sancho, Juana M.; Hernández-Hernández, Fernando

    2013-01-01

    Teaching and research are an academic's two main responsibilities. The performance of these two roles (teacher and researcher) can be clearly separated or noticeably interwoven in a continuous reflective process that shares and interchanges positionalities and references. Research projects, in the context of the quality research group…

  4. The Impact of Sustainability on Global Trade: A Cross-Curricular Project

    ERIC Educational Resources Information Center

    Weber, Curt M.; Roy, Sharon

    2010-01-01

    One of the challenges in higher education is leading students to apply information from one course to learning material in subsequent coursework. The authors have devised a joint project for courses in Logistics and Administrative Law to assist students in correlating material across courses in two business majors, with emphasis on…

  5. High performance multichannel photonic biochip sensors for future point of care diagnostics: an overview on two EU-sponsored projects

    NASA Astrophysics Data System (ADS)

    Giannone, Domenico; Kazmierczak, Andrzej; Dortu, Fabian; Vivien, Laurent; Sohlström, Hans

    2010-04-01

    We present here research work on two optical biosensors which have been developed within two separate European projects (6th and 7th EU Framework Programmes). The biosensors are based on the idea of a disposable biochip, integrating photonics and microfluidics, optically interrogated by a multichannel interrogation platform. The objective is to develop versatile tools suitable for performing screening tests at the Point of Care or, for example, at schools or in the field. The two projects explore different options in terms of optical design and different materials. While SABIO used Si3N4/SiO2 ring resonator structures, P3SENS aims at the use of photonic crystal devices based on polymers, potentially a much more economical option. We discuss both approaches to show how they enable high sensitivity and multiple channel detection. The medium-term objective is to develop a new detection system that is low cost and portable while at the same time offering high sensitivity, selectivity and multiparametric detection from a sample containing various components (e.g. blood, serum, saliva, etc.). Most biological sensing devices already on the market suffer from limitations in multichannel operation capability (either the detection of multiple analytes indicating a given pathology or the simultaneous detection of multiple pathologies). In other words, the number of different analytes that can be detected on a single chip is very limited. This limitation is a main issue addressed by the two projects. The excessive cost per test of conventional biosensing devices is a second issue that is addressed.

  6. Spectral determinants for twist field correlators

    NASA Astrophysics Data System (ADS)

    Belitsky, A. V.

    2018-04-01

    Twist fields were introduced a few decades ago as a quantum counterpart to classical kink configurations and disorder variables in low dimensional field theories. In recent years they received a new incarnation within the framework of geometric entropy and strong coupling limit of four-dimensional scattering amplitudes. In this paper, we study their two-point correlation functions in a free massless scalar theory, namely, twist-twist and twist-antitwist correlators. In spite of the simplicity of the model in question, the properties of the latter are far from being trivial. The problem is reduced, within the formalism of the path integral, to the study of spectral determinants on surfaces with conical points, which are then computed exactly making use of the zeta function regularization. We also provide an insight into twist correlators for a massive complex scalar by means of the Lifshitz-Krein trace formula.

  7. Estimation of the displacements among distant events based on parallel tracking of events in seismic traces under uncertainty

    NASA Astrophysics Data System (ADS)

    Huamán Bustamante, Samuel G.; Cavalcanti Pacheco, Marco A.; Lazo Lazo, Juan G.

    2018-07-01

    The method we propose in this paper seeks to estimate interface displacements among strata associated with seismic reflection events, relative to the interfaces at other reference points. To do so, we search for reflection events at the reference point of a second seismic trace taken from the same 3D survey and close to a well. However, the nature of the seismic data introduces uncertainty in the results. Therefore, we perform an uncertainty analysis using the standard deviations obtained from several experiments with cross-correlation of signals. To estimate the displacements of events in depth between two seismic traces, we create a synthetic seismic trace with an empirical wavelet and the sonic log of the well close to the second seismic trace. Then, we relate the events of the seismic traces to the depth of the sonic log. Finally, we test the method with data from the Namorado Field in Brazil. The results show that the accuracy of the estimated event depth depends on the results of the parallel cross-correlation, primarily those from the procedures used in the integration of seismic data with data from the well. The proposed approach can correctly identify several similar events in two seismic traces without requiring all seismic traces between two distant points of interest in order to correlate strata in the subsurface.
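    The basic building block, estimating the shift between two traces from the lag of their cross-correlation peak and gauging its spread over repeated noisy realizations, can be sketched as follows; the toy wavelet, shift and noise level are assumptions, not the paper's data.

      import numpy as np

      def best_lag(trace_a, trace_b):
          """Lag (in samples) that maximizes the full cross-correlation of two equal-length traces."""
          a = trace_a - trace_a.mean()
          b = trace_b - trace_b.mean()
          xc = np.correlate(a, b, mode="full")
          return np.argmax(xc) - (len(b) - 1)    # positive lag: trace_a is delayed w.r.t. trace_b

      # Toy wavelet displaced by 12 samples plus noise; repeat over noise realizations for a spread.
      rng = np.random.default_rng(4)
      t = np.arange(512)
      wavelet = np.exp(-0.5 * ((t - 200) / 8.0) ** 2) * np.cos(0.4 * (t - 200))
      lags = []
      for _ in range(200):
          a = np.roll(wavelet, 12) + 0.2 * rng.normal(size=t.size)
          b = wavelet + 0.2 * rng.normal(size=t.size)
          lags.append(best_lag(a, b))
      print("estimated shift: %.2f +/- %.2f samples" % (np.mean(lags), np.std(lags)))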

  8. Exact relations for energy transfer in self-gravitating isothermal turbulence

    NASA Astrophysics Data System (ADS)

    Banerjee, Supratik; Kritsuk, Alexei G.

    2017-11-01

    Self-gravitating isothermal supersonic turbulence is analyzed in the asymptotic limit of large Reynolds numbers. Based on the inviscid invariance of total energy, an exact relation is derived for homogeneous (not necessarily isotropic) turbulence. A modified definition for the two-point energy correlation functions is used to comply with the requirement of detailed energy equipartition in the acoustic limit. In contrast to the previous relations (S. Galtier and S. Banerjee, Phys. Rev. Lett. 107, 134501 (2011), 10.1103/PhysRevLett.107.134501; S. Banerjee and S. Galtier, Phys. Rev. E 87, 013019 (2013), 10.1103/PhysRevE.87.013019), the current exact relation shows that the pressure dilatation terms play practically no role in the energy cascade. Both the flux and source terms are written in terms of two-point differences. Sources enter the relation in a form of mixed second-order structure functions. Unlike the kinetic and thermodynamic potential energies, the gravitational contribution is absent from the flux term. An estimate shows that, for the isotropic case, the correlation between density and gravitational acceleration may play an important role in modifying the energy transfer in self-gravitating turbulence. The exact relation is also written in an alternative form in terms of two-point correlation functions, which is then used to describe scale-by-scale energy budget in spectral space.

  9. Nuclear Magnetic Resonance of Polymeric Materials: Proceedings of the Autumn Meeting of the British Radiofrequency Group Held at Dublin (Ireland).

    DTIC Science & Technology

    1983-01-01

    [Garbled OCR fragment. The recoverable text concerns a one-point function and a two-body correlation integral, the rigid-solid limit in which the contributions of the first two integrals of equation (5) cancel, and the derivation of an expression for T1D for a distribution of correlation times using previously determined activation parameters.]

  10. Quality assessment of expert answers to lay questions about cystic fibrosis from various language zones in Europe: the ECORN-CF project.

    PubMed

    d'Alquen, Daniela; De Boeck, Kris; Bradley, Judy; Vávrová, Věra; Dembski, Birgit; Wagner, Thomas O F; Pfalz, Annette; Hebestreit, Helge

    2012-02-06

    The European Centres of Reference Network for Cystic Fibrosis (ECORN-CF) established an Internet forum which provides the opportunity for CF patients and other interested people to ask experts questions about CF in their mother language. The objectives of this study were to: 1) develop a detailed quality assessment tool to analyze quality of expert answers, 2) evaluate the intra- and inter-rater agreement of this tool, and 3) explore changes in the quality of expert answers over the time frame of the project. The quality assessment tool was developed by an expert panel. Five experts within the ECORN-CF project used the quality assessment tool to analyze the quality of 108 expert answers published on ECORN-CF from six language zones. 25 expert answers were scored at two time points, one year apart. Quality of answers was also assessed at an early and later period of the project. Individual rater scores and group mean scores were analyzed for each expert answer. A scoring system and training manual were developed analyzing two quality categories of answers: content and formal quality. For content quality, the grades based on group mean scores for all raters showed substantial agreement between two time points, however this was not the case for the grades based on individual rater scores. For formal quality the grades based on group mean scores showed only slight agreement between two time points and there was also poor agreement between time points for the individual grades. The inter-rater agreement for content quality was fair (mean kappa value 0.232 ± 0.036, p < 0.001) while only slight agreement was observed for the grades of the formal quality (mean kappa value 0.105 ± 0.024, p < 0.001). The quality of expert answers was rated high (four language zones) or satisfactory (two language zones) and did not change over time. The quality assessment tool described in this study was feasible and reliable when content quality was assessed by a group of raters. Within ECORN-CF, the tool will help ensure that CF patients all over Europe have equal possibility of access to high quality expert advice on their illness. © 2012 d’Alquen et al; licensee BioMed Central Ltd.

  11. On the Decay of Correlations in Non-Analytic SO(n)-Symmetric Models

    NASA Astrophysics Data System (ADS)

    Naddaf, Ali

    We extend the method of complex translations which was originally employed by McBryan-Spencer [2] to obtain a decay rate for the two point function in two-dimensional SO(n)-symmetric models with non-analytic Hamiltonians for…

  12. The Revised Animal Preference Test: An Implicit Probe of Tendencies Toward Psychopathy.

    PubMed

    Penzel, Ian B; Bair, Jessica; Liu, Tianwei; Robinson, Michael D

    2018-05-01

    At least some forms of interpersonal violence could follow from a vision of the self as a fierce, dominant creature. This should be particularly true when psychopathic (more proactive, less reactive) tendencies are involved. Possible relations of this type were examined in two studies (total N = 278) in which college student samples were presented with a new, structured version of an old projective test typically used in psychotherapy contexts. Participants were presented with predator-prey animal pairs (e.g., lion-zebra) that were not explicitly labeled as such. For each pair, the person was asked to choose the animal that they would more prefer to be. Participants who desired to be predator animals more often, on this Revised Animal Preference Test (RAPT), tended toward psychopathy to a greater extent. In Study 1, such relations were manifest in terms of correlations with psychopathic traits and with an interpersonal style marked by hostile dominance. Further analyses, though, revealed that predator self-identifications were more strongly related to primary psychopathy than secondary psychopathy. Study 2 replicated the interpersonal style correlates of the RAPT. In addition, photographs were taken of the participants in the second study and these photographs were rated for apparent hostility and dominance. As hypothesized, participants who wanted to be predator animals to a greater extent also appeared more hostile and dominant in their nonverbal behaviors. These studies suggest that projective preferences can be assessed in a reliable manner through the use of standardizing procedures. Furthermore, the studies point to some of the motivational factors that may contribute to psychopathy and interpersonal violence.

  13. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
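    The conversion at the heart of this approach, from a point-biserial correlation to a biserial estimate of the underlying product-moment correlation, can be sketched as below (assuming SciPy is available); the classical conversion formula is shown, while the sampling-variance methods compared in the paper are not reproduced.

      import numpy as np
      from scipy.stats import norm

      def biserial_from_pointbiserial(r_pb, p):
          """Biserial correlation from the point-biserial r and the proportion p in group 1.

          Uses the classical conversion r_b = r_pb * sqrt(p * q) / phi(z_p), where z_p is the
          normal quantile that splits the latent continuous variable at proportion p."""
          q = 1.0 - p
          z = norm.ppf(p)
          return r_pb * np.sqrt(p * q) / norm.pdf(z)

      # Example: dichotomize one of two correlated normals and recover the underlying correlation.
      rng = np.random.default_rng(5)
      n, rho = 5000, 0.5
      x = rng.normal(size=n)
      y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
      d = (y > 0).astype(float)                          # dichotomized at the median
      r_pb = np.corrcoef(x, d)[0, 1]
      print("point-biserial: %.3f  biserial: %.3f  (true rho = %.1f)"
            % (r_pb, biserial_from_pointbiserial(r_pb, d.mean()), rho))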

  14. Determination of magnetic helicity in the solar wind and implications for cosmic ray propagation

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. M.; Goldstein, M. L.

    1981-01-01

    Magnetic helicity (Hm) is the mean value of the correlation between a turbulent magnetic field and the magnetic vector potential. A technique is described for determining Hm and its 'reduced' spectrum from the two-point magnetic correlation matrix. The application of the derived formalism to solar wind magnetic fluctuations is discussed, taking into account cases for which only single point measurements are available. The application procedure employs the usual 'frozen in approximation' approach. The considered method is applied to an analysis of several periods of Voyager 2 interplanetary magnetometer data near 2.8 AU. During these periods the correlation length, or energy containing length, was found to be approximately 3 x 10^11 cm.

  15. Estimation of correlation functions by stochastic approximation.

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Wintz, P. A.

    1972-01-01

    Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.

  16. Correlation structures from soft and semi-hard components in p-p collisions at √s =200 GeV

    DOE PAGES

    Porter, R. J.; Trainor, T. A.

    2005-02-01

    We present preliminary two-particle correlations for unidentified hadrons in p-p collisions at √s = 200 GeV. On the two-particle transverse rapidity space y_t ⊗ y_t, two distinct regions of correlated pairs are observed: a peaked structure at low y_t (p_t ≤ 0.4 GeV/c) and a broad structure at higher y_t, where the correlation is distributed as a 2D Gaussian centered at y_t1 = y_t2 ≃ 2.8 (p_t1, p_t2 ≃ 1.2 GeV/c). We select those regions separately, projecting correlations onto momentum-difference variables (η_Δ, φ_Δ), and observe structures interpretable in the context of string and parton fragmentation from soft and semi-hard components of p-p collisions.

  17. Developing Multidimensional Likert Scales Using Item Factor Analysis: The Case of Four-Point Items

    ERIC Educational Resources Information Center

    Asún, Rodrigo A.; Rdz-Navarro, Karina; Alvarado, Jesús M.

    2016-01-01

    This study compares the performance of two approaches in analysing four-point Likert rating scales with a factorial model: the classical factor analysis (FA) and the item factor analysis (IFA). For FA, maximum likelihood and weighted least squares estimations using Pearson correlation matrices among items are compared. For IFA, diagonally weighted…

  18. On the divergences of inflationary superhorizon perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, K; Nurmi, S; Podolsky, D

    2008-04-15

    We discuss the infrared divergences that appear to plague cosmological perturbation theory. We show that, within the stochastic framework, they are regulated by eternal inflation so that the theory predicts finite fluctuations. Using the ΔN formalism to one loop, we demonstrate that the infrared modes can be absorbed into additive constants and the coefficients of the diagrammatic expansion for the connected parts of two- and three-point functions of the curvature perturbation. As a result, the use of any infrared cutoff below the scale of eternal inflation is permitted, provided that the background fields are appropriately redefined. The natural choice for the infrared cutoff would, of course, be the present horizon; other choices manifest themselves in the running of the correlators. We also demonstrate that it is possible to define observables that are renormalization-group-invariant. As an example, we derive a non-perturbative, infrared finite and renormalization point-independent relation between the two-point correlators of the curvature perturbation for the case of the free single field.

  19. Correlation functions of warped CFT

    NASA Astrophysics Data System (ADS)

    Song, Wei; Xu, Jianfei

    2018-04-01

    Warped conformal field theory (WCFT) is a two-dimensional quantum field theory whose local symmetry algebra consists of a Virasoro algebra and a U(1) Kac-Moody algebra. In this paper, we study correlation functions of primary operators in WCFT. Similar to conformal symmetry, warped conformal symmetry is very constraining. The forms of the two- and three-point functions are determined by the global warped conformal symmetry, while the four-point functions can be determined up to an arbitrary function of the cross ratio. The warped conformal bootstrap equation is constructed by formulating the notion of crossing symmetry. In the large central charge limit, four-point functions can be decomposed into global warped conformal blocks, which can be solved exactly. Furthermore, we revisit the scattering problem in warped AdS spacetime (WAdS) and give a prescription on how to match the bulk result to a WCFT retarded Green's function. Our result is consistent with the conjectured holographic dualities between WCFT and WAdS.

  20. TH-A-18C-03: Noise Correlation in CBCT Projection Data and Its Application for Noise Reduction in Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZHANG, H; Huang, J; Ma, J

    2014-06-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
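    The measurement step described in the Methods, estimating nearest-neighbour noise correlation coefficients from a stack of repeated projections of one view, can be sketched as below; the array shapes and the smoothed white-noise toy data are assumptions, not the acquired phantom data.

      import numpy as np

      def neighbour_noise_correlation(repeats, shift):
          """Mean correlation coefficient between each detector bin and its neighbour `shift` bins away.

          `repeats` has shape (n_repeats, n_bins): repeated measurements of one projection view."""
          noise = repeats - repeats.mean(axis=0)          # remove the mean signal per bin
          a = noise[:, :-shift]
          b = noise[:, shift:]
          num = (a * b).mean(axis=0)
          den = a.std(axis=0) * b.std(axis=0)
          return float(np.mean(num / den))

      # Toy projections: white noise lightly smoothed along the detector, 500 repeats of 256 bins.
      rng = np.random.default_rng(6)
      white = rng.normal(size=(500, 256 + 2))
      repeats = 100.0 + white[:, :-2] + 0.5 * white[:, 1:-1] + 0.1 * white[:, 2:]
      for k in (1, 2):
          print("order-%d neighbour correlation: %.2f" % (k, neighbour_noise_correlation(repeats, k)))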

  1. Computer Simulation for Calculating the Second-Order Correlation Function of Classical and Quantum Light

    ERIC Educational Resources Information Center

    Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.

    2011-01-01

    We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
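    The classical part of such a project can be sketched as below: generate chaotic (bunched) light intensity and compute g2(tau), which should approach 2 at zero delay and 1 at large delays; the AR(1) field model and coherence time are assumptions, and the single-photon antibunching case would need a photon-stream model not shown here.

      import numpy as np

      def g2(intensity, max_lag):
          """Second-order correlation g2(tau) = <I(t) I(t+tau)> / <I>^2 for integer lags."""
          mean_sq = intensity.mean() ** 2
          return np.array([np.mean(intensity[:len(intensity) - k] * intensity[k:]) / mean_sq
                           for k in range(max_lag + 1)])

      # Chaotic (thermal) light: intensity = |E|^2 with E a complex Gaussian field of Lorentzian
      # spectrum, generated as an AR(1) process with coherence time tau_c samples.
      rng = np.random.default_rng(7)
      n, tau_c = 200_000, 20
      a = np.exp(-1.0 / tau_c)
      noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(1 - a**2)
      field = np.empty(n, dtype=complex)
      field[0] = noise[0]
      for t in range(1, n):
          field[t] = a * field[t - 1] + noise[t]
      intensity = np.abs(field) ** 2
      curve = g2(intensity, 100)
      print("g2(0) = %.2f (bunched ~ 2),  g2(100) = %.2f (uncorrelated ~ 1)" % (curve[0], curve[100]))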

  2. On the effects of surrogacy of energy dissipation in determining the intermittency exponent in fully developed turbulence

    NASA Astrophysics Data System (ADS)

    Cleve, J.; Greiner, M.; Sreenivasan, K. R.

    2003-03-01

    The two-point correlation function of the energy dissipation, obtained from a one-point time record of an atmospheric boundary layer, reveals a rigorous power law scaling with intermittency exponent μ ≈ 0.20 over almost the entire inertial range of scales. However, for the related integral moment, the power law scaling is restricted to the upper part of the inertial range only. This observation is explained in terms of the operational surrogacy of the construction of energy dissipation, which influences the behaviour of the correlation function for small separation distances.

  3. Radiation-damage-induced transitions in zircon: Percolation theory applied to hardness and elastic moduli as a function of density

    NASA Astrophysics Data System (ADS)

    Beirau, Tobias; Nix, William D.; Ewing, Rodney C.; Pöllmann, Herbert; Salje, Ekhard K. H.

    2018-05-01

    Two percolation transitions predicted in the literature for radiation-damaged zircon (ZrSiO4) were observed experimentally by measuring the indentation hardness as a function of density and its correlation with the elastic moduli. The percolation transitions occur near 30% and 70% amorphous fraction, where the hardness deviates from its linear correlation with the elastic modulus (E), the shear modulus (G) and the bulk modulus (K). The first percolation point pc1 generates a cusp in the hardness versus density evolution, while the second percolation point is seen as a change of slope.

  4. correlcalc: Two-point correlation function from redshift surveys

    NASA Astrophysics Data System (ADS)

    Rohin, Yeluripati

    2017-11-01

    correlcalc calculates the two-point correlation function (2pCF) of galaxies/quasars using redshift surveys. It can be used for any assumed geometry or cosmology model. Using BallTree algorithms to reduce the computational effort for large datasets, it is a parallelised code suitable for running on clusters as well as personal computers. It takes redshift (z), Right Ascension (RA) and Declination (DEC) data of galaxies and random catalogs as inputs in the form of ASCII or FITS files. If a random catalog is not provided, it generates one of the desired size based on the input redshift distribution and a mangle polygon file (in .ply format) describing the survey geometry. It also calculates different realisations of the (3D) anisotropic 2pCF. Optionally, it makes healpix maps of the survey, providing visualization.
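    The computation such a code automates boils down to pair counting plus an estimator such as Landy-Szalay; the sketch below shows that core step with scikit-learn's BallTree on Cartesian toy catalogues. It is not correlcalc's actual interface, and the catalogue sizes, bins and box are assumptions for the example.

      import numpy as np
      from sklearn.neighbors import BallTree

      def pair_counts(tree_a, points_b, r_edges):
          """Pair counts between a BallTree and a point set, differenced into radial bins."""
          cum = np.array([tree_a.query_radius(points_b, r, count_only=True).sum() for r in r_edges])
          return np.diff(cum)

      def landy_szalay(data, randoms, r_edges):
          """xi(r) = (DD - 2 DR + RR) / RR with pair counts normalized per available pair."""
          dtree, rtree = BallTree(data), BallTree(randoms)
          nd, nr = len(data), len(randoms)
          dd = pair_counts(dtree, data, r_edges) / (nd * (nd - 1))
          rr = pair_counts(rtree, randoms, r_edges) / (nr * (nr - 1))
          dr = pair_counts(rtree, data, r_edges) / (nd * nr)
          return (dd - 2.0 * dr + rr) / rr

      # Unclustered toy catalogues: xi(r) should scatter around zero.
      rng = np.random.default_rng(8)
      data = rng.random((2000, 3)) * 100.0
      randoms = rng.random((10000, 3)) * 100.0
      r_edges = np.linspace(1.0, 20.0, 11)
      print(np.round(landy_szalay(data, randoms, r_edges), 3))

    A survey code additionally converts (RA, DEC, z) to comoving coordinates for the chosen cosmology and handles the survey mask; the estimator itself is unchanged.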

  5. Responsiveness of two Persian-versions of shoulder outcome measures following physiotherapy intervention in patients with shoulder disorders.

    PubMed

    Negahban, Hossein; Behtash, Zeinab; Sohani, Soheil Mansour; Salehi, Reza

    2015-01-01

    To identify the ability of the Persian versions of the Shoulder Pain and Disability Index (SPADI) and the Disabilities of the Arm, Shoulder, and Hand (DASH) to detect changes in shoulder function following physiotherapy intervention (i.e. responsiveness) and to determine the change score that indicates a meaningful change in the functional ability of the patient (i.e. Minimally Clinically Important Difference (MCID)). A convenience sample of 200 Persian-speaking patients with shoulder disorders completed the SPADI and the DASH at baseline and then again 4 weeks after physiotherapy intervention. Furthermore, patients were asked to rate their global rating of shoulder function at follow-up. The responsiveness was evaluated using two methods: the receiver operating characteristic (ROC) method and correlation analysis. Two useful statistics extracted from the ROC method are the area under the curve (AUC) and the optimal cutoff point, referred to as the MCID. Both the SPADI and the DASH showed an AUC greater than 0.70 (AUC range = 0.77-0.82). The best cutoff points (or change scores) for the SPADI-total, SPADI-pain, SPADI-disability and the DASH were 14.88, 26.36, 23.86, and 25.41, respectively. Additionally, moderate to good correlations (Gamma = -0.51 to -0.58) were found between the changes in SPADI/DASH and changes in the global rating scale. The Persian SPADI and DASH have adequate responsiveness to clinical changes in patients with shoulder disorders. Moreover, the MCIDs obtained in this study will help clinicians and researchers to determine whether a Persian-speaking patient with a shoulder disorder has experienced a true change following a physiotherapy intervention. Implications for Rehabilitation: Responsiveness was evaluated using two methods, the receiver operating characteristic (ROC) method and correlation analysis. The Persian SPADI and DASH can be used as two responsive instruments in both clinical practice and research settings. The MCIDs of 14.88 and 25.41 points obtained for the SPADI-total and DASH indicate that a change score of at least 14.88 points on the SPADI-total and 25.41 points on the DASH is necessary to be certain that a true change has occurred following a physiotherapy intervention.

  6. Evaluation of correlations of flow boiling heat transfer of R22 in horizontal channels.

    PubMed

    Zhou, Zhanru; Fang, Xiande; Li, Dingkun

    2013-01-01

    The calculation of two-phase flow boiling heat transfer of R22 in channels is required in a variety of applications, such as chemical process cooling systems, refrigeration, and air conditioning. A number of correlations for flow boiling heat transfer in channels have been proposed. This work evaluates the existing correlations for flow boiling heat transfer coefficient with 1669 experimental data points of flow boiling heat transfer of R22 collected from 18 published papers. The top two correlations for R22 are those of Liu and Winterton (1991) and Fang (2013), with the mean absolute deviation of 32.7% and 32.8%, respectively. More studies should be carried out to develop better ones. Effects of channel dimension and vapor quality on heat transfer are analyzed, and the results provide valuable information for further research in the correlation of two-phase flow boiling heat transfer of R22 in channels.

  7. Evaluation of Correlations of Flow Boiling Heat Transfer of R22 in Horizontal Channels

    PubMed Central

    Fang, Xiande; Li, Dingkun

    2013-01-01

    The calculation of two-phase flow boiling heat transfer of R22 in channels is required in a variety of applications, such as chemical process cooling systems, refrigeration, and air conditioning. A number of correlations for flow boiling heat transfer in channels have been proposed. This work evaluates the existing correlations for flow boiling heat transfer coefficient with 1669 experimental data points of flow boiling heat transfer of R22 collected from 18 published papers. The top two correlations for R22 are those of Liu and Winterton (1991) and Fang (2013), with the mean absolute deviation of 32.7% and 32.8%, respectively. More studies should be carried out to develop better ones. Effects of channel dimension and vapor quality on heat transfer are analyzed, and the results provide valuable information for further research in the correlation of two-phase flow boiling heat transfer of R22 in channels. PMID:23956695

  8. Teaching Molecular Symmetry of Dihedral Point Groups by Drawing Useful 2D Projections

    ERIC Educational Resources Information Center

    Chen, Lan; Sun, Hongwei; Lai, Chengming

    2015-01-01

    There are two main difficulties in studying molecular symmetry of dihedral point groups. One is locating the C[subscript 2] axes perpendicular to the C[subscript n] axis, while the other is finding the sigma[subscript d] planes which pass through the C[subscript n] axis and bisect the angles formed by adjacent C[subscript 2] axes. In this paper, a…

  9. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.

  10. Spin Hartree-Fock approach to studying quantum Heisenberg antiferromagnets in low dimensions

    NASA Astrophysics Data System (ADS)

    Werth, A.; Kopietz, P.; Tsyplyatyev, O.

    2018-05-01

    We construct a new mean-field theory for a quantum (spin-1/2) Heisenberg antiferromagnet in one (1D) and two (2D) dimensions using a Hartree-Fock decoupling of the four-point correlation functions. We show that the solution to the self-consistency equations based on two-point correlation functions does not produce any unphysical finite-temperature phase transition, in accord with the Mermin-Wagner theorem, unlike the common approach based on the mean-field equation for the order parameter. The next-neighbor spin-spin correlation functions, calculated within this approach, reproduce closely the strong renormalization by quantum fluctuations obtained via a Bethe ansatz in 1D and a small renormalization of the classical antiferromagnetic state in 2D. The heat capacity approximates with reasonable accuracy the full Bethe ansatz result at all temperatures in 1D. In 2D, we obtain a reduction of the peak height in the heat capacity at a finite temperature that is accessible by high-order 1 /T expansions.

  11. Correlation peak analysis applied to a sequence of images using two different filters for eye tracking model

    NASA Astrophysics Data System (ADS)

    Patrón, Verónica A.; Álvarez Borrego, Josué; Coronel Beltrán, Ángel

    2015-09-01

    Eye tracking has many useful applications that range from biometrics to face recognition and human-computer interaction. The analysis of the characteristics of the eyes has become one of the methods to accomplish the location of the eyes and the tracking of the point of gaze. Characteristics such as the contrast between the iris and the sclera, the shape, and the distribution of colors and dark/light zones in the area are the starting point for these analyses. In this work, the focus is on the contrast between the iris and the sclera, performing a correlation in the frequency domain. The images were acquired with an ordinary camera, which was used to photograph thirty-one volunteers. The reference image shows the subject looking at a point directly in front of them (0° angle). Sequences of images are then taken with the subject looking at different angles. These images are processed in MATLAB, obtaining the maximum correlation peak for each image using two different filters. Each filter was analyzed and the one giving the best performance in terms of data utility was selected; the results are displayed in graphs showing the decay of the correlation peak as the eye moves progressively through different angles. These data will be used to obtain a mathematical model or function that establishes a relationship between the angle of vision (AOV) and the maximum correlation peak (MCP). The model will be tested using input images from subjects not contained in the initial database, making it possible to predict the angle of vision from the maximum correlation peak data.
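
    The frequency-domain correlation step described above (the abstract's processing is done in MATLAB) can be illustrated with an equivalent NumPy sketch; the normalisation and the absence of any tuned filter are simplifying assumptions:

```python
import numpy as np

def max_correlation_peak(reference, test):
    """Circular cross-correlation of a test eye image against a reference,
    computed in the Fourier domain; the peak is normalised so that the
    reference correlated with itself gives exactly 1."""
    R = np.fft.fft2(reference - reference.mean())
    T = np.fft.fft2(test - test.mean())
    corr = np.fft.ifft2(T * np.conj(R)).real          # cross-correlation surface
    auto = np.fft.ifft2(R * np.conj(R)).real.max()    # zero-lag autocorrelation
    return corr.max() / auto

# Toy usage: an unrelated image yields a lower peak than the reference itself.
ref, other = np.random.rand(64, 64), np.random.rand(64, 64)
print(max_correlation_peak(ref, ref), max_correlation_peak(ref, other))
```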

  12. Correlation between physical examination and three-dimensional gait analysis in the assessment of rotational abnormalities in children with cerebral palsy.

    PubMed

    Teixeira, Fernando Borge; Ramalho Júnior, Amancio; Morais Filho, Mauro César de; Speciali, Danielli Souza; Kawamura, Catia Miyuki; Lopes, José Augusto Fernandes; Blumetti, Francesco Camara

    2018-01-01

    Objective To evaluate the correlation between physical examination data concerning hip rotation and tibial torsion with transverse plane kinematics in children with cerebral palsy; and to determine which time points and events of the gait cycle present a higher correlation with physical examination findings. Methods A total of 195 children with cerebral palsy seen at two gait laboratories between 2008 and 2016 were included in this study. Physical examination measurements included internal hip rotation, external hip rotation, mid-point hip rotation and the transmalleolar axis angle. Six kinematic parameters were selected for each segment to assess hip rotation and shank-based foot rotation. Correlations between physical examination and kinematic measures were analyzed by Spearman correlation coefficients, with a significance level of 5%. Results Comparing physical examination measurements of hip rotation and hip kinematics, we found moderate to strong correlations for all variables (p<0.001). The highest coefficients were seen between the mid-point hip rotation on physical examination and hip rotation kinematics (rho range: 0.48-0.61). Moderate correlations were also found between the transmalleolar axis angle measurement on physical examination and foot rotation kinematics (rho range 0.44-0.56; p<0.001). Conclusion These findings may have clinical implications in the assessment and management of transverse plane gait deviations in children with cerebral palsy.

  13. Extremal Correlators in the Ads/cft Correspondence

    NASA Astrophysics Data System (ADS)

    D'Hoker, Eric; Freedman, Daniel Z.; Mathur, Samir D.; Matusis, Alec; Rastelli, Leonardo

    The non-renormalization of the 3-point functions

  14. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos' content and approach to visualization.

    PubMed

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student and junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed the videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 key points (interquartile range: 8-14) were covered; no videos covered all 18 points assessed. We found the greatest difficulties in the visualization of how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and give particular emphasis to fundus examination.

  15. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence point process is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that the contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting the camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
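
    The linear-triangulation step mentioned above can be illustrated with a generic direct linear transform (DLT) sketch in NumPy; this is not the authors' code, and the projection matrices P1, P2 and the matched image points are assumed inputs:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : (3, 4) camera projection matrices.
    x1, x2 : matched image points [u, v] in the two views.
    Solves A X = 0 in the least-squares sense via SVD and returns the
    inhomogeneous 3D point.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```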

  16. Longitudinal development of cortical thickness, folding, and fiber density networks in the first 2 years of life.

    PubMed

    Nie, Jingxin; Li, Gang; Wang, Li; Shi, Feng; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2014-08-01

    Quantitatively characterizing the development of cortical anatomical networks during the early stage of life plays an important role in revealing the relationship between cortical structural connection and high-level functional development. The development of correlation networks of cortical thickness, cortical folding, and fiber density is systematically analyzed in this article to study the relationship between different anatomical properties during the first 2 years of life. Specifically, longitudinal MR images of 73 healthy subjects from birth to 2 years old are used. For each subject at each time point, its measures of cortical thickness, cortical folding, and fiber density are projected onto its cortical surface, which has been partitioned into 78 cortical regions. Then, the correlation matrices for cortical thickness, cortical folding, and fiber density at each time point are constructed, respectively, by computing the inter-regional Pearson correlation coefficient (for any pair of ROIs) across all 73 subjects. Finally, the presence/absence pattern (i.e., binary pattern) of the connection network is constructed from each inter-regional correlation matrix, and its statistical and anatomical properties are adopted to analyze the longitudinal development of anatomical networks. The results show that the development of anatomical networks can be characterized differently by using different anatomical properties (i.e., cortical thickness, cortical folding, or fiber density). Copyright © 2013 Wiley Periodicals, Inc.
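
    The network construction described above (inter-regional Pearson correlations computed across subjects, then thresholded into a binary connection pattern) reduces to a few lines of NumPy; the threshold and the random data below are placeholders, not values from the study:

```python
import numpy as np

# Hypothetical data: 73 subjects x 78 cortical regions, e.g. mean cortical
# thickness per region at one time point.
n_subjects, n_regions = 73, 78
thickness = np.random.rand(n_subjects, n_regions)

# Inter-regional Pearson correlation across subjects (78 x 78 matrix).
corr = np.corrcoef(thickness, rowvar=False)

# Presence/absence (binary) network: keep connections whose absolute
# correlation exceeds an illustrative threshold; self-connections are removed.
threshold = 0.3
adjacency = (np.abs(corr) > threshold) & ~np.eye(n_regions, dtype=bool)
print(adjacency.sum() // 2, "edges in the binary anatomical network")
```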

  17. Matrix product density operators: Renormalization fixed points and boundary theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cirac, J.I.; Pérez-García, D., E-mail: dperezga@ucm.es; ICMAT, Nicolas Cabrera, Campus de Cantoblanco, 28049 Madrid

    We consider the tensors generating matrix product states and density operators in a spin chain. For pure states, we revise the renormalization procedure introduced in (Verstraete et al., 2005) and characterize the tensors corresponding to the fixed points. We relate them to the states possessing zero correlation length, saturation of the area law, as well as to those which generate ground states of local and commuting Hamiltonians. For mixed states, we introduce the concept of renormalization fixed points and characterize the corresponding tensors. We also relate them to concepts like finite correlation length, saturation of the area law, as well as to those which generate Gibbs states of local and commuting Hamiltonians. One of the main results of this work is that the resulting fixed points can be associated to the boundary theories of two-dimensional topological states, through the bulk-boundary correspondence introduced in (Cirac et al., 2011).

  18. Relationship between team assists and win-loss record in The National Basketball Association.

    PubMed

    Melnick, M J

    2001-04-01

    Using research methodology for analysis of secondary data, statistical data for five National Basketball Association (NBA) seasons (1993-1994 to 1997-1998) were examined to test for a relationship between team assists (a behavioral measure of teamwork) and win-loss record. Rank-difference correlation indicated a significant relationship between the two variables, the coefficients ranging from .42 to .71. Team assist totals produced higher correlations with win-loss record than assist totals for the five players receiving the most playing time ("the starters"). A comparison of "assisted team points" and "unassisted team points" in relationship to win-loss record favored the former and strongly suggested that how a basketball team scores points is more important than the number of points it scores. These findings provide circumstantial support for the popular dictum in competitive team sports that "Teamwork Means Success-Work Together, Win Together."
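
    The rank-difference (Spearman) correlation reported above can be reproduced with SciPy; the season totals in this sketch are invented for illustration only:

```python
from scipy.stats import spearmanr

# Hypothetical season totals for six teams (not actual NBA data).
team_assists = [2178, 1902, 2320, 1988, 2105, 1850]
team_wins = [55, 38, 62, 41, 50, 33]

rho, p_value = spearmanr(team_assists, team_wins)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```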

  19. Statistical model of exotic rotational correlations in emergent space-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan

    2017-06-06

    A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.

  20. Topogrid Derived 10 Meter Resolution Digital Elevation Model of the Shenandoah National Park and Surrounding Region, Virginia

    USGS Publications Warehouse

    Chirico, Peter G.; Tanner, Seth D.

    2004-01-01

    The purpose of developing a new 10 m resolution DEM of the Shenandoah National Park region was to more accurately depict the geologic structure, surficial geology, and landforms of the region in preparation for automated landform classification. Previously, only a 30 m resolution DEM was available through the National Elevation Dataset (NED). During production of the Shenandoah 10 m DEM of the Park, the Geography Discipline of the USGS completed a revised 10 m DEM to be included in the NED. However, different methodologies were used to produce the two similar DEMs. The ANUDEM algorithm was used to develop the Shenandoah DEM data. This algorithm allows for the inclusion of contours, streams, rivers, lake and water body polygons, as well as spot height data, to control the elevation model. A statistical analysis using over 800 National Geodetic Survey (NGS) first and second order vertical control points reveals that the Shenandoah 10 m DEM, produced as a part of the Appalachian Blue Ridge Landscape project, has a vertical accuracy of ±4.87 meters. The metadata for the 10 m NED data reports a vertical accuracy of ±7 m. A table listing the NGS control points, the elevation comparison, and the RMSE for the Shenandoah 10 m DEM is provided. The process of automated terrain classification involves developing statistical signatures from the DEM for each type of surficial deposit and landform type. The signature will be a measure of several characteristics derived from the elevation data, including slope, aspect, planform curvature, and profile curvature. The quality of the DEM is of critical importance when extracting terrain signatures. The highest possible horizontal and vertical accuracy is required. The more accurate Shenandoah 10 m DEM can now be analyzed and integrated with the geologic observations to yield statistical correlations between the two in the development of landform and surficial geology mapping projects.
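
    The vertical-accuracy check described above amounts to an RMSE between DEM elevations sampled at the NGS control points and the surveyed control elevations; a minimal sketch with placeholder values:

```python
import numpy as np

# Placeholder elevations (metres); in practice these would be the DEM values
# sampled at the NGS first- and second-order vertical control points.
dem_elevations = np.array([512.3, 498.1, 605.7, 553.2])
control_elevations = np.array([510.9, 499.0, 607.1, 551.8])

rmse = np.sqrt(np.mean((dem_elevations - control_elevations) ** 2))
print(f"Vertical RMSE: {rmse:.2f} m")
```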

  1. CALIBRATION OF SEISMIC ATTRIBUTES FOR RESERVOIR CHARACTERIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne D. Pennington; Horacio Acevedo; Aaron Green

    2002-10-01

    The project, ''Calibration of Seismic Attributes for Reservoir Characterization,'' is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields with less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, including several that are in final stages of preparation or printing; one of these is a chapter on ''Reservoir Geophysics'' for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that apparent correlations between attributes derived along ''phantom'' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning, can reliable interpretations of channel horizons and facies be made. From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Wamsutter data set led to the use of unconventional attributes including lateral incoherence and horizon-dependent impedance variations to indicate regions of former sand bars and current high pressure, respectively, and to evaluation of various upscaling routines. The Teal South data set has provided a surprising set of results, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. Additional results were found using the public-domain Waha and Woresham-Bayer data set, and some tests of technologies were made using 2D seismic lines from Michigan and the western Pacific ocean.
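
    The waveform cross-correlation idea mentioned above (mapping facies by the similarity of traces extracted around a horizon) can be sketched with a simple zero-lag normalized cross-correlation; the reference waveforms and test trace below are synthetic placeholders:

```python
import numpy as np

def normalized_xcorr(a, b):
    """Zero-lag normalized cross-correlation between two seismic traces."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_trace(trace, reference_traces):
    """Assign a trace to the facies whose reference waveform it matches best."""
    scores = [normalized_xcorr(trace, ref) for ref in reference_traces]
    return int(np.argmax(scores)), max(scores)

# Toy usage: two synthetic reference waveforms and a noisy copy of the first.
t = np.linspace(0.0, 1.0, 101)
refs = [np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 9 * t)]
trace = refs[0] + 0.2 * np.random.randn(t.size)
print(classify_trace(trace, refs))
```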

  2. Calibration of Seismic Attributes for Reservoir Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne D. Pennington

    2002-09-29

    The project, "Calibration of Seismic Attributes for Reservoir Characterization," is now complete. Our original proposed scope of work included detailed analysis of seismic and other data from two to three hydrocarbon fields; we have analyzed data from four fields at this level of detail, two additional fields with less detail, and one other 2D seismic line used for experimentation. We also included time-lapse seismic data with ocean-bottom cable recordings in addition to the originally proposed static field data. A large number of publications and presentations have resulted from this work, inlcuding several that are in final stages of preparation ormore » printing; one of these is a chapter on "Reservoir Geophysics" for the new Petroleum Engineering Handbook from the Society of Petroleum Engineers. Major results from this project include a new approach to evaluating seismic attributes in time-lapse monitoring studies, evaluation of pitfalls in the use of point-based measurements and facies classifications, novel applications of inversion results, improved methods of tying seismic data to the wellbore, and a comparison of methods used to detect pressure compartments. Some of the data sets used are in the public domain, allowing other investigators to test our techniques or to improve upon them using the same data. From the public-domain Stratton data set we have demonstrated that an apparent correlation between attributes derived along 'phantom' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning, can reliable interpretations of channel horizons and facies be made. From the public-domain Boonsville data set we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and we developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Wamsutter data set led to the use of unconventional attributes including lateral incoherence and horizon-dependent impedance variations to indicate regions of former sand bars and current high pressure, respectively, and to evaluation of various upscaling routines. The Teal South data set has provided a surprising set of results, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. Additional results were found using the public-domain Waha and Woresham-Bayer data set, and some tests of technologies were made using 2D seismic lines from Michigan and the western Pacific ocean.« less

  3. Development of a Novel Two Dimensional Surface Plasmon Resonance Sensor Using Multiplied Beam Splitting Optics

    PubMed Central

    Hemmi, Akihide; Mizumura, Ryosuke; Kawanishi, Ryuta; Nakajima, Hizuru; Zeng, Hulie; Uchiyama, Katsumi; Kaneki, Noriaki; Imato, Toshihiko

    2013-01-01

    A novel two dimensional surface plasmon resonance (SPR) sensor system with a multi-point sensing region is described. The use of multiplied beam splitting optics, as a core technology, permitted multi-point sensing to be achieved. This system was capable of simultaneously measuring nine sensing points. Calibration curves for sucrose obtained on nine sensing points were linear in the range of 0–10% with a correlation factor of 0.996–0.998 with a relative standard deviation of 0.090–4.0%. The detection limits defined as S/N = 3 were 1.98 × 10⁻⁶ to 3.91 × 10⁻⁵ RIU. This sensitivity is comparable to that of conventional SPR sensors. PMID:23299626

  4. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target.

    PubMed

    Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian

    2017-08-01

    Vision measurement on the basis of structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world coordinate system, the camera coordinate system and the image coordinate system. The laser projective point and five non-collinear points randomly selected from the target are adopted to construct a projection invariant. The closed-form solutions for the 3D laser points are obtained from the homogeneous linear equations generated from the projection invariants. The optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions for the world coordinates of the projection points, the camera parameters and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of image quantity, lens distortion and noise are investigated in the experiments, which demonstrate that the reconstruction approach enables accurate measurement in the system.

  5. Project description and crowdfunding success: an exploratory study.

    PubMed

    Zhou, Mi Jamie; Lu, Baozhou; Fan, Weiguo Patrick; Wang, G Alan

    2018-01-01

    Existing research on antecedents of funding success mainly focuses on basic project properties such as funding goal, duration, and project category. In this study, we view the process by which project owners raise funds from backers as a persuasion process carried out through project descriptions. Guided by the unimodel theory of persuasion, this study identifies three exemplary antecedents (length, readability, and tone) from the content of project descriptions and two antecedents (past experience and past expertise) from the trustworthiness cues of project descriptions. We then investigate their impacts on funding success. Using data collected from Kickstarter, a popular crowdfunding platform, we find that these antecedents are significantly associated with funding success. Empirical results show that the proposed model incorporating these antecedents can achieve an accuracy of 73% (70% in F-measure). This represents an improvement of roughly 14 percentage points over the baseline model based on informed guessing and a 4 percentage point improvement over the mainstream model based on basic project properties (or a 44% improvement of the mainstream model's performance over informed guessing). The proposed model also has superior true positive and true negative rates. We also investigate the timeliness of project data and find that older project data is gradually becoming less relevant and is losing predictive power for newly created projects. Overall, this study provides evidence that antecedents identified from project descriptions have incremental predictive power and can help project owners evaluate and improve the likelihood of funding success.

  6. 75 FR 53347 - Cooperative Agreements Under the Disability Employment Initiative; Solicitation for Grant...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... amendment is to correct language related to the number of points for Project Management to be consistent.... Delete the following text: 4. Project Management (10 points) Add the following text: Project Management... follows: 4. This amendment is to correct language related to the number of points for Project Management...

  7. Water conservation study. Badger Army Ammunition Plant, Baraboo, Wisconsin. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-05-01

    The purpose of this water conservation study is to identify projects which result in energy maintenance and cost savings in the process water distribution system at Badger Army Ammunition Plant (BAAP) in Baraboo, Wisconsin. A leak detection survey was performed on all process water piping with a diameter of 6 inches or greater. The leak detection analysis was performed using a combination of listening devices and preamplified-transducer systems to identify the majority of leak locations. When the location of the leak could not be readily identified using these methods, a leak correlator was used. The leak correlator determines leak location based on the time it takes for sound to travel from the leak to a waterline connection point.
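
    The principle behind the leak correlator described above can be written down directly: with two listening points A and B a known distance apart on the same pipe, the difference in arrival time of the leak noise, together with the sound speed in the pipe, fixes the leak position. The numbers in this sketch are assumed, not taken from the BAAP survey:

```python
# Leak localization from the arrival-time difference between two sensors.
d = 120.0   # metres of pipe between sensors A and B (assumed)
v = 1200.0  # sound speed in the water-filled pipe, m/s (assumed)
dt = 0.04   # seconds; positive when the noise reaches A before B

# Distance from sensor A follows from L_A + L_B = d and L_B - L_A = v * dt.
dist_from_A = (d - v * dt) / 2.0
print(f"Estimated leak position: {dist_from_A:.1f} m from sensor A")
```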

  8. Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods

    NASA Astrophysics Data System (ADS)

    Pervez, M.; Henebry, G. M.

    2010-12-01

    In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during a seven-year period (2000-2008). Two univariate methods (inverse distance weighting and spline, in regularized and tension forms) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh, spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively dense point measurements and a weak correlation between rainfall and the covariates at daily scales in this region. Inverse distance weighting produced better results than the spline. For days with extreme or high rainfall, both spatially and in quantity, the correlation between observed and interpolated estimates was high (r2 ~ 0.6, RMSE ~ 10 mm), although for low-rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sampled point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties into subsequent hydrometeorological analyses. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
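
    The inverse-distance-weighting scheme named above can be sketched as follows; the power, search radius, and gauge data are assumed for illustration and are not the study's settings:

```python
import numpy as np

def idw_interpolate(xy_obs, values, xy_target, power=2.0, radius=50.0):
    """Inverse-distance-weighted estimate at one target location.

    Gauges within the search radius contribute with weight 1/d**power;
    if the target coincides with a gauge, that gauge's value is returned.
    """
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    mask = d < radius
    if not mask.any():
        return np.nan
    if np.any(d[mask] == 0.0):
        return float(values[mask][d[mask] == 0.0][0])
    w = 1.0 / d[mask] ** power
    return float(np.sum(w * values[mask]) / np.sum(w))

# Toy usage: three gauges (coordinates in km) with daily rainfall in mm.
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rain = np.array([12.0, 30.0, 8.0])
print(idw_interpolate(gauges, rain, np.array([3.0, 4.0])))
```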

  9. Scattered radiation doses absorbed by technicians at different distances from X-ray exposure: Experiments on prosthesis.

    PubMed

    Chiang, Hsien-Wen; Liu, Ya-Ling; Chen, Tou-Rong; Chen, Chun-Lon; Chiang, Hsien-Jen; Chao, Shin-Yu

    2015-01-01

    This work aimed to investigate the spatial distribution of scattered radiation doses induced by exposure from a portable X-ray unit and a C-arm machine, and to estimate, for a radiologist without lead shielding, the radiation doses absorbed by medical staff at 2 m from the central exposure point. With the adoption of the Rando phantom, several frequently X-rayed body parts were exposed to X-ray radiation, and the scattered radiation doses were measured by ionization chamber dosimeters at various angles from the patient. Assuming that the central point of the X-ray was located at the belly button, five detection points were distributed in the operation room at 1 m above the ground and 1-2 m from the central point horizontally. The radiation dose measured at point B was the lowest, and the scattered radiation dose absorbed by the prosthesis from the X-ray's vertical projection was 0.07 ± 0.03 μGy, which was less than background radiation levels. The Fluke Biomedical model 660-5DE (400 cc) and 660-3DE (4 cc) ion chambers were used to detect the air dose at a distance of approximately two meters from the central point. For the AP projection, the radiation dose at point B was the lowest (0.07 ± 0.03 μGy) and the dose at point D was the highest (0.26 ± 0.08 μGy). Considering only the vertical projection, the dose at point B was the lowest (0.52 μGy) and the dose at point E was the highest (4 μGy). For the PA projection, the dose at point B was the lowest (0.36 μGy) and the dose at point E was the highest (2.77 μGy), amounting to 10-32% of the maximum doses. The maximum dose among the five directions was nine times the minimum dose. When the portable X-ray unit and the C-arm machine were used, the radiation doses at a distance of 2 m were attenuated to the background radiation level. A radiologist without a lead shield should stand at point B, at the patient's feet. Accordingly, teaching materials on radiation safety for radiological interns and clinical technicians were formulated.

  10. RPBS: Rotational Projected Binary Structure for point cloud representation

    NASA Astrophysics Data System (ADS)

    Fang, Bin; Zhou, Zhiwei; Ma, Tao; Hu, Fangyu; Quan, Siwen; Ma, Jie

    2018-03-01

    In this paper, we propose a novel three-dimensional local surface descriptor named RPBS for point cloud representation. First, the points cropped from around the query point within a predefined radius are regarded as a local surface patch. Pose normalization is then applied to the local surface to make the descriptor invariant to rotation transformations. To obtain more information about the cropped surface, a multi-view representation is formed by successively rotating it about the coordinate axis. Further, orthogonal projections onto the three coordinate planes are adopted to construct two-dimensional distribution matrices, and binarization is applied to each matrix: a grid cell is set to one if it is occupied and to zero otherwise. We calculate the binary maps from all the viewpoints and concatenate them together as the final descriptor. Comparative experiments evaluating the proposed descriptor are conducted on the standard Bologna dataset against several state-of-the-art 3D descriptors, and the results show that our descriptor achieves the best performance in the feature matching experiments.
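
    A much-simplified sketch of an RPBS-style descriptor is given below. It keeps the main steps named above (crop a neighbourhood, rotate it through several views, project onto the three coordinate planes, binarize the occupancy grids) but replaces full pose normalization with simple centring; the radius, number of views, and grid size are assumed values:

```python
import numpy as np

def rpbs_like_descriptor(points, query, radius=0.05, n_views=4, grid=16):
    """Binary multi-view projection descriptor for the neighbourhood of `query`.

    points : (N, 3) point cloud, query : (3,) query point.
    Returns a concatenated binary vector of occupancy grids from the xy, xz
    and yz projections of several rotated views of the local patch.
    """
    patch = points[np.linalg.norm(points - query, axis=1) < radius] - query
    descriptor = []
    for k in range(n_views):
        angle = 2.0 * np.pi * k / n_views
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        view = patch @ rot.T
        for axes in [(0, 1), (0, 2), (1, 2)]:      # xy, xz, yz projections
            proj = view[:, axes]
            hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=grid,
                                        range=[[-radius, radius]] * 2)
            descriptor.append((hist > 0).astype(np.uint8).ravel())
    return np.concatenate(descriptor)
```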

  11. Outcrop Analysis of the Cretaceous Mesaverde Group: Jicarilla Apache Reservation, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridgley, Jennie; Dunbar, Robin Wright

    2001-04-24

    Field work for this project was conducted during July and April 1998, at which time fourteen measured sections were described and correlated on or adjacent to Jicarilla Apache Reservation lands. A fifteenth section, described east of the main field area, is included in this report, although its distant location precluded use in the correlations and cross sections presented herein. Ground-based photo mosaics were shot for much of the exposed Mesaverde outcrop belt and were used to assist in correlation. Outcrop gamma-ray surveys were conducted at six of the fifteen measured sections using a GAD-6 scintillometer. The raw gamma-ray data are included in this report; however, analysis of those data is part of the ongoing Phase Two of this project.

  12. Effective Perron-Frobenius eigenvalue for a correlated random map

    NASA Astrophysics Data System (ADS)

    Pool, Roman R.; Cáceres, Manuel O.

    2010-09-01

    We investigate the evolution of random positive linear maps with various types of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described for long times by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model. We show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate on cross correlations. The problem was reduced to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbations in terms of a Green's function diagrammatic technique built with noncommutative cumulants for arbitrary n-point correlations.
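
    A minimal numerical sketch of the mean-value description summarized above: for a toy two-age-class model with noisy vital rates, the simulated long-time growth rate of x_{t+1} = A_t x_t is compared with the Perron-Frobenius (largest) eigenvalue of the mean matrix. All numbers are invented, and the noise here is uncorrelated between matrix elements, so no cross-correlation correction appears:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_A = np.array([[0.0, 1.2],   # toy Leslie-type projection matrix <A>
                   [0.7, 0.0]])

x = np.ones(2)
log_growth = 0.0
n_steps = 5000
for _ in range(n_steps):
    A = mean_A * (1.0 + 0.1 * rng.standard_normal(mean_A.shape))  # random map
    x = A @ x
    norm = np.linalg.norm(x)
    log_growth += np.log(norm)
    x /= norm                                   # renormalize to avoid overflow

print("simulated growth rate per step:", np.exp(log_growth / n_steps))
print("Perron-Frobenius eigenvalue of <A>:",
      np.max(np.abs(np.linalg.eigvals(mean_A))))
```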

  13. Calling, Vocational Development, and Well Being: A Longitudinal Study of Medical Students

    ERIC Educational Resources Information Center

    Duffy, Ryan D.; Manuel, R. Stephen; Borges, Nicole J.; Bott, Elizabeth M.

    2011-01-01

    The present study investigated the relation of calling to the vocational development and well-being of a sample of medical students. Students were surveyed at two time points: prior to beginning the first year of medical school and prior to beginning the third year of medical school. At each time point, calling moderately correlated with positive…

  14. Archaeomagnetic studies in central Mexico—dating of Mesoamerican lime-plasters

    NASA Astrophysics Data System (ADS)

    Hueda-Tanabe, Y.; Soler-Arechalde, A. M.; Urrutia-Fucugauchi, J.; Barba, L.; Manzanilla, L.; Rebolledo-Vieyra, M.; Goguitchaichvili, A.

    2004-11-01

    For the first time, results of an archaeomagnetic study of unburned lime-plasters from Teotihuacan and Tenochtitlan in central Mesoamerica are presented. Plasters, made of lime, lithic clasts and water, appear during the Formative Period and were used for a variety of purposes in floors, sculptures, ceramics and as supporting media for mural paintings in the Oaxaca and Maya areas. In Central Mexico, ground volcanic scoria rich in iron minerals is incorporated into the lime-plaster mixture. Samples were selected from two archaeological excavation projects, in the Teopancazco residential compound of Teotihuacan and the large multi-stage structure of the Templo Mayor in Tenochtitlan, where chronological information is available. The intensity of remanent magnetization (natural remanent magnetization (NRM)) and the low-field susceptibility are weak, reflecting a low relative content of magnetic minerals. NRM directions are well grouped, and alternating field demagnetization shows single- or two-component magnetizations. Rock-magnetic experiments point to fine-grained titanomagnetites with pseudo-single-domain behavior. Anisotropy of magnetic susceptibility (AMS) measurements documents a depositional fabric, with minimum AMS axes normal to the free surface. Characteristic mean site directions were correlated to the paleosecular variation curve for Mesoamerica. Data from the Templo Mayor reflect recent tilting of the structures. Teopancazco mean site declinations show good correspondence with the reference curve, in agreement with the radiocarbon dating. Dates for four stages of Teotihuacan occupancy based on the study of lime-plasters range from AD 350 to 550. A date for a possible Mazapa occupation around AD 850 or 950 is also suggested based on the archaeomagnetic correlation. The archaeomagnetic record of a plaster floor in Teopancazco differed from the other nearby sites, pointing to a thermoremanent magnetization; comparison with the reference curve suggests dates around AD 1375 or 1415. The burning of the stucco floor likely occurred during a late re-occupation of the site by the Aztecs. Our results suggest that archaeomagnetic dating can be applied to lime-plasters, which are materials widely employed in Mesoamerica.

  15. Predicting degree of benefit from adjuvant trastuzumab in NSABP trial B-31.

    PubMed

    Pogue-Geile, Katherine L; Kim, Chungyeul; Jeong, Jong-Hyeon; Tanaka, Noriko; Bandos, Hanna; Gavin, Patrick G; Fumagalli, Debora; Goldstein, Lynn C; Sneige, Nour; Burandt, Eike; Taniyama, Yusuke; Bohn, Olga L; Lee, Ahwon; Kim, Seung-Il; Reilly, Megan L; Remillard, Matthew Y; Blackmon, Nicole L; Kim, Seong-Rim; Horne, Zachary D; Rastogi, Priya; Fehrenbacher, Louis; Romond, Edward H; Swain, Sandra M; Mamounas, Eleftherios P; Wickerham, D Lawrence; Geyer, Charles E; Costantino, Joseph P; Wolmark, Norman; Paik, Soonmyung

    2013-12-04

    National Surgical Adjuvant Breast and Bowel Project (NSABP) trial B-31 suggested the efficacy of adjuvant trastuzumab, even in HER2-negative breast cancer. This finding prompted us to develop a predictive model for degree of benefit from trastuzumab using archived tumor blocks from B-31. Case subjects with tumor blocks were randomly divided into discovery (n = 588) and confirmation cohorts (n = 991). A predictive model was built from the discovery cohort through gene expression profiling of 462 genes with nCounter assay. A predefined cut point for the predictive model was tested in the confirmation cohort. Gene-by-treatment interaction was tested with Cox models, and correlations between variables were assessed with Spearman correlation. Principal component analysis was performed on the final set of selected genes. All statistical tests were two-sided. Eight predictive genes associated with HER2 (ERBB2, c17orf37, GRB7) or ER (ESR1, NAT1, GATA3, CA12, IGF1R) were selected for model building. Three-dimensional subset treatment effect pattern plot using two principal components of these genes was used to identify a subset with no benefit from trastuzumab, characterized by intermediate-level ERBB2 and high-level ESR1 mRNA expression. In the confirmation set, the predefined cut points for this model classified patients into three subsets with differential benefit from trastuzumab with hazard ratios of 1.58 (95% confidence interval [CI] = 0.67 to 3.69; P = .29; n = 100), 0.60 (95% CI = 0.41 to 0.89; P = .01; n = 449), and 0.28 (95% CI = 0.20 to 0.41; P < .001; n = 442; P(interaction) between the model and trastuzumab < .001). We developed a gene expression-based predictive model for degree of benefit from trastuzumab and demonstrated that HER2-negative tumors belong to the moderate benefit group, thus providing justification for testing trastuzumab in HER2-negative patients (NSABP B-47).

  16. Predicting Degree of Benefit From Adjuvant Trastuzumab in NSABP Trial B-31

    PubMed Central

    Pogue-Geile, Katherine L.; Kim, Chungyeul; Jeong, Jong-Hyeon; Tanaka, Noriko; Bandos, Hanna; Gavin, Patrick G.; Fumagalli, Debora; Goldstein, Lynn C.; Sneige, Nour; Burandt, Eike; Taniyama, Yusuke; Bohn, Olga L.; Lee, Ahwon; Kim, Seung-Il; Reilly, Megan L.; Remillard, Matthew Y.; Blackmon, Nicole L.; Kim, Seong-Rim; Horne, Zachary D.; Rastogi, Priya; Fehrenbacher, Louis; Romond, Edward H.; Swain, Sandra M.; Mamounas, Eleftherios P.; Wickerham, D. Lawrence; Geyer, Charles E.; Costantino, Joseph P.; Wolmark, Norman

    2013-01-01

    Background National Surgical Adjuvant Breast and Bowel Project (NSABP) trial B-31 suggested the efficacy of adjuvant trastuzumab, even in HER2-negative breast cancer. This finding prompted us to develop a predictive model for degree of benefit from trastuzumab using archived tumor blocks from B-31. Methods Case subjects with tumor blocks were randomly divided into discovery (n = 588) and confirmation cohorts (n = 991). A predictive model was built from the discovery cohort through gene expression profiling of 462 genes with nCounter assay. A predefined cut point for the predictive model was tested in the confirmation cohort. Gene-by-treatment interaction was tested with Cox models, and correlations between variables were assessed with Spearman correlation. Principal component analysis was performed on the final set of selected genes. All statistical tests were two-sided. Results Eight predictive genes associated with HER2 (ERBB2, c17orf37, GRB7) or ER (ESR1, NAT1, GATA3, CA12, IGF1R) were selected for model building. Three-dimensional subset treatment effect pattern plot using two principal components of these genes was used to identify a subset with no benefit from trastuzumab, characterized by intermediate-level ERBB2 and high-level ESR1 mRNA expression. In the confirmation set, the predefined cut points for this model classified patients into three subsets with differential benefit from trastuzumab with hazard ratios of 1.58 (95% confidence interval [CI] = 0.67 to 3.69; P = .29; n = 100), 0.60 (95% CI = 0.41 to 0.89; P = .01; n = 449), and 0.28 (95% CI = 0.20 to 0.41; P < .001; n = 442; P interaction between the model and trastuzumab < .001). Conclusions We developed a gene expression–based predictive model for degree of benefit from trastuzumab and demonstrated that HER2-negative tumors belong to the moderate benefit group, thus providing justification for testing trastuzumab in HER2-negative patients (NSABP B-47). PMID:24262440

  17. Exploring the Correlation Between Nontraditional Variables and Student Success: A Longitudinal Study.

    PubMed

    Strickland, Haley Perkins; Cheshire, Michelle Haney

    2017-06-01

    The purpose of this project was to determine whether a correlation exists between the traditional admission criterion of grade point average and the potential admission criteria of emotional intelligence (EI) scores or critical thinking (CT) scores in predicting upper division student outcomes. A quantitative, longitudinal design was selected to examine the identified variables to predict undergraduate student success. The recruiting sample was a convenience sample drawn from 112 junior-level undergraduate nursing students beginning the first of a five-semester nursing program. EI and HESI® CT scores did not significantly correlate with the main analysis variables. Although EI and CT scores were not significant in this study, it remains vital to incorporate EI and CT activities throughout the curriculum to develop students' ability to think like a nurse and, therefore, be successful in nursing practice. [J Nurs Educ. 2017;56(6):351-355.]. Copyright 2017, SLACK Incorporated.

  18. Stringy horizons and generalized FZZ duality in perturbation theory

    NASA Astrophysics Data System (ADS)

    Giribet, Gaston

    2017-02-01

    We study scattering amplitudes in two-dimensional string theory on a black hole background. We start with a simple derivation of the Fateev-Zamolodchikov-Zamolodchikov (FZZ) duality, which associates correlation functions of the sine-Liouville integrable model on the Riemann sphere to tree-level string amplitudes on the Euclidean two-dimensional black hole. This derivation of FZZ duality is based on perturbation theory, and it relies on a trick originally due to Fateev, which involves duality relations between different Selberg type integrals. This enables us to rewrite the correlation functions of sine-Liouville theory in terms of a special set of correlators in the gauged Wess-Zumino-Witten (WZW) theory, and use this to perform further consistency checks of the recently conjectured generalized FZZ (GFZZ) duality. In particular, we prove that n-point correlation functions in sine-Liouville theory involving n - 2 winding modes actually coincide with the correlation functions in the SL(2,R)/U(1) gauged WZW model that include n - 2 oscillator operators of the type described by Giveon, Itzhaki and Kutasov in reference [1]. This proves the GFZZ duality for the case of tree-level maximally winding-violating n-point amplitudes with arbitrary n. We also comment on the connection between GFZZ and other marginal deformations previously considered in the literature.

  19. Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hua; Ouyang, Luo; Wang, Jing, E-mail: jhma@smu.edu.cn, E-mail: jing.wang@utsouthwestern.edu

    2014-03-15

    Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information to a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, the authors systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam onboard CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are nonzero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. At the 2.0 mm resolution level in the axial-plane noise resolution tradeoff analysis, the noise level of the PWLS-Cor reconstruction is 6.3% lower than that of the PWLS-Dia reconstruction. Conclusions: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
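
    The penalized weighted least-squares (PWLS) idea with a correlated-noise covariance can be illustrated on a toy 1D problem; the nearest-neighbour correlation coefficients are the 0.20 and 0.06 reported above, while the system matrix, noise level, and penalty weight are invented for the sketch:

```python
import numpy as np

n_bins, n_pix = 40, 20
rng = np.random.default_rng(1)
A = rng.random((n_bins, n_pix))            # toy forward model
x_true = rng.random(n_pix)
sigma = 0.05 * np.ones(n_bins)             # per-bin noise standard deviation

# Covariance with first- and second-order neighbour correlations (0.20, 0.06).
Sigma = np.diag(sigma ** 2)
for i in range(n_bins):
    for j in range(n_bins):
        if abs(i - j) == 1:
            Sigma[i, j] = 0.20 * sigma[i] * sigma[j]
        elif abs(i - j) == 2:
            Sigma[i, j] = 0.06 * sigma[i] * sigma[j]

y = A @ x_true + rng.multivariate_normal(np.zeros(n_bins), Sigma)

# PWLS objective (y - Ax)^T Sigma^{-1} (y - Ax) + beta * ||D x||^2; both terms
# are quadratic, so the minimizer is available in closed form.
beta = 0.1
D = np.eye(n_pix) - np.eye(n_pix, k=1)     # simple roughness penalty
W = np.linalg.inv(Sigma)
x_hat = np.linalg.solve(A.T @ W @ A + beta * D.T @ D, A.T @ W @ y)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```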

  20. Small-scale response in an avian community to a large-scale thinning project in the southwestern United States

    Treesearch

    Karen E. Bagne; Deborah M. Finch

    2009-01-01

    Avian populations were monitored using point counts from 2002 to 2007, two years before and four years after a 2800 ha fuel reduction project. The study area was within a ponderosa pine forest near Santa Fe, New Mexico, USA. Adjacent unthinned areas were also monitored as a reference for population variation related to other factors. For individual bird species...

  1. Determining the anaerobic threshold in water aerobic exercises: a comparison between the heart rate deflection point and the ventilatory method.

    PubMed

    Alberton, C L; Kanitz, A C; Pinto, S S; Antunes, A H; Finatto, P; Cadore, E L; Kruel, L F M

    2013-08-01

    The aim of this study was to compare the cardiorespiratory variables corresponding to the anaerobic threshold (AT) between different water-based exercises using two methods of determining the AT, the heart rate deflection point and the ventilatory method, and to correlate the variables obtained with both methods. Twenty young women performed three exercise sessions in the water. Maximal tests were performed in the water-based exercises stationary running, frontal kick and cross country skiing. The protocol started at a rate of 80 cycles per minute (cycles.min-1) for 2 min with subsequent increments of 10 cycles.min-1 every minute until exhaustion, with measurements of heart rate, oxygen uptake and ventilation throughout the test. Afterwards, the two methods were used to determine the values of these variables corresponding to the AT for each of the exercises. Comparisons were made using two-way ANOVA for repeated measures with Bonferroni's post hoc test. To correlate the same variables determined by the two methods, the intra-class correlation coefficient (ICC) test was used. For all the variables, no significant differences were found between the methods of determining the AT or between the three exercises. Moreover, the ICC values for each variable determined by the two methods were high and significant. The heart rate deflection point can therefore be used as a simple and practical method of determining the AT when prescribing these exercises. In addition, these cardiorespiratory parameters may be determined by performing the test with only one of the evaluated exercises, since there were no differences in the evaluated variables.

  2. Lip-nasal aesthetics following Le Fort I osteotomy.

    PubMed

    Rosen, H M

    1988-02-01

    Forty-one patients undergoing Le Fort I osteotomy for superior and/or anterior repositioning of the maxilla were prospectively studied for changes in soft-tissue morphology of the nasomaxillary region. The nasal parameters studied were changes in interalar rim width and nasal tip projection. It was observed that alar rim width increases with anterior and/or superior repositioning of the maxilla, but increases in nasal tip projection occur only when there is an anterior vector of maxillary movement. These nasal changes could not be quantitatively correlated with the magnitude of maxillary movement. The lip changes studied were the horizontal displacement at the vermilion border and subnasale versus that of the incisal edge and point A, respectively, when the maxilla is sagittally advanced, and the vertical shortening of the lip versus that of the incisal edge when the maxilla is shortened. Using linear regression analysis, horizontal displacement of the upper lip at the vermilion border was 0.82 ± 0.13 mm for every 1 mm of maxillary advancement at the incisal edge (p less than 0.001) and 0.51 ± 0.13 at the subnasale for every 1 mm of maxillary advancement at point A (p less than 0.001). Eighty percent of patients undergoing maxillary intrusive procedures had lip shortening ranging from 20 to 50 percent of the vertical maxillary reduction. Surprisingly, no statistically significant correlation could be demonstrated for lip shortening versus the extent of vertical maxillary reduction. Previous literature in disagreement with these findings is discussed. Guidelines for treatment planning utilizing these data are suggested.

  3. [An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].

    PubMed

    Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu

    2016-04-01

    The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained; these are treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted through the fiducial points, and the fitted curve is the baseline drift curve. For the two simulated test cases, the correlation coefficients between the fitted curve from the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient from the presented algorithm reached 0.972.
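
    A rough Python rendering of the algorithm described above is given below; it approximates the fiducial points as local extrema of the ECG (sign changes of its first derivative), which is a simplification of the beat-wise selection in the paper, and all filter settings other than the stated 1.5 Hz cutoff are assumed:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

def remove_baseline_drift(ecg, fs=250.0):
    """Estimate and subtract ECG baseline drift with cubic spline interpolation.

    Fiducial points: local extrema of the ECG (sign changes of the derivative).
    Fiducial amplitudes: raw ECG minus a 1.5 Hz high-pass filtered ECG at those
    points. The spline through the fiducial points is the drift estimate.
    """
    deriv = np.gradient(ecg)
    fiducials = np.where(np.diff(np.sign(deriv)) != 0)[0]
    b, a = butter(2, 1.5 / (fs / 2.0), btype="highpass")
    filtered = filtfilt(b, a, ecg)
    amplitudes = ecg[fiducials] - filtered[fiducials]
    baseline = CubicSpline(fiducials, amplitudes)(np.arange(len(ecg)))
    return ecg - baseline
```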

  4. Maximal qubit violation of n-locality inequalities in a star-shaped quantum network

    NASA Astrophysics Data System (ADS)

    Andreoli, Francesco; Carvacho, Gonzalo; Santodonato, Luca; Chaves, Rafael; Sciarrino, Fabio

    2017-11-01

    Bell's theorem was a cornerstone for our understanding of quantum theory, and the establishment of Bell non-locality played a crucial role in the development of quantum information. Recently, its extension to complex networks has been attracting growing attention, but a deep characterization of quantum behavior is still missing for this novel context. In this work we analyze quantum correlations arising in the bilocality scenario, that is, a tripartite quantum network where the correlations between the parties are mediated by two independent sources of states. First, we prove that non-bilocal correlations witnessed through a Bell-state measurement in the central node of the network form a subset of those obtainable by means of a local projective measurement. This leads us to derive the maximal violation of the bilocality inequality that can be achieved by arbitrary two-qubit quantum states and arbitrary local projective measurements. We then analyze in detail the relation between the violation of the bilocality inequality and the CHSH inequality. Finally, we show how our method can be extended to the n-locality scenario consisting of n two-qubit quantum states distributed among n+1 nodes of a star-shaped network.

  5. Correlates of Externalizing Behavior Symptoms among Youth within Two Impoverished, Urban Communities

    ERIC Educational Resources Information Center

    Gopalan, Geetha; Cavaleri, Mary A.; Bannon, William M.; McKay, Mary M.

    2009-01-01

    This study examines whether risk factors associated with child externalizing behavior symptoms differ between two similar low-income, urban communities, using baseline parent data of 154 African American youth (ages 9-15) participating in the Collaborative HIV-Prevention and Adolescent Mental Health Project (CHAMP) family program. Separate…

  6. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
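
    For context on item (1), a common way to reduce sensitivity to the mean density is to combine data-data, data-random, and random-random pair counts. The sketch below computes such counts on toy catalogues and forms a Hamilton-style combination DD·RR/DR² − 1; the catalogue sizes and binning are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of pair-count estimation of the two-point correlation function
# using the Hamilton-style combination DD*RR/DR^2 - 1; toy catalogues only.
import numpy as np
from scipy.spatial.distance import pdist, cdist

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 100.0, size=(2000, 3))        # toy "galaxy" positions in a 100 Mpc/h box
rand = rng.uniform(0.0, 100.0, size=(4000, 3))        # random catalogue filling the same volume
bins = np.logspace(np.log10(2.0), np.log10(20.0), 9)  # pair-separation bins [Mpc/h]

DD, _ = np.histogram(pdist(data), bins=bins)
RR, _ = np.histogram(pdist(rand), bins=bins)
DR, _ = np.histogram(cdist(data, rand).ravel(), bins=bins)

# Normalise each count by the total number of pairs in its catalogue(s).
nd, nr = len(data), len(rand)
dd = DD / (nd * (nd - 1) / 2.0)
rr = RR / (nr * (nr - 1) / 2.0)
dr = DR / (nd * nr)

xi = dd * rr / dr**2 - 1.0   # Hamilton-style estimator; ~0 here since both sets are uniform
print(xi)
```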

  7. Correlation Function Analysis of Fiber Networks: Implications for Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.

    2011-01-01

    The heat transport in highly porous fiber structures is investigated. The fibers are assumed to be thin but long, so that the number of inter-fiber connections along each fiber is large. We show that the effective conductivity of such structures can be found from the correlation length of the two-point correlation function of the local conductivities. Estimation of the parameters determining the conductivity from 2D images of the structures is analyzed.

  8. Continuous quantum measurement with independent detector cross correlations.

    PubMed

    Jordan, Andrew N; Büttiker, Markus

    2005-11-25

    We investigate the advantages of using two independent, linear detectors for continuous quantum measurement. For single-shot measurement, the detection process may be quantum limited if the detectors are twins. For weak continuous measurement, cross correlations allow a violation of the Korotkov-Averin bound for the detector's signal-to-noise ratio. The joint weak measurement of noncommuting observables is also investigated, and we find the cross correlation changes sign as a function of frequency, reflecting a crossover from incoherent relaxation to coherent, out of phase oscillations. Our results are applied to a double quantum-dot charge qubit, simultaneously measured by two quantum point contacts.

  9. Direction-Sensitive Hand-Held Gamma-Ray Spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhopadhyay, S.

    2012-10-04

    A novel, light-weight, hand-held gamma-ray detector with directional sensitivity is being designed. The detector uses a set of multiple rings around two cylindrical surfaces, which provides precise location of two interaction points on two concentric cylindrical planes, wherefrom the source location can be traced back by back projection and/or Compton imaging technique. The detectors are 2.0 × 2.0 mm europium-doped strontium iodide (SrI2:Eu2+) crystals, whose light output has been measured to exceed 120,000 photons/MeV, making it one of the brightest scintillators in existence. The crystal’s energy resolution, less than 3% at 662 keV, is also excellent, and the response is highly linear over a wide range of gamma-ray energies. The emission of SrI2:Eu2+ is well matched to both photo-multiplier tubes and blue-enhanced silicon photodiodes. The solid-state photomultipliers used in this design (each 2.0 × 2.0 mm) are arrays of active pixel sensors (avalanche photodiodes driven beyond their breakdown voltage in reverse bias); each pixel acts as a binary photon detector, and their summed output is an analog representation of the total photon energy, while the individual pixel accurately defines the point of interaction. A simple back-projection algorithm involving cone-surface mapping is being modeled. The back projection for an event cone is a conical surface defining the possible location of the source. The cone axis is the straight line passing through the first and second interaction points.

  10. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
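
    As a rough illustration of combining a matching function with a coarse-to-fine search, the sketch below minimises the square difference function over integer shifts using a three-step-style search on a Gaussian spot; the spot model, subaperture size, and step schedule are illustrative assumptions rather than the paper's exact algorithms.

```python
# A rough sketch of correlation-based centroiding with a coarse-to-fine
# ("three-step"-style) integer search on a Gaussian spot model; illustrative only.
import numpy as np

def gaussian_spot(n, cx, cy, sigma=1.5):
    y, x = np.mgrid[0:n, 0:n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma**2))

def sdf(img, ref, dx, dy):
    """Square difference function between img and ref shifted by (dx, dy)."""
    shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
    return np.sum((img - shifted) ** 2)

def three_step_search(img, ref, start_step=4):
    """Coarse-to-fine search for the integer shift minimising the SDF."""
    best = (0, 0)
    step = start_step
    while step >= 1:
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates, key=lambda s: sdf(img, ref, *s))
        step //= 2
    return best

n = 16
ref = gaussian_spot(n, n / 2, n / 2)          # reference (on-axis) spot
img = gaussian_spot(n, n / 2 + 3, n / 2 - 2)  # displaced spot within a subaperture
print(three_step_search(img, ref))            # -> approximately (3, -2)
```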

  11. Measurement and correlation of jet fuel viscosities at low temperatures

    NASA Technical Reports Server (NTRS)

    Schruben, D. L.

    1985-01-01

    Apparatus and procedures were developed to measure jet fuel viscosity for eight current and future jet fuels at temperatures from ambient to near -60 C by shear viscometry. Viscosity data showed good reproducibility even at temperatures a few degrees below the measured freezing point. The viscosity-temperature relationship could be correlated by two linear segments when plotted as a standard log-log type representation (ASTM D 341). At high temperatures, the viscosity-temperature slope is low. At low temperatures, where wax precipitation is significant, the slope is higher. The breakpoint between temperature regions is the filter flow temperature, a fuel characteristic approximated by the freezing point. A generalization of the representation for the eight experimental fuels provided a predictive correlation for low-temperature viscosity, considered sufficiently accurate for many design or performance calculations.
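
    A hedged sketch of the described two-segment representation follows: viscosity-temperature data are transformed in the ASTM D 341 style (log10 of log10(nu + 0.7) against log10 of absolute temperature) and fitted with separate lines above and below an assumed breakpoint; the sample data and the breakpoint value are invented, not the measured fuel data.

```python
# Two-segment viscosity-temperature fit in the ASTM D 341 style; toy data only.
import numpy as np

def d341_transform(T_kelvin, nu_cst):
    return np.log10(T_kelvin), np.log10(np.log10(nu_cst + 0.7))

# Toy measurements: viscosity (cSt) at temperatures (degC) spanning the breakpoint.
T_c = np.array([20.0, 0.0, -20.0, -40.0, -45.0, -50.0, -55.0])
nu = np.array([1.5, 2.5, 5.0, 12.0, 20.0, 40.0, 90.0])
T_break = -40.0                                   # assumed filter flow temperature

x, y = d341_transform(T_c + 273.15, nu)
hi = T_c >= T_break                               # high-temperature segment
lo = T_c <= T_break                               # low-temperature (wax precipitation) segment

slope_hi, intercept_hi = np.polyfit(x[hi], y[hi], 1)
slope_lo, intercept_lo = np.polyfit(x[lo], y[lo], 1)
print(slope_hi, slope_lo)   # the low-temperature slope is steeper (more negative)
```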

  12. Multitime correlation functions in nonclassical stochastic processes

    NASA Astrophysics Data System (ADS)

    Krumm, F.; Sperling, J.; Vogel, W.

    2016-06-01

    A general method is introduced for verifying multitime quantum correlations through the characteristic function of the time-dependent P functional that generalizes the Glauber-Sudarshan P function. Quantum correlation criteria are derived which identify quantum effects for an arbitrary number of points in time. The Magnus expansion is used to visualize the impact of the required time ordering, which becomes crucial in situations when the interaction problem is explicitly time dependent. We show that the latter affects the multi-time-characteristic function and, therefore, the temporal evolution of the nonclassicality. As an example, we apply our technique to an optical parametric process with a frequency mismatch. The resulting two-time-characteristic function yields full insight into the two-time quantum correlation properties of such a system.

  13. Vernal Point and Anthropocene

    NASA Astrophysics Data System (ADS)

    Chavez-Campos, Teodosio; Chavez S, Nadia; Chavez-Sumarriva, Israel

    2014-05-01

    The time scale was based on the internationally recognized formal chronostratigraphical/geochronological subdivisions of time: the Phanerozoic Eonathem/Eon; the Cenozoic Erathem/Era; the Quaternary System/Period; the Pleistocene and Holocene Series/Epoch. The Quaternary was divided into: (1) the Pleistocene, which was characterized by cycles of glaciations (intervals between 40,000 and 100,000 years); (2) the Holocene, an interglacial period that began about 12,000 years ago. It was believed that the Milankovitch cycles (eccentricity, axial tilt and the precession of the equinoxes) were responsible for the glacial and interglacial periods. The magnetostratigraphic units have been widely used for global correlations valid for the Quaternary. The gravitational influence of the sun and moon on the equatorial bulges of the mantle of the rotating earth causes the precession of the earth. The retrograde motion of the vernal point through the zodiacal band takes 26,000 years. The vernal point passes through each constellation in an average of 2000 years, and this period of time was correlated to Bond events, North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. The vernal point retrogrades one precessional degree in approximately 72 years (Gleissberg cycle) and entered the Aquarius constellation approximately on March 20, 1940. On earth this entry was verified through: a) the stability of the magnetic equator in the south-central zone of Peru and the northern zone of Bolivia, and b) the greater intensity of the equatorial electrojet (EEJ) in Peru and Bolivia since 1940. With the completion of the Holocene and the beginning of the Anthropocene (a term widely popularized by Paul Crutzen), the date of March 20, 1940 was proposed as the beginning of the Anthropocene. The proposed date was correlated to the work presented at IUGG (Italy, 2007) under the title "Cusco base meridian for the study of geophysical data", in which Cusco was proposed as a prime meridian based on: (1) the new prime meridian (72° W = 0°) is parallel to the Andes, and its projection, the meridian (108° E = 180°), intersects the Tibetan plate (Asia); (2) on earth these two areas present the greatest crustal thickness, with an average depth of 70 kilometers. The aim was to synchronize earth-science phenomena (e.g. geology, geophysics, etc.). During the Holocene the vernal point retrograded for 12,000 years and entered the Aquarius constellation on March 20, 1940. That date was proposed as the beginning of the Anthropocene because on that date the vernal point passed from the Pisces constellation to the Aquarius constellation; in addition, around the proposed date the Second World War began, a global change on the earth. The base of the Anthropocene was thus defined by the passage of the vernal point from the Pisces constellation to the Aquarius constellation.

  14. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single-hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single-baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited-state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited-state contamination from the two-nucleon correlation function to as early a time as the single nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited-state contamination in the region where the single calm baryon displays no excited-state contamination.

  15. Non-destructive evaluation of containment walls in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Garnier, V.; Payan, C.; Lott, M.; Ranaivomanana, N.; Balayssac, J. P.; Verdier, J.; Larose, E.; Zhang, Y.; Saliba, J.; Boniface, A.; Sbartai, Z. M.; Piwakowski, B.; Ciccarone, C.; Hafid, H.; Henault, J. M.; Buffet, F. Ouvrier

    2017-02-01

    Two functions are regularly tested on containment walls in order to anticipate a possible accident. The first is mechanical, to resist a possible internal over-pressure, and the second is to prevent leakage. The AAPR reference accident is the rupture of a pipe in the primary circuit of a nuclear plant. In this case, the pressure and temperature can reach 5 bar and 180°C in 20 seconds. The national project `Non-destructive testing of the containment structures of nuclear plants' aims at studying the non-destructive techniques capable of evaluating the concrete properties, damage, and cracking. This four-year project is divided into two parts. The first consists of developing and selecting the most relevant NDE techniques in the laboratory to reach these goals. These evaluations are developed under conditions representative of the stresses generated during the ten-yearly inspections of the plants or those related to an accident. The second part consists of applying the selected techniques to two containment structures under pressure. The first structure is proposed by ONERA and the second is a mockup of a containment wall on a 1/3 scale made by EDF within the VeRCoRs project. This communication focuses on the part of the project that concerns the characterization of the damage and cracking processes by means of NDT. The tests are done in three- or four-point bending in order to study crack generation and propagation, as well as crack opening and closing. The main ultrasonic techniques developed concern linear or non-linear acoustics: acoustic emission [1], Locadiff [2], energy diffusion, surface-wave velocity and attenuation, and DAET [3]. The recorded data contribute to providing maps of the investigated parameters, either in volume, on the surface, or globally. Digital image correlation is an important additional asset to validate the coherence of the data. The spatial normalization of the data in the specimen space allows algorithms for combining the experimental data to be proposed. The test results are presented, and they show the capabilities and the limits of the evaluation of volume, surface, or global data. A data fusion procedure is associated with these results.

  16. How Accurately Can We Measure Galaxy Environment at High Redshift Using Only Photometric Redshifts?

    NASA Astrophysics Data System (ADS)

    Florez, Jonathan; Jogee, Shardha; Sherman, Sydney; Papovich, Casey J.; Finkelstein, Steven L.; Stevans, Matthew L.; Kawinwanichakij, Lalitwadee; Ciardullo, Robin; Gronwall, Caryl; SHELA/HETDEX

    2017-06-01

    We use a powerful synergy of six deep photometric surveys (Herschel SPIRE, Spitzer IRAC, NEWFIRM K-band, DECam ugriz, and XMM X-ray) and a future optical spectroscopic survey (HETDEX) in the Stripe 82 field to study galaxy evolution during the 1.9 < z < 3.5 epoch when cosmic star formation and black hole activity peaked, and protoclusters began to collapse. With an area of 24 sq. degrees, a sample size of ~ 0.8 million galaxies complete in stellar mass above M* ~ 10^10 solar masses, and a comoving volume of ~ 0.45 Gpc^3, our study will allow us to make significant advancements in understanding the connection between galaxies and their respective dark matter components. In this poster, we characterize how robustly we can measure environment using only our photometric redshifts. We compare both local and large-scale measures of environment (e.g., projected two-point correlation function, projected nearest neighbor densities, and galaxy counts within some projected aperture) at different photometric redshifts to cosmological simulations in order to quantify the uncertainty in our estimates of environment. We also explore how robustly one can recover the variation of galaxy properties with environment, when using only photometric redshifts. In the era of large photometric surveys, this work has broad implications for studies addressing the impact of environment on galaxy evolution at early cosmic epochs. We acknowledge support from NSF grants AST-1614798, AST-1413652 and NSF GRFP grant DGE-1610403.
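
    Two of the local environment measures mentioned (projected Nth-nearest-neighbour density and counts within a projected aperture) can be sketched as follows; the toy catalogue, aperture radius, and N are placeholder assumptions, not the SHELA/HETDEX data.

```python
# Minimal sketch of two environment measures on projected positions in a thin slice.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(5000, 2))   # toy projected positions [Mpc]

tree = cKDTree(pos)

# (a) Projected Nth-nearest-neighbour surface density: Sigma_N = N / (pi * d_N^2).
N = 5
d, _ = tree.query(pos, k=N + 1)                # k=1 is the point itself
sigma_N = N / (np.pi * d[:, -1] ** 2)

# (b) Counts within a fixed projected aperture, excluding the galaxy itself.
R_ap = 1.0                                      # aperture radius [Mpc]
counts = np.array([len(tree.query_ball_point(p, R_ap)) - 1 for p in pos])
```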

  17. On the Space-Time Structure of Sheared Turbulence

    NASA Astrophysics Data System (ADS)

    de Maré, Martin; Mann, Jakob

    2016-09-01

    We develop a model that predicts all two-point correlations in high Reynolds number turbulent flow, in both space and time. This is accomplished by combining the design philosophies behind two existing models, the Mann spectral velocity tensor, in which isotropic turbulence is distorted according to rapid distortion theory, and Kristensen's longitudinal coherence model, in which eddies are simultaneously advected by larger eddies as well as decaying. The model is compared with data from both observations and large-eddy simulations and is found to predict spatial correlations comparable to the Mann spectral tensor and temporal coherence better than any known model. Within the developed framework, Lagrangian two-point correlations in space and time are also predicted, and the predictions are compared with measurements of isotropic turbulence. The required input to the models, which are formulated as spectral velocity tensors, can be estimated from measured spectra or be derived from the rate of dissipation of turbulent kinetic energy, the friction velocity and the mean shear of the flow. The developed models can, for example, be used in wind-turbine engineering, in applications such as lidar-assisted feed forward control and wind-turbine wake modelling.

  18. Archeological Data Recovery by Controlled Surface Collection in the Portion of 23SO496 to be Adversely Affected by the Castor River Enlargement Project, Stoddard County, Missouri

    DTIC Science & Technology

    1990-04-01

    Breckenridge and Tom’s Brook shelters). During this long period a large number of different projectile point types were produced (i.e., Rice Lobed...Big Sandy, Graham Cave, Kirk Comer Notched, White River Archaic, Hidden Valley Stemmed, Hardin Barbed, Searcy, Rice Lanceolate, Jakie Stemmed, and...point did not exhibit basal grinding); one was a Middle Archaic point similar to the Rice Lobed; two were Late Archaic Rice Sidenotched; five were

  19. Double-time correlation functions of two quantum operations in open systems

    NASA Astrophysics Data System (ADS)

    Ban, Masashi

    2017-10-01

    A double-time correlation function of two arbitrary quantum operations is studied for a nonstationary open quantum system which is in contact with a thermal reservoir. It includes a usual correlation function, a linear response function, and a weak value of an observable. Time evolution of the correlation function can be derived by means of the time-convolution and time-convolutionless projection operator techniques. For this purpose, a quasidensity operator accompanied by a fictitious field is introduced, which makes it possible to derive explicit formulas for calculating a double-time correlation function in the second-order approximation with respect to a system-reservoir interaction. The derived formula explicitly shows that the quantum regression theorem for calculating the double-time correlation function cannot be used if a thermal reservoir has a finite correlation time. Furthermore, the formula is applied to a pure dephasing process and a linear dissipative process. The quantum regression theorem and the Leggett-Garg inequality are investigated for an open two-level system. The results are compared with those obtained by exact calculation to examine whether the formula is a good approximation.

  20. Wet, Dry, Dim, or Bright? The Future of Water Resources in North Texas

    NASA Astrophysics Data System (ADS)

    Brikowski, T. H.

    2009-12-01

    Future water resource availability in North Texas (Dallas-Ft. Worth Metroplex) is likely to be limited by the combined impact of decadal-scale and longer term climate changes. Two decadal precipitation anomalies are statistically distinguishable in the historical record (dry/wet, Table 1). These correspond temporally with the onset of global dimming/brightening events (hydrologic cycle retardation/acceleration), respectively (Table 1). Surface water hydrologic parameters are variably correlated with these events, depending on the degree of time-integration of each process. Precipitation correlates most strongly with the decadal anomalies. Runoff changes during these periods were magnified relative to precipitation changes, presumably an effect of soil moisture changes, and over the basin as a whole correlate best with the global events. The Palmer Drought Severity Index (PDSI) attempts to capture such effects, and also correlates most strongly with the global events. The most important time-integrators of the system, reservoirs, show mixed correlation in terms of total storage with the decadal and longer term climate periods. Reservoir flood releases (excess storage) correlate with decadal precipitation anomalies, in part reflecting short-term consumption influences. Major reservoirs in the area post-date the dry period, precluding direct evaluation of sustainability from historical records. Historical correlations versus PDSI can be combined with climate-model based PDSI projections to evaluate future sustainability. Climate projections based on a mean of 19 IPCC intermediate scenario (SRESa1b) models indicate an approximately 10% reduction in mean annual precipitation, and warming of 2 °C by 2050 in this region. Steady lowering of mean annual PDSI results, with a 50% probability that annual PDSI will average -0.5 by 2050. Average climate will move from humid (Aridity Index = 35) to semi-humid (AI = 27), and runoff can be expected to decline accordingly. The probability of a continuous two-year drought, historically sufficient to trigger Stage 3 drought restrictions, more than doubles to 15%/yr by 2050. Based on a least-squares fit of historical PDSI and streamflow, median predicted watershed runoff declines by 23%. This reduction brings projected reservoir input to approximately the same value as current annual consumption from those reservoirs. These projected reservoir inflow changes would limit water supply sustainability in North Texas. Inflow declines are similar whether caused by recurrence of observed decadal precipitation variations or long term climate change. The magnitude of these declines (20%) is similar to projected shortfalls based only on population growth by 2050. Evidently both a serious conservation program and currently planned water importation projects will be required to maintain water supply in North Texas. Table 1 (not reproduced): departures from the mean and probability that the change is random for the indicated climate periods.

  1. The VIMOS Public Extragalactic Redshift Survey (VIPERS) . Luminosity and stellar mass dependence of galaxy clustering at 0.5 < z < 1.1

    NASA Astrophysics Data System (ADS)

    Marulli, F.; Bolzonella, M.; Branchini, E.; Davidzon, I.; de la Torre, S.; Granett, B. R.; Guzzo, L.; Iovino, A.; Moscardini, L.; Pollo, A.; Abbas, U.; Adami, C.; Arnouts, S.; Bel, J.; Bottini, D.; Cappi, A.; Coupon, J.; Cucciati, O.; De Lucia, G.; Fritz, A.; Franzetti, P.; Fumana, M.; Garilli, B.; Ilbert, O.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; McCracken, H. J.; Paioro, L.; Polletta, M.; Schlagenhaufer, H.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Burden, A.; Di Porto, C.; Marchetti, A.; Marinoni, C.; Mellier, Y.; Nichol, R. C.; Peacock, J. A.; Percival, W. J.; Phleps, S.; Wolk, M.; Zamorani, G.

    2013-09-01

    Aims: We investigate the dependence of galaxy clustering on luminosity and stellar mass in the redshift range 0.5 < z < 1.1, using the first ~55 000 redshifts from the VIMOS Public Extragalactic Redshift Survey (VIPERS). Methods: We measured the redshift-space two-point correlation functions (2PCF), ξ(s) and ξ(rp,π), and the projected correlation function, wp(rp), in samples covering different ranges of B-band absolute magnitudes and stellar masses. We considered both threshold and binned galaxy samples, with median B-band absolute magnitudes -21.6 ≲ MB - 5 log(h) ≲ -19.5 and median stellar masses 9.8 ≲ log(M⋆ [h^-2 M⊙]) ≲ 10.7. We assessed the real-space clustering in the data from the projected correlation function, which we model as a power law in the range 0.2 < rp [h^-1 Mpc] < 20. Finally, we estimated the galaxy bias as a function of luminosity, stellar mass, and redshift, assuming a flat Λ cold dark matter model to derive the dark matter 2PCF. Results: We provide the best-fit parameters of the power-law model assumed for the real-space 2PCF - the correlation length, r0, and the slope, γ - as well as the linear bias parameter, as a function of the B-band absolute magnitude, stellar mass, and redshift. We confirm and provide the tightest constraints on the dependence of clustering on luminosity at 0.5 < z < 1.1. We prove the complexity of comparing the clustering dependence on stellar mass from samples that are originally flux-limited and discuss the possible origin of the observed discrepancies. Overall, our measurements provide stronger constraints on galaxy formation models, which are now required to match, in addition to local observations, the clustering evolution measured by VIPERS galaxies between z = 0.5 and z = 1.1 for a broad range of luminosities and stellar masses. Based on observations collected at the European Southern Observatory, Paranal, Chile, under programmes 182.A-0886 (LP) at the Very Large Telescope, and also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://vipers.inaf.it/
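
    For the power-law modelling step, a power-law real-space correlation function ξ(r) = (r/r0)^(-γ) gives the closed form wp(rp) = rp (r0/rp)^γ Γ(1/2) Γ((γ-1)/2) / Γ(γ/2), so r0 and γ can be fitted directly to wp(rp). The sketch below does this on invented wp values; they are not VIPERS measurements.

```python
# Fit r0 and gamma of a power-law xi(r) to a projected correlation function wp(rp).
import numpy as np
from scipy.special import gamma as Gamma
from scipy.optimize import curve_fit

def wp_model(rp, r0, slope):
    """Projected 2PCF of a power-law xi(r) = (r/r0)^(-slope)."""
    prefac = Gamma(0.5) * Gamma((slope - 1.0) / 2.0) / Gamma(slope / 2.0)
    return rp * (r0 / rp) ** slope * prefac

rng = np.random.default_rng(2)
rp = np.logspace(np.log10(0.2), np.log10(20.0), 10)                 # [h^-1 Mpc]
wp_meas = wp_model(rp, 5.0, 1.8) * rng.normal(1.0, 0.05, rp.size)   # toy "measurement"

(r0_fit, gamma_fit), _ = curve_fit(wp_model, rp, wp_meas, p0=(5.0, 1.8))
print(r0_fit, gamma_fit)   # correlation length r0 [h^-1 Mpc] and slope gamma
```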

  2. Systematic approach to cutoff frequency selection in continuous-wave electron paramagnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Hirata, Hiroshi; Itoh, Toshiharu; Hosokawa, Kouichi; Deng, Yuanmu; Susaki, Hitoshi

    2005-08-01

    This article describes a systematic method for determining the cutoff frequency of the low-pass window function that is used for deconvolution in two-dimensional continuous-wave electron paramagnetic resonance (EPR) imaging. An evaluation function for the criterion used to select the cutoff frequency is proposed, and is the product of the effective width of the point spread function for a localized point signal and the noise amplitude of a resultant EPR image. The present method was applied to EPR imaging for a phantom, and the result of cutoff frequency selection was compared with that based on a previously reported method for the same projection data set. The evaluation function has a global minimum point that gives the appropriate cutoff frequency. Images with reasonably good resolution and noise suppression can be obtained from projections with an automatically selected cutoff frequency based on the present method.
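
    The selection rule can be sketched numerically: for each candidate cutoff, deconvolve a point-like test signal, measure the effective width of the recovered point spread function and the noise amplitude, and keep the cutoff that minimises their product. The one-dimensional toy below is only illustrative; the broadening kernel, window shape, and noise level are assumptions, not the EPR imaging settings.

```python
# Toy 1-D illustration of choosing a cutoff by minimising (PSF width) x (noise amplitude).
import numpy as np

n = 512
x = np.arange(n)
freq = np.fft.rfftfreq(n, d=1.0)                      # cycles per sample

sigma = 3.0
blur = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)     # broadening (line-shape) kernel
blur /= blur.sum()
H = np.fft.rfft(np.fft.ifftshift(blur))               # its transfer function

rng = np.random.default_rng(3)
point = np.zeros(n); point[n // 2] = 1.0              # localized point signal
meas = np.convolve(point, blur, mode="same") + rng.normal(0, 1e-3, n)
noise_only = rng.normal(0, 1e-3, n)                   # a pure-noise acquisition

def deconvolve(signal, cutoff):
    window = (freq <= cutoff).astype(float)           # ideal low-pass window
    Hsafe = np.where(np.abs(H) > 1e-12, H, 1.0)
    return np.fft.irfft(np.fft.rfft(signal) * window / Hsafe, n)

def effective_width(psf):
    p = np.abs(psf); p = p / p.sum()
    mu = np.sum(p * x)
    return np.sqrt(np.sum(p * (x - mu) ** 2))

def evaluation(cutoff):
    return effective_width(deconvolve(meas, cutoff)) * np.std(deconvolve(noise_only, cutoff))

cutoffs = np.linspace(0.02, 0.25, 24)
print(min(cutoffs, key=evaluation))                    # cutoff minimising width * noise
```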

  3. Holographic non-Fermi-liquid fixed points.

    PubMed

    Faulkner, Tom; Iqbal, Nabil; Liu, Hong; McGreevy, John; Vegh, David

    2011-04-28

    Techniques arising from string theory can be used to study assemblies of strongly interacting fermions. Via this 'holographic duality', various strongly coupled many-body systems are solved using an auxiliary theory of gravity. Simple holographic realizations of finite density exhibit single-particle spectral functions with sharp Fermi surfaces, of a form distinct from those of the Landau theory. The self-energy is given by a correlation function in an infrared (IR) fixed-point theory that is represented by a two-dimensional anti de Sitter space (AdS(2)) region in the dual gravitational description. Here, we describe in detail the gravity calculation of this IR correlation function.

  4. Physically motivated global alignment method for electron tomography

    DOE PAGES

    Sanders, Toby; Prange, Micah; Akatay, Cem; ...

    2015-04-08

    Electron tomography is widely used for nanoscale determination of 3-D structures in many areas of science. Determining the 3-D structure of a sample from electron tomography involves three major steps: acquisition of a sequence of 2-D projection images of the sample with the electron microscope, alignment of the images to a common coordinate system, and 3-D reconstruction and segmentation of the sample from the aligned image data. The resolution of the 3-D reconstruction is directly influenced by the accuracy of the alignment, and therefore, it is crucial to have a robust and dependable alignment method. In this paper, we develop a new alignment method which avoids the use of markers and instead traces the computed paths of many identifiable ‘local’ center-of-mass points as the sample is rotated. Compared with traditional correlation schemes, the alignment method presented here is resistant to the cumulative error observed from correlation techniques, has very rigorous mathematical justification, and is very robust since many points and paths are used, all of which inevitably improves the quality of the reconstruction and confidence in the scientific results.

  5. Experimental correlations for transient soot measurement in diesel exhaust aerosol with light extinction, electrical mobility and diffusion charger sensor techniques

    NASA Astrophysics Data System (ADS)

    Bermúdez, Vicente; Pastor, José V.; López, J. Javier; Campos, Daniel

    2014-06-01

    A study of soot measurement deviation using a diffusion charger sensor with three dilution ratios was conducted in order to obtain an optimum setting that can be used to obtain accurate measurements in terms of soot mass emitted by a light-duty diesel engine under transient operating conditions. The paper includes three experimental phases: an experimental validation of the measurement settings in steady-state operating conditions; evaluation of the proposed setting under the New European Driving Cycle; and a study of correlations for different measurement techniques. These correlations provide a reliable tool for estimating soot emission from light extinction measurement or from accumulation particle mode concentration. There are several methods and correlations to estimate soot concentration in the literature, but most of them were assessed for steady-state operating points. In this case, the correlations are obtained from more than 4000 points measured in transient conditions. The results of the two new correlations, with less than 4% deviation from the reference measurement, are presented in this paper.

  6. An experimental study on the noise correlation properties of CBCT projection data

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ouyang, Luo; Ma, Jianhua; Huang, Jing; Chen, Wufan; Wang, Jing

    2014-03-01

    In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second- order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation results in a lower noise level as compared to the PWLS criterion without considering the noise correlation at the matched resolution.
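
    A minimal sketch of estimating such bin-to-bin noise correlation from repeated acquisitions is given below; the synthetic data deliberately couple neighbouring bins so that the first-neighbour coefficient comes out near the reported order of magnitude, and all array sizes are placeholders.

```python
# Estimate noise correlation between neighbouring detector bins from repeated readings.
import numpy as np

rng = np.random.default_rng(4)
n_repeats, n_bins = 500, 256
white = rng.normal(0.0, 1.0, size=(n_repeats, n_bins + 1))
proj = 1000.0 + white[:, :-1] + 0.25 * white[:, 1:]    # shared term couples neighbouring bins

def neighbour_correlation(data, lag):
    """Average Pearson correlation between detector bins separated by `lag`."""
    resid = data - data.mean(axis=0)                    # remove the mean projection
    a, b = resid[:, :-lag], resid[:, lag:]
    r = np.sum(a * b, axis=0) / np.sqrt(np.sum(a**2, axis=0) * np.sum(b**2, axis=0))
    return r.mean()

print(neighbour_correlation(proj, 1))   # ~0.24 by construction (first neighbours)
print(neighbour_correlation(proj, 2))   # ~0 (no coupling injected beyond lag 1)
```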

  7. Correlation singularities in a partially coherent electromagnetic beam with initially radial polarization.

    PubMed

    Zhang, Yongtao; Cui, Yan; Wang, Fei; Cai, Yangjian

    2015-05-04

    We have investigated the correlation singularities, coherence vortices of two-point correlation function in a partially coherent vector beam with initially radial polarization, i.e., partially coherent radially polarized (PCRP) beam. It is found that these singularities generally occur during free space propagation. Analytical formulae for characterizing the dynamics of the correlation singularities on propagation are derived. The influence of the spatial coherence length of the beam on the evolution properties of the correlation singularities and the conditions for creation and annihilation of the correlation singularities during propagation have been studied in detail based on the derived formulae. Some interesting results are illustrated. These correlation singularities have implication for interference experiments with a PCRP beam.

  8. Figures of Merit for Aeronautics Programs and Addition to NASA LARC Fire Station

    NASA Technical Reports Server (NTRS)

    Harper, Belinda M.

    1995-01-01

    This report accounts details of two research projects for the Langley Aerospace Research Summer Scholars (LARSS) program. The first project, with the Office of Mission Assurance, involved subjectively predicting the probable success of two aeronautics programs by means of a tool called a Figure of Merit. The figure of merit bases program success on the quality and reliability of the following factors: parts, complexity of research, quality programs, hazards elimination, and single point failures elimination. The second project, for the Office of Safety and Facilities Assurance, required planning, layouts, and source seeking for an addition to the fire house. Forecasted changes in facility layout necessitate this addition which will serve as housing for the fire fighters.

  9. Why Comparing? Some Insights from a Comparison of Publications in Educational Journals in England and Germany

    ERIC Educational Resources Information Center

    Ertl, Hubert; Zierer, Klaus

    2017-01-01

    This paper discusses rationales for comparative work in education and draws on two projects on analysing publications in educational journals internationally. It uses the cases of Germany and England to illustrate the points made. The paper outlines some of the major developments in education in these two countries and identifies their…

  10. Comparison of forward- and back-projection in vivo EPID dosimetry for VMAT treatment of the prostate

    NASA Astrophysics Data System (ADS)

    Bedford, James L.; Hanson, Ian M.; Hansen, Vibeke N.

    2018-01-01

    In the forward-projection method of portal dosimetry for volumetric modulated arc therapy (VMAT), the integrated signal at the electronic portal imaging device (EPID) is predicted at the time of treatment planning, against which the measured integrated image is compared. In the back-projection method, the measured signal at each gantry angle is back-projected through the patient CT scan to give a measure of total dose to the patient. This study aims to investigate the practical agreement between the two types of EPID dosimetry for prostate radiotherapy. The AutoBeam treatment planning system produced VMAT plans together with corresponding predicted portal images, and a total of 46 sets of gantry-resolved portal images were acquired in 13 patients using an iViewGT portal imager. For the forward-projection method, each acquisition of gantry-resolved images was combined into a single integrated image and compared with the predicted image. For the back-projection method, iViewDose was used to calculate the dose distribution in the patient for comparison with the planned dose. A gamma index for 3% and 3 mm was used for both methods. The results were investigated by delivering the same plans to a phantom and repeating some of the deliveries with deliberately introduced errors. The strongest agreement between forward- and back-projection methods is seen in the isocentric intensity/dose difference, with moderate agreement in the mean gamma. The strongest correlation is observed within a given patient, with less correlation between patients, the latter representing the accuracy of prediction of the two methods. The error study shows that each of the two methods has its own distinct sensitivity to errors, but that overall the response is similar. The forward- and back-projection EPID dosimetry methods show moderate agreement in this series of prostate VMAT patients, indicating that both methods can contribute to the verification of dose delivered to the patient.
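
    Both methods rely on a gamma comparison; as a toy illustration of the 3% / 3 mm criterion, the sketch below computes a one-dimensional gamma index between two synthetic dose profiles (global normalisation assumed; this is not the iViewDose or AutoBeam implementation).

```python
# Toy 1-D gamma-index comparison of a reference and an evaluated dose profile.
import numpy as np

def gamma_index(x, d_ref, d_eval, dose_crit=0.03, dist_crit=3.0):
    """Per-point gamma for 1-D profiles sampled at positions x (mm)."""
    d_norm = dose_crit * d_ref.max()                 # global 3% normalisation
    gam = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dist2 = ((x - xi) / dist_crit) ** 2
        dose2 = ((d_eval - di) / d_norm) ** 2
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.linspace(-50, 50, 201)                           # positions [mm]
d_ref = np.exp(-0.5 * (x / 20.0) ** 2)                  # "planned" profile
d_eval = 1.02 * np.exp(-0.5 * ((x - 1.0) / 20.0) ** 2)  # shifted, scaled "measured" profile
gam = gamma_index(x, d_ref, d_eval)
print(np.mean(gam), np.mean(gam <= 1.0))                # mean gamma and pass rate
```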

  11. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos’ content and approach to visualization

    PubMed Central

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Background Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. Methods In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student, junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed eligible videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman’s correlation. Results We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8–14 key points) were covered; no videos covered all of the 18 points assessed. We found the most difficulties in the approach to visualization of how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman’s ρ=0.53; P=0.029). Conclusion Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner’s view, and give particular emphasis on fundus examination. PMID:27574393

  12. Movement of the projected pedicles relative to the projected vertebral body in a fourth lumbar vertebra during axial rotation.

    PubMed

    Coleman, Roger R; Thomas, I Walker

    2004-01-01

    One use of the anteroposterior lumbar radiograph is to determine axial (y-axis) rotation of the lumbar vertebrae. Rotation might be an element of interest to clinicians seeking to evaluate vertebral positioning. The objective was to correlate and quantify movements of the projected pedicles relative to the projected vertebral body during axial rotation and to determine whether vertebral asymmetry and changes in object-film distance affect these movements. A three-dimensional computer model of the fourth and fifth lumbar vertebrae, a modeled radiograph source, and a modeled film were produced. The vertebral model was placed in various degrees of axial rotation at a number of different object-film distances. Lines from the source were passed through the pedicles of the fourth lumbar vertebral model and additional lines were erected tangent to the lateral body margins. These lines were extended to points of contact with the modeled film. The projected pedicles move relative to the projected vertebral body during y-axis rotation. Vertebral asymmetry and object-film distance can also affect the distance of the projected pedicle relative to the projected lateral body margin. Axial rotation produces movement of the projected pedicles relative to the projected vertebral body. However, vertebral asymmetry and changes in object-film distance also affect the position of the projected pedicles relative to the projected lateral body margin and might serve as confounders to the clinician seeking to analyze vertebral rotation through the use of the projected pedicles.

  13. Investigation of Density Fluctuations in Supersonic Free Jets and Correlation with Generated Noise

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2000-01-01

    The air density fluctuations in the plumes of fully-expanded, unheated free jets were investigated experimentally using a Rayleigh-scattering-based technique. The point measuring technique used a continuous wave laser, fiber-optic transmission, and photon counting electronics. The radial and centerline profiles of time-averaged density and root-mean-square density fluctuation provided a comparative description of jet growth. To measure density fluctuation spectra, a two-photomultiplier-tube technique was used. Cross-correlation between the two PMT signals significantly reduced the electronic shot noise contribution. Turbulent density fluctuations occurring up to a Strouhal number (Sr) of 2.5 were resolved. A remarkable feature of the density spectra, obtained from the same locations of jets in the 0.5 < M < 1.5 range, is a constant Strouhal frequency for peak fluctuations. A detailed survey at Mach numbers M = 0.95, 1.4, and 1.8 showed that, in general, the distribution of various Strouhal frequency fluctuations remained similar for the three jets. In spite of the similarity in the flow fluctuations, the noise characteristics were found to be significantly different. Spark schlieren photographs and near-field microphone measurements confirmed that eddy Mach wave radiation was present in the Mach 1.8 jet and absent in the Mach 0.95 jet. To measure correlation between the flow and the far-field sound pressure fluctuations, a microphone was kept at a distance of 50 diameters, 30 deg. to the flow direction, and the laser probe volume was moved from point to point in the flow. The density fluctuations in the peripheral shear layer of the Mach 1.8 jet showed significant correlation up to the measurement limit of Sr = 2.5, while for the Mach 0.95 jet no correlation was measured. Along the centerline, measurable correlation was found from the end of the potential core and in the low-frequency range (Sr less than 0.5). Usually the normalized correlation values increased with an increase of the jet Mach number. The experimental data point to eddy Mach waves as a strong source of sound generation in supersonic jets and fail to locate the primary noise mechanism in subsonic jets.
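
    The cross-correlation idea can be sketched as follows: the density signal is common to both photomultiplier channels, while the shot noise of each channel is independent, so the cross-spectrum retains the shared signal and averages the uncorrelated noise down. All signal parameters below are invented.

```python
# Cross-spectrum of two detector channels sharing a common signal with independent noise.
import numpy as np
from scipy.signal import csd, welch

fs, T = 50_000.0, 2.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(5)
common = np.sin(2 * np.pi * 500.0 * t) + 0.3 * rng.normal(size=t.size)  # shared "density" signal
pmt1 = common + 2.0 * rng.normal(size=t.size)        # channel 1 with its own shot noise
pmt2 = common + 2.0 * rng.normal(size=t.size)        # channel 2 with independent shot noise

f, Pxx = welch(pmt1, fs=fs, nperseg=4096)             # auto-spectrum: signal + shot noise
f, Pxy = csd(pmt1, pmt2, fs=fs, nperseg=4096)         # cross-spectrum: mostly the shared signal

k = np.argmin(np.abs(f - 5_000.0))                    # a frequency away from the tone
print(Pxx[k].real, np.abs(Pxy[k]))                    # uncorrelated noise floor largely cancels in Pxy
```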

  14. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.

  15. The Distribution of Interplanetary Dust between 0.96 and 1.04 au as Inferred from Impacts on the STEREO Spacecraft Observed by the Heliospheric Imagers

    NASA Technical Reports Server (NTRS)

    Davis, C. J.; Davis, J. A.; Meyer-Vernet, Nicole; Crothers, S.; Lintott, C.; Smith, A.; Bamford, S.; Baeten, E. M. L.; SaintCyr, O. C.; Campbell-Brown, M.; hide

    2012-01-01

    The distribution of dust in the ecliptic plane between 0.96 and 1.04 au has been inferred from impacts on the two Solar Terrestrial Relations Observatory (STEREO) spacecraft through observation of secondary particle trails and unexpected off-points in the heliospheric imager (HI) cameras. This study made use of analysis carried out by members of a distributed web-based citizen science project, Solar Stormwatch. A comparison between observations of the brightest particle trails and a survey of fainter trails shows consistent distributions. While there is no obvious correlation between this distribution and the occurrence of individual meteor streams at Earth, there are some broad longitudinal features in these distributions that are also observed in sources of the sporadic meteor population. The different position of the HI instrument on the two STEREO spacecraft leads to each sampling different populations of dust particles. The asymmetry in the number of trails seen by each spacecraft and the fact that there are many more unexpected off-points in HI-B than in HI-A indicates that the majority of impacts are coming from the apex direction. For impacts causing off-points in the HI-B camera, these dust particles are estimated to have masses in excess of 10^-17 kg with radii exceeding 0.1 µm. For off-points observed in the HI-A images, which can only have been caused by particles travelling from the anti-apex direction, the distribution is consistent with that of secondary 'storm' trails observed by HI-B, providing evidence that these trails also result from impacts with primary particles from an anti-apex source. Investigating the mass distribution for the off-points of both HI-A and HI-B, it is apparent that the differential mass index of particles from the apex direction (causing off-points in HI-B) is consistently above 2. This indicates that the majority of the mass is within the smaller particles of this population. In contrast, the differential mass index of particles from the anti-apex direction (causing off-points in HI-A) is consistently below 2, indicating that the majority of the mass is to be found in the larger particles of this distribution.

  16. Consistency relations for sharp inflationary non-Gaussian features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris

    If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three-point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.

  17. Apparent violation of the sum rule for exchange-correlation charges by generalized gradient approximations.

    PubMed

    Kohut, Sviataslau V; Staroverov, Viktor N

    2013-10-28

    The exchange-correlation potential of Kohn-Sham density-functional theory, v_XC(r), can be thought of as an electrostatic potential produced by the static charge distribution q_XC(r) = -(1/4π)∇²v_XC(r). The total exchange-correlation charge, Q_XC = ∫ q_XC(r) dr, determines the rate of the asymptotic decay of v_XC(r). If Q_XC ≠ 0, the potential falls off as Q_XC/r; if Q_XC = 0, the decay is faster than coulombic. According to this rule, exchange-correlation potentials derived from standard generalized gradient approximations (GGAs) should have Q_XC = 0, but accurate numerical calculations give Q_XC ≠ 0. We resolve this paradox by showing that the charge density q_XC(r) associated with every GGA consists of two types of contributions: a continuous distribution and point charges arising from the singularities of v_XC(r) at each nucleus. Numerical integration of q_XC(r) accounts for the continuous charge but misses the point charges. When the point-charge contributions are included, one obtains the correct Q_XC value. These findings provide an important caveat for attempts to devise asymptotically correct Kohn-Sham potentials by modeling the distribution q_XC(r).
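
    As a numerical aside, the link between the integrated charge and the coefficient of the asymptotic Q/r tail can be illustrated for a smooth, spherically symmetric model potential; the softened -1/r potential below is purely illustrative and is not a GGA potential.

```python
# Numerically integrate q(r) = -(1/4*pi)*Laplacian(v) for a model potential with a -1/r tail.
import numpy as np
from scipy.special import erf

a = 0.5
r = np.linspace(1e-4, 40.0, 200_000)
dr = r[1] - r[0]
v = -erf(r / a) / r                       # smooth model potential with a -1/r asymptotic tail

# Radial Laplacian (1/r^2) d/dr (r^2 dv/dr) by finite differences.
dv = np.gradient(v, r)
lap = np.gradient(r**2 * dv, r) / r**2

q = -lap / (4.0 * np.pi)                  # charge density generating v
Q = np.sum(q * 4.0 * np.pi * r**2) * dr   # total charge of the continuous distribution
print(Q)                                   # close to -1, the coefficient of the -1/r tail
```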

  18. Variations in the correlation between teleconnections and Taiwan's streamflow

    NASA Astrophysics Data System (ADS)

    Chen, Chia-Jeng; Lee, Tsung-Yu

    2017-07-01

    Interannual variations in catchment streamflow represent an integrated response to anomalies in regional moisture transport and atmospheric circulations and are ultimately linked to large-scale climate oscillations. This study conducts correlation analysis to calculate how summertime (July-September, JAS) streamflow data derived at 28 upstream and 13 downstream gauges in Taiwan correlate with 14 teleconnection indices in the current or preceding seasons. We find that the western Pacific (WP) and Pacific-Japan (PJ) patterns, both of which play a critical role in determining cyclonic activity in the western North Pacific basin, exhibit the highest concurrent correlations (most significant r = 0.50) with the JAS flows in Taiwan. Alternatively, the Quasi-Biennial Oscillation (QBO) averaged over the period from the previous October to June of the current year is significantly correlated with the JAS flows (most significant r = -0.66), indicating some forecasting utility. By further examining the correlation results using a 20-year moving window, peculiar temporal variations and possible climate regime shifts (CRSs) can be revealed. A CRS test is employed to identify suspicious and abrupt changes in the correlation. The late 1970s and 1990s are identified as two significant change points. During the intermediate period, Taiwan's streamflow and the PJ index exhibit a marked in-phase relationship (r > 0.8). It is verified that the two shifts are in concordance with the alteration of large-scale circulations in the Pacific basin by investigating the changes in pattern correlation and composite maps before and after the change point. Our results suggest that empirical forecasting techniques should take into account the effect of CRSs on predictor screening.
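
    A minimal sketch of the moving-window correlation analysis is given below, with synthetic annual series standing in for the streamflow and teleconnection-index data and an artificially strengthened epoch to mimic a regime shift.

```python
# 20-year moving-window correlation between a climate index and streamflow (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1950, 2016)
index = rng.normal(size=years.size)                  # stand-in teleconnection index
flow = 0.2 * index + rng.normal(size=years.size)     # stand-in JAS streamflow
flow[30:50] += 0.8 * index[30:50]                    # artificially strengthened epoch

window = 20
r = np.array([np.corrcoef(index[i:i + window], flow[i:i + window])[0, 1]
              for i in range(years.size - window + 1)])
centre = years[:r.size] + window // 2
for y, ri in zip(centre[::5], r[::5]):
    print(int(y), round(float(ri), 2))               # correlation strengthens, then relaxes
```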

  19. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating-point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse exactly would require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication calculation to be used in place of the traditional floating-point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating-point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating-point algorithm.
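
    The two modifications can be illustrated with a very small sketch: fixed-point (integer) arithmetic plus a linear approximation of the reciprocal replacing the division in the perspective projection. The scale factor, approximation interval, and test values below are illustrative choices, not the dissertation's parameters, and the sketch is in Python for brevity rather than for the target integer hardware.

```python
# Fixed-point projection u = x/z rewritten as x * approx(1/z), avoiding the division.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS

def approx_reciprocal(z_fixed, z_min=1.0, z_max=2.0):
    """Linear (chord) approximation of 1/z on [z_min, z_max], in fixed point."""
    # 1/z ~ a - b*z, with a and b chosen so the line matches 1/z at both endpoints.
    b = 1.0 / (z_min * z_max)
    a = 1.0 / z_min + b * z_min
    return to_fixed(a) - fixed_mul(to_fixed(b), z_fixed)

x, z = 0.73, 1.42
u_float = x / z
u_fixed = fixed_mul(to_fixed(x), approx_reciprocal(to_fixed(z))) / ONE
print(u_float, u_fixed)   # division-free result; accuracy is limited by the linear reciprocal
```

    The quadratic variant mentioned in the abstract would replace the chord with a second-order fit of 1/z over the same interval, trading one extra multiplication for a smaller approximation error.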

  20. Why some employees adopt or resist reorganization of work practices in health care: associations between perceived loss of resources, burnout, and attitudes to change.

    PubMed

    Dubois, Carl-Ardy; Bentein, Kathleen; Mansour, Jamal Ben; Gilbert, Frédéric; Bédard, Jean-Luc

    2013-12-20

    In recent years, successive work reorganization initiatives have been implemented in many healthcare settings. The failure of many of these change efforts has often been attributed in the prominent management discourse to change resistance. Few studies have paid attention to the temporal process of workers' resource depletion/accumulation over time and its links with workers' psychological states and reactions to change. Drawing upon the conservation of resources theory, this study examines associations between workers' perceptions of loss of resources, burnout, and attitudes to change. The study was conducted in five health and social service centres in Quebec, in units where a work reorganization project was initiated. A prospective longitudinal design was used to assess workers' perceptions at two time points 12 months apart. Our findings are consistent with the conservation of resources theory. The analysis of latent differences scores between times 1 and 2 showed that the perceived loss of resources was associated with emotional exhaustion, which, in turn, was negatively correlated with commitment to change and positively correlated with cynicism. In confirming the temporal relationship between perceived loss of resources, occupational burnout, and attitude to change, this research offers a new perspective to explain negative and positive reactions to change implementation.

  1. Why Some Employees Adopt or Resist Reorganization of Work Practices in Health Care: Associations between Perceived Loss of Resources, Burnout, and Attitudes to Change

    PubMed Central

    Dubois, Carl-Ardy; Bentein, Kathleen; Ben Mansour, Jamal; Gilbert, Frédéric; Bédard, Jean-Luc

    2013-01-01

    In recent years, successive work reorganization initiatives have been implemented in many healthcare settings. The failure of many of these change efforts has often been attributed in the prominent management discourse to change resistance. Few studies have paid attention to the temporal process of workers’ resource depletion/accumulation over time and its links with workers’ psychological states and reactions to change. Drawing upon the conservation of resources theory, this study examines associations between workers’ perceptions of loss of resources, burnout, and attitudes to change. The study was conducted in five health and social service centres in Quebec, in units where a work reorganization project was initiated. A prospective longitudinal design was used to assess workers’ perceptions at two time points 12 months apart. Our findings are consistent with the conservation of resources theory. The analysis of latent differences scores between times 1 and 2 showed that the perceived loss of resources was associated with emotional exhaustion, which, in turn, was negatively correlated with commitment to change and positively correlated with cynicism. In confirming the temporal relationship between perceived loss of resources, occupational burnout, and attitude to change, this research offers a new perspective to explain negative and positive reactions to change implementation. PMID:24362547

  2. Circumventing Imprecise Geometric Information and Development of a Unified Modeling Technique for Various Flow Regimes in Capillary Tubes

    NASA Astrophysics Data System (ADS)

    Abbasi, Bahman

    2012-11-01

    Owing to their manufacturability and reliability, capillary tubes are the most common expansion devices in household refrigerators. Therefore, investigating flow properties in capillary tubes is of immense interest to that industry. The models to predict pressure drop in two-phase internal flows invariably rely upon highly precise geometric information. The manner in which capillary tubes are manufactured makes them highly susceptible to geometric imprecisions, which renders geometry-based models unreliable to the point of obsoleteness. Aware of the issue, manufacturers categorize capillary tubes based on the Nitrogen flow rate through them. This categorization method presents an opportunity to substitute geometric details with Nitrogen flow data as the basis for customized models. The simulation tools developed by implementation of this technique have the singular advantage of being applicable across flow regimes. Thus the error-prone process of identifying compatible correlations is eliminated. Equally importantly, compressibility and choking effects can be incorporated in the same model. The outcome is a standalone correlation that provides accurate predictions, regardless of any particular fluid or flow regime. Thereby, exploratory investigations for capillary tube design and optimization are greatly simplified. Bahman Abbasi, Ph.D., is Lead Advanced Systems Engineer at General Electric Appliances in Louisville, KY. He conducts research projects across disciplines in the household refrigeration industry.

  3. Quantum coherence of planar spin models with Dzyaloshinsky-Moriya interaction

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Chandrashekar; Ermakov, Igor; Byrnes, Tim

    2017-07-01

    The quantum coherence of one-dimensional planar spin models with Dzyaloshinsky-Moriya interaction is investigated. The anisotropic XY model, the isotropic XX model, and the transverse field model are studied in the large N limit using two-qubit reduced density matrices and two-point correlation functions. From our investigations we find that the coherence as measured using the Jensen-Shannon divergence can be used to detect quantum phase transitions and quantum critical points. The derivative of coherence shows nonanalytic behavior at critical points, leading to the conclusion that these transitions are of second order. Further, we show that the presence of Dzyaloshinsky-Moriya coupling suppresses the phase transition due to residual ferromagnetism, which is caused by spin canting.

  4. "Keep your eyes on the prize": reference points and racial differences in assessing progress toward equality.

    PubMed

    Eibach, Richard P; Ehrlinger, Joyce

    2006-01-01

    White Americans tend to perceive greater progress toward racial equality than do ethnic minorities. Correlational evidence (Study 1) and two experimental manipulations of framing (Studies 2 and 3) supported the hypothesis that this perception gap is associated with different reference points the two groups spontaneously use to assess progress, with Whites anchoring on comparisons with the past and ethnic minorities anchoring on ideal standards. Consistent with the hypothesis that the groups anchor on different reference points, the gap in perceptions of progress was affected by the time participants spent deliberating about the topic (Study 4). Implications for survey methods and political conflict are discussed.

  5. The "Learning for Leadership" Project: Education That Makes a Difference. Final Evaluation. A Project Involving Middle Schools in the Upper Arlington, Ohio and Worthington, Ohio School Districts.

    ERIC Educational Resources Information Center

    Bradley, L. Richard

    Recent national studies have pointed out the changing educational needs of young people as the United States moves from an industrial society to an information society. Selected middle school students in Ohio were involved in a two-year federally-funded program entitled "Learning for Leadership." The objectives of the program were: (1)…

  6. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds with close-range images is central to high-precision 3D reconstruction of cultural relics. Given the current requirement for high texture resolution in this field, registering point cloud and image data for object reconstruction leads to the problem of matching one point cloud to multiple images. In current commercial software, the pairwise registration of the two kinds of data is realized by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points in the image and the point cloud; this process not only greatly reduces working efficiency, but also affects the precision of the registration and causes seams in the colored point cloud texture. In order to solve these problems, this paper takes an image of the whole object as intermediate data and uses matching technology to realize an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the central-projection reflectance-intensity image of the point cloud and the optical image is applied to automatically match corresponding feature points, and the Rodrigues-matrix spatial similarity transformation model with weight selection iteration is used to realize automatic, high-accuracy registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and it has scientific research value and practical significance.

  7. Modeling clustered activity increase in amyloid-beta positron emission tomographic images with statistical descriptors.

    PubMed

    Shokouhi, Sepideh; Rogers, Baxter P; Kang, Hakmook; Ding, Zhaohua; Claassen, Daniel O; Mckay, John W; Riddle, William R

    2015-01-01

    Amyloid-beta (Aβ) imaging with positron emission tomography (PET) holds promise for detecting the presence of Aβ plaques in the cortical gray matter. Many image analyses focus on regional average measurements of tracer activity distribution; however, considerable additional information is available in the images. Metrics that describe the statistical properties of images, such as the two-point correlation function (S2), have found wide applications in astronomy and materials science. S2 provides a detailed characterization of spatial patterns in images, typically referred to as clustering or flocculence. The objective of this study was to translate the two-point correlation method into Aβ-PET of the human brain using 11C-Pittsburgh compound B (11C-PiB) to characterize longitudinal changes in the tracer distribution that may reflect changes in Aβ plaque accumulation. We modified the conventional S2 metric, which is primarily used for binary images, and formulated a weighted two-point correlation function (wS2) to describe nonbinary, real-valued PET images with a single statistical function. Using serial 11C-PiB scans, we calculated wS2 functions from two-dimensional PET images of different cortical regions as well as three-dimensional data from the whole brain. The area under the wS2 functions was calculated and compared with the mean/median of the standardized uptake value ratio (SUVR). For three-dimensional data, we compared the area under the wS2 curves with the subjects' cerebrospinal fluid measures. Overall, the longitudinal changes in wS2 correlated with the increase in mean SUVR but showed lower variance. The whole brain results showed a higher inverse correlation between the cerebrospinal Aβ and wS2 than between the cerebrospinal Aβ and SUVR mean/median. We did not observe any confounding of wS2 by region size or injected dose. The wS2 detects subtle changes and provides additional information about the binding characteristics of radiotracers and Aβ accumulation that are difficult to verify with mean SUVR alone.
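
    As a rough illustration of the idea (not the paper's exact wS2 definition, which this abstract does not spell out), the sketch below computes a simple weighted two-point correlation curve for a real-valued 2-D image and the area under that curve.

    import numpy as np

    def weighted_two_point_correlation(img, max_r):
        """Mean product of intensities at pixel pairs separated by r along the
        image axes, normalized by the mean squared intensity. A generic sketch;
        the published wS2 definition may weight pairs differently."""
        img = img.astype(float)
        norm = np.mean(img ** 2)
        ws2 = np.empty(max_r + 1)
        for r in range(max_r + 1):
            horiz = img[:, : img.shape[1] - r] * img[:, r:]
            vert = img[: img.shape[0] - r, :] * img[r:, :]
            ws2[r] = 0.5 * (horiz.mean() + vert.mean()) / norm
        return ws2

    rng = np.random.default_rng(1)
    toy_slice = rng.random((64, 64))              # stands in for a 2-D PET slice
    curve = weighted_two_point_correlation(toy_slice, max_r=10)
    area = np.trapz(curve)                        # area under the wS2 curve
    print(curve[:3], area)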

  8. Spectrum of classes of point emitters of electromagnetic wave fields.

    PubMed

    Castañeda, Román

    2016-09-01

    The spectrum of classes of point emitters has been introduced as a numerical tool suitable for the design, analysis, and synthesis of non-paraxial optical fields in arbitrary states of spatial coherence. In this paper, the polarization state of planar electromagnetic wave fields is included in the spectrum of classes, thus increasing its modeling capabilities. In this context, optical processing is realized as a filtering on the spectrum of classes of point emitters, performed by the complex degree of spatial coherence and the two-point correlation of polarization, which could be implemented dynamically by using programmable optical devices.

  9. Low-frequency radio constraints on the synchrotron cosmic web

    NASA Astrophysics Data System (ADS)

    Vernstrom, T.; Gaensler, B. M.; Brown, S.; Lenc, E.; Norris, R. P.

    2017-06-01

    We present a search for the synchrotron emission from the synchrotron cosmic web by cross-correlating 180-MHz radio images from the Murchison Widefield Array with tracers of large-scale structure (LSS). We use two versions of the radio image covering 21.76° × 21.76° with point sources brighter than 0.05 Jy subtracted, with and without filtering of Galactic emission. As tracers of the LSS, we use the Two Micron All-Sky Survey and the Wide-field InfraRed Explorer redshift catalogues to produce galaxy number density maps. The cross-correlation functions all show peak amplitudes at 0°, decreasing with varying slopes towards zero correlation over a range of 1°. The cross-correlation signals include components from point source, Galactic, and extragalactic diffuse emission. We use models of the diffuse emission from smoothing the density maps with Gaussians of sizes 1-4 Mpc to find limits on the cosmic web components. From these models, we find surface brightness 99.7 per cent upper limits in the range of 0.09-2.20 mJy beam-1 (average beam size of 2.6 arcmin), corresponding to 0.01-0.30 mJy arcmin-2. Assuming equipartition between energy densities of cosmic rays and the magnetic field, the flux density limits translate to magnetic field strength limits of 0.03-1.98 μG, depending heavily on the spectral index. We conclude that for a 3σ detection of 0.1 μG magnetic field strengths via cross-correlations, image depths of sub-mJy to sub-μJy are necessary. We include discussion on the treatment and effect of extragalactic point sources and Galactic emission, and next steps for building on this work.
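
    The basic cross-correlation of a radio map with a galaxy number-density map, as a function of angular separation, can be sketched as follows. The map size, pixel scale and mock fields are placeholders rather than the MWA images and 2MASS/WISE density maps used in the study.

    import numpy as np

    npix, pix_deg = 256, 0.085                    # ~21.8 deg across (illustrative)
    rng = np.random.default_rng(2)
    radio = rng.standard_normal((npix, npix))
    density = radio + rng.standard_normal((npix, npix))   # correlated mock tracer

    def cross_correlation_map(a, b):
        """2-D cross-correlation via the convolution theorem, normalized so the
        zero-lag value equals the Pearson correlation of the two maps."""
        a = a - a.mean()
        b = b - b.mean()
        cc = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
        return np.fft.fftshift(cc) / (a.std() * b.std() * a.size)

    cc2d = cross_correlation_map(radio, density)

    # Azimuthal average into bins of angular separation
    y, x = np.indices(cc2d.shape)
    sep = np.hypot(x - npix // 2, y - npix // 2) * pix_deg
    edges = np.arange(0.0, 1.0 + pix_deg, pix_deg)
    profile = [cc2d[(sep >= lo) & (sep < hi)].mean()
               for lo, hi in zip(edges[:-1], edges[1:])]
    print(np.round(profile[:5], 3))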

  10. Reality check in the project management of EU funding

    NASA Astrophysics Data System (ADS)

    Guo, Chenbo

    2015-04-01

    A talk addressing the workload, focuses, impacts and outcomes of project management (hereinafter PM). Two FP7 projects serve as objects for investigation. In the Earth Science sector, NACLIM is a large-scale collaborative project with 18 partners from North and West Europe. NACLIM aims at investigating and quantifying the predictability of the North Atlantic/Arctic sea surface temperature, sea ice variability and change on seasonal to decadal time scales, which have a crucial impact on weather and climate in Europe. PRIMO, from Political Science, is a global PhD program funded by the Marie Curie ITN instrument with 11 partners from Europe, Eurasia and BRICS countries, focusing on the rise of regional powers and its impact on international politics at large. Although the two projects are granted by different FP7 funding instruments, stem from different cultural backgrounds and have different goals, the inherent processes and the key focus of the PM are quite alike. Only the operational management differs somewhat between the two. From the administrative point of view, understanding of both EU requirements and the country-specific regulations is essential; it also helps us identify the grey areas in order to carry out the projects more efficiently. The talk will focus on our observation of the day-to-day PM flows - primarily the project implementation - with a few particular cases: transparency issues, e.g. priority settings of non-research stakeholders including conflicts in the human resources field, end-user integration, gender issues raised during a monitoring visit, and ethical aspects in field research. Through a brief comparison of both projects we summarize a range of dos and don'ts, an "acting instead of reacting" line of action, and the conclusion that systematic overall management is preferable to project controlling alone. In a nutshell, the talk aims at providing the audience a summary of the observations on management methodologies and toolkits applied in both projects, and our best practices and lessons learnt in coordinating large international consortia.

  11. SU-C-BRD-04: Comparison of Shallow Fluence to Deep Point Dose Measurements for Spine VMAT SBRT Patient-Specific QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, J; Held, M; Morin, O

    2015-06-15

    Purpose: To investigate the sensitivity of traditional gamma-index-based fluence measurements for patient-specific measurements in VMAT delivered spine SBRT. Methods: The ten most recent cases for spine SBRT were selected. All cases were planned with Eclipse RapidArc for a TrueBeam STx. The delivery was verified using a point dose measurement with a Pinpoint 3D micro-ion chamber in a Standard Imaging Stereotactic Dose Verification Phantom. Two points were selected for each case, one within the target in a low dose-gradient region and one in the spinal cord. Measurements were localized using on-board CBCT. Cumulative and separate arc measurements were acquired with the ArcCheck and assessed using the SNC Patient software with a 3%/3mm and 2%/2mm gamma analysis with global normalization and a 10% dose threshold. Correlations between data were determined using the Pearson Product-Moment Correlation. Results: For our cohort of patients, the measured doses were higher than calculated, ranging from 2.2%–9.7% for the target and 1.0%–8.2% for the spinal cord. There was strong correlation between 3%/3mm and 2%/2mm passing rates (r=0.91). Moderate correlation was found between target and cord dose with a weak fit (r=0.67, R-Square=0.45). The cumulative ArcCheck measurements showed poor correlation with the measured point doses for both the target and cord (r=0.20, r=0.35). If the arcs are assessed separately with an acceptance criterion applied to the minimum passing rate between all arcs, a moderate negative correlation was found for the target and cord (r=−0.48, r=−0.71). The case with the highest dose difference (9.7%) received a passing rate of 97.2% for the cumulative arcs and 87.8% for the minimum with separate arcs. Conclusion: Our data suggest that traditional passing criteria using ArcCheck with cumulative measurements do not correlate well with dose errors. Separate arc analysis shows better correlation but may still miss large dose errors. Point dose verifications are recommended.

  12. Buhne Point Shoreline Erosion Demonstration Project. Volume 1. Appendices A-D.

    DTIC Science & Technology

    1987-08-01

    for repairing damage to highways and preventing damage to highways resulting from shoreline erosion." A four-year, four-phase program was implemented...program included experimental collecting and growing of 20 different native and naturalized species for a two-year period, and then extensive...King Salmon forming a bay side boundary between the shoal area and King Salmon. Over the past decade, Buhne Spit shoal has eroded to the point where

  13. Disruption of the lower food web in Lake Ontario: Did it affect alewife growth or condition?

    USGS Publications Warehouse

    O'Gorman, R.; Prindle, S.E.; Lantry, J.R.; Lantry, B.F.

    2008-01-01

    From the early 1980s to the late 1990s, a succession of non-native invertebrates colonized Lake Ontario and the suite of consequences caused by their colonization became known as "food web disruption". For example, the native burrowing amphipod Diporeia spp., a key link in the profundal food web, declined to near absence, exotic predaceous cladocerans with long spines proliferated, altering the zooplankton community, and depth distributions of fishes shifted. These changes had the potential to affect growth and condition of planktivorous alewife Alosa pseudoharengus, the most abundant fish in the lake. To determine if food web disruption affected alewife, we used change-point analysis to examine alewife growth and adult alewife condition during 1976-2006 and analysis-of-variance to determine if values between change points differed significantly. There were no change points in growth during the first year of life. Of three change points in growth during the second year of life, one coincided with the shift in springtime distribution of alewife to deeper water but it was not associated with a significant change in growth. After the second year of life, no change points in growth were evident, although growth in the third year of life spiked in those years when Bythotrephes, the largest of the exotic cladocerans, was abundant, suggesting that it was a profitable prey item for age-2 fish. We detected two change points in condition of adult alewife in fall, but the first occurred in 1981, well before disruption began. A second change point occurred in 2003, well after disruption began. After the springtime distribution of alewife shifted deeper during 1992-1994, growth in the first two years of life became more variable, and growth in years of life two and older became correlated (P < 0.05). In conclusion, food web disruption had no negative effect on growth and condition of alewife in Lake Ontario although it appears to have resulted in growth in the first two years of life becoming more variable, growth in years of life two and older becoming correlated (P < 0.05), and growth spurts in year of life three. Copyright © 2008 AEHMS.

  14. 77 FR 47628 - Archon Energy 1, Inc.; Notice of Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ...), proposing to study the feasibility of the DaGuerre Point Dam Hydropower Project (DaGuerre Point Dam Project or project) to be located at the U.S. Army Corps of Engineers' (USACE) DaGuerre Point Dam, on the...

  15. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data is saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, and at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
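
    A hedged sketch of the matching step follows: locate a reference pattern in a frame with OpenCV template matching and read the best alignment off the normalized correlation surface. The synthetic images, blur kernel and matching method are placeholder assumptions, not the DTS implementation (which also applies the edge-enhancement and contour steps described above).

    import cv2
    import numpy as np

    # Synthetic stand-ins: a "frame" containing a bright block and a smaller
    # "template" of that block. In the DTS context these would be the
    # fluoroscopic frame and a reference projection of the patient graphic.
    frame = np.full((480, 640), 20, np.uint8)
    cv2.rectangle(frame, (300, 200), (380, 300), 255, -1)
    template = np.full((120, 100), 20, np.uint8)
    cv2.rectangle(template, (10, 10), (90, 110), 255, -1)

    frame = cv2.GaussianBlur(frame, (5, 5), 0)
    template = cv2.GaussianBlur(template, (5, 5), 0)

    # Normalized correlation surface; each location holds a match confidence
    surface = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(surface)
    print(f"best match at {max_loc} with confidence {max_val:.3f}")   # ~ (290, 190)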

  16. Integration of fringe projection and two-dimensional digital image correlation for three-dimensional displacements measurements

    NASA Astrophysics Data System (ADS)

    Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.

    2016-12-01

    A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied for the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speeds. The potential of the proposed methodology has been employed for the analysis of large displacements during contact experiments in a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology for dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.

  17. Accurate double many-body expansion potential energy surface of HS2(A2A′) by scaling the external correlation

    NASA Astrophysics Data System (ADS)

    Lu-Lu, Zhang; Yu-Zhi, Song; Shou-Bao, Gao; Yuan, Zhang; Qing-Tian, Meng

    2016-05-01

    A globally accurate single-sheeted double many-body expansion potential energy surface is reported for the first excited state of HS2 by fitting the accurate ab initio energies, which are calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set. By using the double many-body expansion-scaled external correlation method, such calculated ab initio energies are then slightly corrected by scaling their dynamical correlation. A grid of 2767 ab initio energies is used in the least-square fitting procedure with the total root-mean square deviation being 1.406 kcal·mol-1. The topographical features of the HS2(A2A′) global potential energy surface are examined in detail. The attributes of the stationary points are presented and compared with the corresponding ab initio results as well as experimental and other theoretical data, showing good agreement. The resulting potential energy surface of HS2(A2A′) can be used as a building block for constructing the global potential energy surfaces of larger S/H molecular systems and recommended for dynamic studies on the title molecular system. Project supported by the National Natural Science Foundation of China (Grant No. 11304185), the Taishan Scholar Project of Shandong Province, China, the Shandong Provincial Natural Science Foundation, China (Grant No. ZR2014AM022), the Shandong Province Higher Educational Science and Technology Program, China (Grant No. J15LJ03), the China Postdoctoral Science Foundation (Grant No. 2014M561957), and the Post-doctoral Innovation Project of Shandong Province, China (Grant No. 201402013).

  18. The Physical Significance of the Synthetic Running Correlation Coefficient and Its Applications in Oceanic and Atmospheric Studies

    NASA Astrophysics Data System (ADS)

    Zhao, Jinping; Cao, Yong; Wang, Xin

    2018-06-01

    In order to study the temporal variations of correlations between two time series, a running correlation coefficient (RCC) could be used. An RCC is calculated for a given time window, and the window is then moved sequentially through time. The current calculation method for RCCs is based on the general definition of the Pearson product-moment correlation coefficient, calculated with the data within the time window, which we call the local running correlation coefficient (LRCC). The LRCC is calculated from the two anomalies relative to the two local means, while the local means themselves also vary. We clarify that the LRCC reflects only the correlation between the two anomalies within the time window but fails to exhibit the contributions of the two varying means. To address this problem, two unchanged means obtained from all available data are adopted to calculate an RCC, which is called the synthetic running correlation coefficient (SRCC). When the anomaly variations are dominant, the two RCCs are similar. However, when the variations of the means are dominant, the difference between the two RCCs becomes obvious. The SRCC reflects the correlations of both the anomaly variations and the variations of the means. Therefore, the SRCCs from different time points are intercomparable. A criterion for the superiority of the RCC algorithm is that the average value of the RCC should be close to the global correlation coefficient calculated using all data. The SRCC always meets this criterion, while the LRCC sometimes fails. Therefore, the SRCC is better than the LRCC for running correlations. We suggest using the SRCC to calculate the RCCs.
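
    The distinction between the two coefficients can be written down directly. The sketch below follows the description above (window anomalies taken about local means for the LRCC, about the full-record means for the SRCC); the exact normalization in the published formula may differ slightly.

    import numpy as np

    def lrcc(x, y, window):
        """Local running correlation: ordinary Pearson r within each window."""
        out = np.full(len(x), np.nan)
        for i in range(window - 1, len(x)):
            xs, ys = x[i - window + 1:i + 1], y[i - window + 1:i + 1]
            out[i] = np.corrcoef(xs, ys)[0, 1]
        return out

    def srcc(x, y, window):
        """Synthetic running correlation: anomalies about the unchanged,
        full-record means (a sketch of the idea described in the abstract)."""
        xa, ya = x - x.mean(), y - y.mean()
        out = np.full(len(x), np.nan)
        for i in range(window - 1, len(x)):
            xs, ys = xa[i - window + 1:i + 1], ya[i - window + 1:i + 1]
            out[i] = np.sum(xs * ys) / np.sqrt(np.sum(xs ** 2) * np.sum(ys ** 2))
        return out

    rng = np.random.default_rng(3)
    t = np.arange(200)
    x = np.sin(0.1 * t) + 0.3 * rng.standard_normal(200)    # shared slow signal
    y = np.sin(0.1 * t) + 0.3 * rng.standard_normal(200)
    print(np.nanmean(lrcc(x, y, 30)), np.nanmean(srcc(x, y, 30)),
          np.corrcoef(x, y)[0, 1])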

  19. Multivariate Meta-Analysis of Genetic Association Studies: A Simulation Study

    PubMed Central

    Neupane, Binod; Beyene, Joseph

    2015-01-01

    In a meta-analysis with multiple end points of interest that are correlated between or within studies, the multivariate approach to meta-analysis has the potential to produce more precise estimates of effects by exploiting the correlation structure between end points. However, under random-effects assumption the multivariate estimation is more complex (as it involves estimation of more parameters simultaneously) than univariate estimation, and sometimes can produce unrealistic parameter estimates. The usefulness of the multivariate approach to meta-analysis of the effects of a genetic variant on two or more correlated traits is not well understood in the area of genetic association studies. In such studies, genetic variants are expected to roughly maintain Hardy-Weinberg equilibrium within studies, and also their effects on complex traits are generally very small to modest and could be heterogeneous across studies for genuine reasons. We carried out extensive simulation to explore the comparative performance of the multivariate approach with the most commonly used univariate inverse-variance weighted approach under random-effects assumption in various realistic meta-analytic scenarios of genetic association studies of correlated end points. We evaluated the performance with respect to relative mean bias percentage, and root mean square error (RMSE) of the estimate and coverage probability of corresponding 95% confidence interval of the effect for each end point. Our simulation results suggest that the multivariate approach performs similarly to or better than the univariate method when correlations between end points within or between studies are at least moderate and between-study variation is similar or larger than average within-study variation for meta-analyses of 10 or more genetic studies. The multivariate approach produces estimates with smaller bias and RMSE especially for the end point that has randomly or informatively missing summary data in some individual studies, when the missing data in the endpoint are imputed with null effects and quite large variance. PMID:26196398

  20. Multivariate Meta-Analysis of Genetic Association Studies: A Simulation Study.

    PubMed

    Neupane, Binod; Beyene, Joseph

    2015-01-01

    In a meta-analysis with multiple end points of interest that are correlated between or within studies, the multivariate approach to meta-analysis has the potential to produce more precise estimates of effects by exploiting the correlation structure between end points. However, under random-effects assumption the multivariate estimation is more complex (as it involves estimation of more parameters simultaneously) than univariate estimation, and sometimes can produce unrealistic parameter estimates. The usefulness of the multivariate approach to meta-analysis of the effects of a genetic variant on two or more correlated traits is not well understood in the area of genetic association studies. In such studies, genetic variants are expected to roughly maintain Hardy-Weinberg equilibrium within studies, and also their effects on complex traits are generally very small to modest and could be heterogeneous across studies for genuine reasons. We carried out extensive simulation to explore the comparative performance of the multivariate approach with the most commonly used univariate inverse-variance weighted approach under random-effects assumption in various realistic meta-analytic scenarios of genetic association studies of correlated end points. We evaluated the performance with respect to relative mean bias percentage, and root mean square error (RMSE) of the estimate and coverage probability of corresponding 95% confidence interval of the effect for each end point. Our simulation results suggest that the multivariate approach performs similarly to or better than the univariate method when correlations between end points within or between studies are at least moderate and between-study variation is similar or larger than average within-study variation for meta-analyses of 10 or more genetic studies. The multivariate approach produces estimates with smaller bias and RMSE especially for the end point that has randomly or informatively missing summary data in some individual studies, when the missing data in the endpoint are imputed with null effects and quite large variance.

  1. Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera.

    PubMed

    Jiang, Yu; Li, Changying; Paterson, Andrew H; Sun, Shangpeng; Xu, Rui; Robertson, Jon

    2017-01-01

    Plant canopy structure can strongly affect crop functions such as yield and stress tolerance, and canopy size is an important aspect of canopy structure. Manual assessment of canopy size is laborious and imprecise, and cannot measure multi-dimensional traits such as projected leaf area and canopy volume. Field-based high throughput phenotyping systems with imaging capabilities can rapidly acquire data about plants in field conditions, making it possible to quantify and monitor plant canopy development. The goal of this study was to develop a 3D imaging approach to quantitatively analyze cotton canopy development in field conditions. A cotton field was planted with 128 plots, including four genotypes of 32 plots each. The field was scanned by GPhenoVision (a customized field-based high throughput phenotyping system) to acquire color and depth images with GPS information in 2016 covering two growth stages: canopy development, and flowering and boll development. A data processing pipeline was developed, consisting of three steps: plot point cloud reconstruction, plant canopy segmentation, and trait extraction. Plot point clouds were reconstructed using color and depth images with GPS information. In colorized point clouds, vegetation was segmented from the background using an excess-green (ExG) color filter, and cotton canopies were further separated from weeds based on height, size, and position information. Static morphological traits were extracted on each day, including univariate traits (maximum and mean canopy height and width, projected canopy area, and concave and convex volumes) and a multivariate trait (cumulative height profile). Growth rates were calculated for univariate static traits, quantifying canopy growth and development. Linear regressions were performed between the traits and fiber yield to identify the best traits and measurement time for yield prediction. The results showed that fiber yield was correlated with static traits after the canopy development stage (R2 = 0.35-0.71) and growth rates in early canopy development stages (R2 = 0.29-0.52). Multi-dimensional traits (e.g., projected canopy area and volume) outperformed one-dimensional traits, and the multivariate trait (cumulative height profile) outperformed univariate traits. The proposed approach would be useful for identification of quantitative trait loci (QTLs) controlling canopy size in genetics/genomics studies or for fiber yield prediction in breeding programs and production environments.
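
    The excess-green segmentation step can be sketched in a few lines. The threshold and the normalized-chromaticity form of ExG below are illustrative assumptions, since the abstract does not state the exact variant used in the GPhenoVision pipeline.

    import numpy as np

    def excess_green_mask(rgb, threshold=0.1):
        """Vegetation mask from the excess-green index ExG = 2g - r - b,
        computed on normalized (chromatic) coordinates."""
        rgb = rgb.astype(float)
        total = rgb.sum(axis=2) + 1e-9
        r, g, b = (rgb[..., i] / total for i in range(3))
        exg = 2.0 * g - r - b
        return exg > threshold

    rng = np.random.default_rng(5)
    fake_image = rng.integers(0, 255, size=(100, 100, 3))   # stand-in RGB tile
    mask = excess_green_mask(fake_image)
    print("vegetation fraction:", mask.mean())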

  2. Top/bottom multisensor remote sensing of Arctic sea ice

    NASA Technical Reports Server (NTRS)

    Comiso, J. C.; Wadhams, P.; Krabill, W. B.; Swift, R. N.; Crawford, J. P.

    1991-01-01

    Results are presented on the Aircraft/Submarine Sea Ice Project experiment carried out in May 1987 to investigate concurrently the top and the bottom features of the Arctic sea-ice cover. Data were collected nearly simultaneously by instruments aboard two aircraft and a submarine, which included passive and active (SAR) microwave sensors, upward looking and sidescan sonars, a lidar profilometer, and an IR sensor. The results described fall into two classes of correlations: (1) quantitative correlations between profiles, such as ice draft (sonar), ice elevation (laser), SAR backscatter along the track line, and passive microwave brightness temperatures; and (2) qualitative and semiquantitative correlations between corresponding areas of imagery (i.e., passive microwave, AR, and sidescan sonar).

  3. Model-specification uncertainty in future forest pest outbreak.

    PubMed

    Boulanger, Yan; Gray, David R; Cooke, Barry J; De Grandpré, Louis

    2016-04-01

    Climate change will modify forest pest outbreak characteristics, although there are disagreements regarding the specifics of these changes. A large part of this variability may be attributed to model specifications. As a case study, we developed a consensus model predicting spruce budworm (SBW, Choristoneura fumiferana [Clem.]) outbreak duration using two different predictor data sets and six different correlative methods. The model was used to project outbreak duration and the uncertainty associated with using different data sets and correlative methods (=model-specification uncertainty) for 2011-2040, 2041-2070 and 2071-2100, according to three forcing scenarios (RCP 2.6, RCP 4.5 and RCP 8.5). The consensus model showed very high explanatory power and low bias. The model projected a more important northward shift and decrease in outbreak duration under the RCP 8.5 scenario. However, variation in single-model projections increases with time, making future projections highly uncertain. Notably, the magnitude of the shifts in northward expansion, overall outbreak duration and the patterns of outbreaks duration at the southern edge were highly variable according to the predictor data set and correlative method used. We also demonstrated that variation in forcing scenarios contributed only slightly to the uncertainty of model projections compared with the two sources of model-specification uncertainty. Our approach helped to quantify model-specification uncertainty in future forest pest outbreak characteristics. It may contribute to sounder decision-making by acknowledging the limits of the projections and help to identify areas where model-specification uncertainty is high. As such, we further stress that this uncertainty should be strongly considered when making forest management plans, notably by adopting adaptive management strategies so as to reduce future risks. © 2015 Her Majesty the Queen in Right of Canada Global Change Biology © 2015 Published by John Wiley & Sons Ltd Reproduced with the permission of the Minister of Natural Resources Canada.

  4. Electron correlation within the relativistic no-pair approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almoukhalalati, Adel; Saue, Trond, E-mail: trond.saue@irsamc.ups-tlse.fr; Knecht, Stefan

    This paper addresses the definition of correlation energy within 4-component relativistic atomic and molecular calculations. In the nonrelativistic domain the correlation energy is defined as the difference between the exact eigenvalue of the electronic Hamiltonian and the Hartree-Fock energy. In practice, what is reported is the basis set correlation energy, where the “exact” value is provided by a full Configuration Interaction (CI) calculation with some specified one-particle basis. The extension of this definition to the relativistic domain is not straightforward since the corresponding electronic Hamiltonian, the Dirac-Coulomb Hamiltonian, has no bound solutions. Present-day relativistic calculations are carried out within the no-pair approximation, where the Dirac-Coulomb Hamiltonian is embedded by projectors eliminating the troublesome negative-energy solutions. Hartree-Fock calculations are carried out with the implicit use of such projectors and only positive-energy orbitals are retained at the correlated level, meaning that the Hartree-Fock projectors are frozen at the correlated level. We argue that the projection operators should be optimized also at the correlated level and that this is possible by full Multiconfigurational Self-Consistent Field (MCSCF) calculations, that is, MCSCF calculations using a no-pair full CI expansion, but including orbital relaxation from the negative-energy orbitals. We show by variational perturbation theory that the MCSCF correlation energy is a pure MP2-like correlation expression, whereas the corresponding CI correlation energy contains an additional relaxation term. We explore numerically our theoretical analysis by carrying out variational and perturbative calculations on the two-electron rare gas atoms with specially tailored basis sets. In particular, we show that the correlation energy obtained by the suggested MCSCF procedure is smaller than the no-pair full CI correlation energy, in accordance with the underlying minmax principle and our theoretical analysis. We also show that the relativistic correlation energy, obtained from no-pair full MCSCF calculations, scales at worst as X^{-2} with respect to the cardinal number X of our correlation-consistent basis sets optimized for the two-electron atoms. This is better than the X^{-1} scaling suggested by previous studies, but worse than the X^{-3} scaling observed in the nonrelativistic domain. The well-known 1/Z-expansion in nonrelativistic atomic theory follows from coordinate scaling. We point out that coordinate scaling for consistency should be accompanied by velocity scaling. In the nonrelativistic domain this comes about automatically, whereas in the relativistic domain an explicit scaling of the speed of light is required. This in turn explains why the relativistic correlation energy to the lowest order is not independent of nuclear charge, in contrast to nonrelativistic theory.

  5. The CHESS score: a simple tool for early prediction of shunt dependency after aneurysmal subarachnoid hemorrhage.

    PubMed

    Jabbarli, R; Bohrer, A-M; Pierscianek, D; Müller, D; Wrede, K H; Dammann, P; El Hindy, N; Özkan, N; Sure, U; Müller, O

    2016-05-01

    Acute hydrocephalus is an early and common complication of aneurysmal subarachnoid hemorrhage (SAH). However, considerably fewer patients develop chronic hydrocephalus requiring shunt placement. Our aim was to develop a risk score for early identification of patients with shunt dependency after SAH. Two hundred and forty-two SAH individuals who were treated in our institution between January 2008 and December 2013 and survived the initial impact were retrospectively analyzed. Clinical parameters within 72 h after the ictus were correlated with shunt dependency. Independent predictors were summarized into a new risk score which was validated in a subsequent SAH cohort treated between January and December 2014. Seventy-five patients (31%) underwent shunt placement. Of 23 evaluated variables, only the following five showed independent associations with shunt dependency and were subsequently used to establish the Chronic Hydrocephalus Ensuing from SAH Score (CHESS, 0-8 points): Hunt and Hess grade ≥IV (1 point), location of the ruptured aneurysm in the posterior circulation (1 point), acute hydrocephalus (4 points), the presence of intraventricular hemorrhage (1 point) and early cerebral infarction on follow-up computed tomography scan (1 point). The CHESS showed strong correlation with shunt dependency (P = 0.0007) and could be successfully validated in both internal SAH cohorts tested. Patients scoring ≥6 CHESS points had significantly higher risk of shunt dependency (P < 0.0001) than other patients. The CHESS may become a valuable diagnostic tool for early estimation of shunt dependency after SAH. Further evaluation and external validation will be required in prospective studies. © 2016 EAN.
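
    The point allocation reads directly as a small scoring function. The sketch below transcribes the five items and weights listed in the abstract; it is only an illustration and not a substitute for the published scoring instructions.

    def chess_score(hunt_hess_grade, posterior_circulation_aneurysm,
                    acute_hydrocephalus, intraventricular_hemorrhage,
                    early_cerebral_infarction):
        """CHESS score (0-8): Hunt & Hess grade >= IV (1 point), ruptured
        aneurysm in the posterior circulation (1), acute hydrocephalus (4),
        intraventricular hemorrhage (1), early cerebral infarction on
        follow-up CT (1)."""
        score = 0
        score += 1 if hunt_hess_grade >= 4 else 0
        score += 1 if posterior_circulation_aneurysm else 0
        score += 4 if acute_hydrocephalus else 0
        score += 1 if intraventricular_hemorrhage else 0
        score += 1 if early_cerebral_infarction else 0
        return score

    # Example: a patient scoring >= 6 falls in the reported high-risk group
    print(chess_score(4, False, True, True, False))   # -> 6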

  6. Student goal orientation in learning inquiry skills with modifiable software advisors

    NASA Astrophysics Data System (ADS)

    Shimoda, Todd A.; White, Barbara Y.; Frederiksen, John R.

    2002-03-01

    A computer support environment (SCI-WISE) for learning and doing science inquiry projects was designed. SCI-WISE incorporates software advisors that give general advice about a skill such as hypothesizing. By giving general advice (rather than step-by-step procedures), the system is intended to help students conduct experiments that are more epistemologically authentic. Also, students using SCI-WISE can select the type of advice the advisors give and when they give advice, as well as modify the advisors' knowledge bases. The system is based partly on a theoretical framework of levels of agency and goal orientation. This framework assumes that giving students higher levels of agency facilitates higher-level goal orientations (such as mastery or knowledge building as opposed to task completion) that in turn produce higher levels of competence. A study of sixth grade science students was conducted. Students took a pretest questionnaire that measured their goal orientations for science projects and their inquiry skills. The students worked in pairs on an open-ended inquiry project that requires complex reasoning about human memory. The students used one of two versions of SCI-WISE - one that was modifiable and one that was not. After finishing the project, the students took a posttest questionnaire similar to the pretest, and evaluated the version of the system they used. The main results showed that (a) there was no correlation of goal orientation with grade point average, (b) knowledge-oriented students using the modifiable version tended to rate SCI-WISE more helpful than task-oriented students, and (c) knowledge-oriented pairs using the nonmodifiable version tended to have higher posttest inquiry skills scores than other pair types.

  7. SLAC Linac Preparations for FACET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, R.; Bentson, L.; Kharakh, D.

    The SLAC 3km linear electron accelerator has been cut at the two-thirds point to provide beams to two independent programs. The last third provides the electron beam for the Linac Coherent Light Source (LCLS), leaving the first two-thirds available for FACET, the new experimental facility for accelerator science and test beams. In this paper, we describe this separation and projects to prepare the linac for the FACET experimental program.

  8. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board with different orientations to establish the correspondences between projector images and camera images. Thereby, datasets for projector calibration are generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
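
    The "inverse camera" step can be sketched with OpenCV's standard calibration routine. Here the board-to-projector correspondences are synthesized from an assumed projector model so the example runs standalone; in the proposed method they would come from the speckle-based digital image correlation matching.

    import numpy as np
    import cv2

    K_true = np.array([[1500.0, 0, 512.0], [0, 1500.0, 384.0], [0, 0, 1.0]])
    board = np.zeros((9 * 7, 3), np.float32)
    board[:, :2] = np.mgrid[0:9, 0:7].T.reshape(-1, 2) * 20.0   # 20 mm grid

    object_points, projector_points = [], []
    rng = np.random.default_rng(6)
    for _ in range(10):                                  # ten board orientations
        rvec = rng.uniform(-0.3, 0.3, 3)
        tvec = np.array([rng.uniform(-50, 50), rng.uniform(-50, 50),
                         rng.uniform(600, 900)])
        img_pts, _ = cv2.projectPoints(board, rvec, tvec, K_true, None)
        object_points.append(board)
        projector_points.append(img_pts.astype(np.float32))

    # Calibrate the projector exactly as if it were a camera
    rms, K_est, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, projector_points, (1024, 768), None, None)
    print("reprojection RMS:", rms)
    print("estimated projector intrinsics:\n", K_est)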

  9. Automated two-point dixon screening for the evaluation of hepatic steatosis and siderosis: comparison with R2-relaxometry and chemical shift-based sequences.

    PubMed

    Henninger, B; Zoller, H; Rauch, S; Schocke, M; Kannengiesser, S; Zhong, X; Reiter, G; Jaschke, W; Kremser, C

    2015-05-01

    To evaluate the automated two-point Dixon screening sequence for the detection and estimated quantification of hepatic iron and fat compared with standard sequences as a reference. One hundred and two patients with suspected diffuse liver disease were included in this prospective study. The following MRI protocol was used: 3D-T1-weighted opposed- and in-phase gradient echo with two-point Dixon reconstruction and dual-ratio signal discrimination algorithm ("screening" sequence); fat-saturated, multi-gradient-echo sequence with 12 echoes; gradient-echo T1 FLASH opposed- and in-phase. Bland-Altman plots were generated and correlation coefficients were calculated to compare the sequences. The screening sequence diagnosed fat in 33, iron in 35 and a combination of both in 4 patients. Correlation between R2* values of the screening sequence and the standard relaxometry was excellent (r = 0.988). A slightly lower correlation (r = 0.978) was found between the fat fraction of the screening sequence and the standard sequence. Bland-Altman revealed systematically lower R2* values obtained from the screening sequence and higher fat fraction values obtained with the standard sequence with a rather high variability in agreement. The screening sequence is a promising method with fast diagnosis of the predominant liver disease. It is capable of estimating the amount of hepatic fat and iron comparable to standard methods. • MRI plays a major role in the clarification of diffuse liver disease. • The screening sequence was introduced for the assessment of diffuse liver disease. • It is a fast and automated algorithm for the evaluation of hepatic iron and fat. • It is capable of estimating the amount of hepatic fat and iron.
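
    For orientation, the basic two-point Dixon reconstruction underlying such sequences can be sketched as below (in-phase = water + fat, opposed-phase = water − fat). The screening sequence's dual-ratio discrimination and R2* handling are more involved and are not reproduced here; the arrays are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(7)
    ip = rng.uniform(50, 100, (128, 128))          # placeholder in-phase image
    op = ip * rng.uniform(0.3, 1.0, (128, 128))    # placeholder opposed-phase image

    water = 0.5 * (ip + op)                        # W = (IP + OP) / 2
    fat = 0.5 * (ip - op)                          # F = (IP - OP) / 2
    fat_fraction = fat / (water + fat + 1e-9)      # simple signal fat fraction
    print("mean fat fraction:", fat_fraction.mean())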

  10. Chemical quality of tap water in Madrid: multicase control cancer study in Spain (MCC-Spain).

    PubMed

    Fernández-Navarro, Pablo; Villanueva, Cristina M; García-Pérez, Javier; Boldo, Elena; Goñi-Irigoyen, Fernando; Ulibarrena, Enrique; Rantakokko, Panu; García-Esquinas, Esther; Pérez-Gómez, Beatriz; Pollán, Marina; Aragonés, Nuria

    2017-02-01

    Chronic consumption of water, which contains contaminants, may give rise to adverse health effects. The Madrid region, covered by the population-based multicase-control (MCC-Spain) study, includes two drinking water supply areas. The different sources of the water, coupled together with the possible differences in water management, mean that there may be differences in drinking water quality. In the context of the MCC study, our aims were to describe contaminant concentrations in tap water drawn from various sampling points distributed around the region, assess these concentrations by reference to guideline values and study possible differences between the two supply areas. Tap water samples were collected from 34 sampling points in 7 towns in the Madrid region (19-29 April 2010), and 23 contaminants (metals, nitrates, disinfection by-product and Mutagen X levels) were quantified. We undertook a descriptive analysis of the contaminant concentrations in the water and compared them between the two water supply areas (Wilcoxon test). We created maps representing the distribution of the concentrations observed at water sampling points and assessed the correlations (Spearman's coefficient) between the different parameters measured. The concentrations of the contaminants were below guideline values. There were differences between the two supply areas in concentration of nitrates (p value = 0.0051) and certain disinfection by-products. While there were positive correlations (rho >0.70) among some disinfection by-products, no correlations were found in metals or nitrates. The differences in nitrate levels could be linked to differences in farming/industrial activities in the catchment areas and in disinfection by-products might be related to the existence of different treatment systems or bromine content in source waters.

  11. Connections between Transcription Downstream of Genes and cis-SAGe Chimeric RNA.

    PubMed

    Chwalenia, Katarzyna; Qin, Fujun; Singh, Sandeep; Tangtrongstittikul, Panjapon; Li, Hui

    2017-11-22

    cis-Splicing between adjacent genes (cis-SAGe) is being recognized as one way to produce chimeric fusion RNAs. However, its detailed mechanism is not clear. A recent study revealed induction of transcription downstream of genes (DoGs) under osmotic stress. Here, we investigated the influence of osmotic stress on cis-SAGe chimeric RNAs and their connection to DoGs. We found that at least some cis-SAGe fusions and/or their corresponding DoGs were not induced at early time point(s). In fact, these DoGs and their cis-SAGe fusions are inversely correlated. This negative correlation changed to positive at a later time point. These results suggest a direct competition between the two categories of transcripts when the total pool of readthrough transcripts is limited at an early time point. At a later time point, DoGs and the corresponding cis-SAGe fusions are both induced, indicating that readthrough transcripts become more abundant overall. Finally, we observed overall enhancement of cis-SAGe chimeric RNAs in KCl-treated samples by RNA-Seq analysis.

  12. A study of sound generation in subsonic rotors, volume 1

    NASA Technical Reports Server (NTRS)

    Chalupnik, J. D.; Clark, L. T.

    1975-01-01

    A model for the prediction of wake-related sound generation by a single airfoil is presented. It is assumed that the net force fluctuation on an airfoil may be expressed in terms of the net momentum fluctuation in the near wake of the airfoil. The forcing function for sound generation depends on the spectra of the two-point velocity correlations in the turbulent region near the airfoil trailing edge. The spectra of the two-point velocity correlations were measured for the longitudinal and transverse components of turbulence in the wake of a 91.4 cm chord airfoil. A scaling procedure was developed using the turbulent boundary layer thickness. The model was then used to predict the radiated sound from a 5.1 cm chord airfoil. Agreement between the predicted and measured sound radiation spectra was good. The single airfoil results were extended to a rotor geometry, and various aerodynamic parameters were studied.

  13. Constraining compensated isocurvature perturbations using the CMB

    NASA Astrophysics Data System (ADS)

    Smith, Tristan L.; Smith, Rhiannon; Yee, Kyle; Munoz, Julian; Grin, Daniel

    2017-01-01

    Compensated isocurvature perturbations (CIPs) are variations in the cosmic baryon fraction which leave the total non-relativistic matter (and radiation) density unchanged. They are predicted by models of inflation which involve more than one scalar field, such as the curvaton scenario. At linear order, they leave the CMB two-point correlation function nearly unchanged: this is why existing constraints to CIPs are so much more permissive than constraints to typical isocurvature perturbations. Recent work articulated an efficient way to calculate the second order CIP effects on the CMB two-point correlation. We have implemented this method in order to explore constraints to the CIP amplitude using current Planck temperature and polarization data. In addition, we have computed the contribution of CIPs to the CMB lensing estimator which provides us with a novel method to use CMB data to place constraints on CIPs. We find that Planck data places a constraint to the CIP amplitude which is competitive with other methods.

  14. Observation of Noise Correlated by the Hawking Effect in a Water Tank.

    PubMed

    Euvé, L-P; Michel, F; Parentani, R; Philbin, T G; Rousseaux, G

    2016-09-16

    We measured the power spectrum and two-point correlation function for the randomly fluctuating free surface on the downstream side of a stationary flow with a maximum Froude number F_{max}≈0.85 reached above a localized obstacle. On such a flow the scattering of incident long wavelength modes is analogous to that responsible for black hole radiation (the Hawking effect). Our measurements of the noise show a clear correlation between pairs of modes of opposite energies. We also measure the scattering coefficients by applying the same analysis of correlations to waves produced by a wave maker.

  15. A new correlation coefficient for bivariate time-series data

    NASA Astrophysics Data System (ADS)

    Erdem, Orhan; Ceyhan, Elvan; Varli, Yusuf

    2014-11-01

    The correlation in time series has received considerable attention in the literature. Its use has attained an important role in the social sciences and finance. For example, pair trading in finance is concerned with the correlation between stock prices, returns, etc. In general, Pearson’s correlation coefficient is employed in these areas although it has many underlying assumptions which restrict its use. Here, we introduce a new correlation coefficient which takes into account the lag difference of data points. We investigate the properties of this new correlation coefficient. We demonstrate that it is more appropriate for showing the direction of the covariation of the two variables over time. We also compare the performance of the new correlation coefficient with Pearson’s correlation coefficient and Detrended Cross-Correlation Analysis (DCCA) via simulated examples.
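
    The abstract does not spell out the closed form of the proposed coefficient, so the sketch below only illustrates the underlying idea of contrasting plain Pearson correlation with a lag-sensitive alternative on a simulated bivariate series; the lag-scanning measure used here is a hypothetical stand-in, not the authors' definition.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated bivariate series: y follows x with a one-step delay plus noise,
    # so plain (zero-lag) Pearson correlation largely misses the relationship.
    n = 500
    x = rng.normal(size=n)
    y = np.roll(x, 1) + rng.normal(scale=0.3, size=n)
    y[0] = rng.normal()  # remove the wrap-around artifact from np.roll

    def pearson(a, b):
        """Ordinary Pearson correlation coefficient."""
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    def lag_aware_corr(a, b, max_lag=5):
        """Illustrative lag-sensitive measure (hypothetical): the Pearson
        correlation at the lag with the largest magnitude within +/- max_lag."""
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag > 0:
                r = pearson(a[lag:], b[:-lag])
            elif lag < 0:
                r = pearson(a[:lag], b[-lag:])
            else:
                r = pearson(a, b)
            if abs(r) > abs(best):
                best = r
        return best

    print("Pearson r      :", round(pearson(x, y), 3))
    print("lag-aware value:", round(lag_aware_corr(x, y), 3))
    ```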

  16. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, the convergence towards the surface wave Green's functions is obtained with the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), a geometry known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
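
    As a rough illustration of the correlational branch of such a workflow (not the authors' processing code), the sketch below cross-correlates two synthetic traces that share a delayed arrival; the lag of the correlation peak recovers the inter-receiver travel time, which is the basic ingredient behind the Green's function and dispersion estimates described above. The sampling interval and wavelet are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dt = 0.004                      # sample interval in seconds (assumed)
    n = 2000
    t = np.arange(n) * dt

    def ricker(t0, f=8.0):
        """Ricker wavelet centred at time t0 with dominant frequency f (Hz)."""
        arg = (np.pi * f * (t - t0)) ** 2
        return (1.0 - 2.0 * arg) * np.exp(-arg)

    # Common surface-wave-like arrival, reaching receiver B 0.6 s after receiver A.
    trace_a = ricker(1.0) + 0.2 * rng.normal(size=n)
    trace_b = ricker(1.6) + 0.2 * rng.normal(size=n)

    # Full cross-correlation; the lag of the maximum estimates the travel time A -> B.
    xcorr = np.correlate(trace_b - trace_b.mean(), trace_a - trace_a.mean(), mode="full")
    lags = (np.arange(len(xcorr)) - (n - 1)) * dt
    print("estimated travel time: %.3f s" % lags[np.argmax(xcorr)])   # expected ~0.6 s
    ```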

  17. Evaluation of tunnel seismic prediction (TSP) result using the Japanese highway rock mass classification system for Pahang-Selangor Raw Water Transfer Tunnel

    NASA Astrophysics Data System (ADS)

    Von, W. C.; Ismail, M. A. M.

    2017-10-01

    Knowledge of the geological profile ahead of the tunnel face is important for minimizing risk in tunnel excavation work and for cost control in preventative measures. Because of the mountainous terrain, site investigation with vertical boring is not recommended for obtaining the geological profile for the Pahang-Selangor Raw Water Transfer project. Hence, the tunnel seismic prediction (TSP) method is adopted to predict the geological profile ahead of the tunnel face. To evaluate the TSP results, IBM SPSS Statistics 22 is used to run an artificial neural network (ANN) analysis that back-calculates predicted Rock Grade Points (JH) from actual Rock Grade Points (JH) using Vp, Vs and Vp/Vs from TSP. The results show good correlation between predicted and actual Rock Grade Points (JH). In other words, TSP can provide a useful prediction of the geological profile ahead of the tunnel face while allowing TBM excavation to continue. Identifying weak zones or faults ahead of the tunnel face is crucial so that preventative measures can be carried out in advance for safer tunnel excavation work.

  18. A Unified Scaling Law in Spiral Galaxies.

    PubMed

    Koda; Sofue; Wada

    2000-03-01

    We investigate the origin of a unified scaling relation in spiral galaxies. Observed spiral galaxies are spread on a plane in the three-dimensional logarithmic space of luminosity L, radius R, and rotation velocity V. The plane is expressed as L ∝ (VR)^α in the I passband, where α is a constant. On the plane, observed galaxies are distributed in an elongated region which looks like the shape of a surfboard. The well-known scaling relations L-V (Tully-Fisher [TF] relation), V-R (also the TF relation), and R-L (Freeman's law) can be understood as oblique projections of the surfboard-like plane into two-dimensional spaces. This unified interpretation of the known scaling relations should be a clue to understand the physical origin of all the relations consistently. Furthermore, this interpretation can also explain why previous studies could not find any correlation between TF residuals and radius. In order to clarify the origin of this plane, we simulate formation and evolution of spiral galaxies with the N-body/smoothed particle hydrodynamics method, including cooling, star formation, and stellar feedback. Initial conditions are set to 14 isolated spheres with two free parameters, such as mass and angular momentum. The cold dark matter (h = 0.5, Ω0 = 1) cosmology is considered as a test case. The simulations provide the following two conclusions: (1) The slope of the plane is well reproduced but the zero point is not. This zero-point discrepancy could be solved in a low-density (Ω0 < 1) and high-expansion (h > 0.5) cosmology. (2) The surfboard-shaped plane can be explained by the control of galactic mass and angular momentum.

  19. Employing people with psychiatric disabilities to engage homeless individuals through supported socialization: the Buddies Project.

    PubMed

    Fisk, Deborah; Frey, Jennifer

    2002-01-01

    This article describes the Buddies Project, a small time-limited grant that employed two part-time formerly homeless persons on a community-based mental health outreach team to participate in social activities with "difficult to engage" homeless individuals. We offer clinical examples that point to the success of this small supported socialization project. We suggest that employing people with psychiatric disabilities to participate in social activities with homeless persons with psychiatric disabilities can be an important tool to decrease homeless persons' social isolation and engage them into mental health treatment and independent housing.

  20. Single point dilution method for the quantitative analysis of antibodies to the gag24 protein of HIV-1.

    PubMed

    Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O

    1997-10-13

    In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model OD = b0 + b1·log(titer) and a nonlinear model log(titer) = α·OD^β, similar to the Dopotka–Giesendorf model. Both proposed models adequately fit the dependence between the optical density values at a single point dilution and the titers obtained by the end point dilution method (EPDM). Nevertheless, the nonlinear model fits the experimental data better, according to residual analysis. Classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between calculated titers and titers obtained by EPDM was achieved with the nonlinear model; the correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, which reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents, is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
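
    Since the abstract gives the functional form of the nonlinear model explicitly, a minimal fitting sketch is straightforward; the calibration data below are synthetic, and the correction factor mentioned in the abstract is not included.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic calibration data (illustrative only): end-point log-titers and the
    # optical density (OD) measured at a single fixed dilution.
    log_titer = np.array([2.0, 2.3, 2.6, 3.0, 3.3, 3.6, 4.0, 4.3])
    rng = np.random.default_rng(2)
    od = (log_titer / 4.5) ** 1.8 + rng.normal(scale=0.02, size=log_titer.size)

    # Nonlinear single-point-dilution model from the abstract: log(titer) = alpha * OD**beta
    def spdm_model(od, alpha, beta):
        return alpha * od ** beta

    params, _ = curve_fit(spdm_model, od, log_titer, p0=(4.0, 1.0))
    alpha, beta = params
    pred = spdm_model(od, alpha, beta)
    r = np.corrcoef(pred, log_titer)[0, 1]
    print(f"alpha = {alpha:.2f}, beta = {beta:.2f}, r = {r:.2f}")
    ```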

  1. 2D Affine and Projective Shape Analysis.

    PubMed

    Bryner, Darshan; Klassen, Eric; Huiling Le; Srivastava, Anuj

    2014-05-01

    Current techniques for shape analysis tend to seek invariance to similarity transformations (rotation, translation, and scale), but certain imaging situations require invariance to larger groups, such as affine or projective groups. Here we present a general Riemannian framework for shape analysis of planar objects where metrics and related quantities are invariant to affine and projective groups. Highlighting two possibilities for representing object boundaries, ordered points (or landmarks) and parameterized curves, we study different combinations of these representations (points and curves) and transformations (affine and projective). Specifically, we provide solutions to three out of four situations and develop algorithms for computing geodesics and intrinsic sample statistics, leading up to Gaussian-type statistical models, and classifying test shapes using such models learned from training data. In the case of parameterized curves, we also achieve the desired goal of invariance to re-parameterizations. The geodesics are constructed by particularizing the path-straightening algorithm to geometries of current manifolds and are used, in turn, to compute shape statistics and Gaussian-type shape models. We demonstrate these ideas using a number of examples from shape and activity recognition.

  2. Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order

    DOE PAGES

    Favalli, Andrea; Croft, Stephen; Santi, Peter

    2015-06-15

    Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time-correlation-analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates; we call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations – the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher order multiplets using the probability generating functions approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
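
    The mechanical core of the derivation, repeated differentiation of a composed probability generating function, can be illustrated symbolically with SymPy; in the sketch below f and g are generic placeholders rather than the specific point-model quantities, so the output only shows the chain-rule (Faà di Bruno) structure of the higher-order terms.

    ```python
    import sympy as sp

    z = sp.symbols('z')
    # Generic composed probability generating function h(z) = f(g(z)); in the
    # point-model setting f and g would encode fission multiplicity and detection,
    # but here they are left abstract.
    f = sp.Function('f')
    g = sp.Function('g')
    h = f(g(z))

    # Successive derivatives of the composition; evaluated at z = 1 these would
    # yield the reduced factorial moments (singlets, doublets, triplets, ...).
    for order in range(1, 5):
        expr = sp.diff(h, z, order)
        print(f"order {order}:", expr)
    ```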

  3. On the Floating Point Performance of the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1997-01-01

    The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed performance is less than what memory bandwidth limitations would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.

  4. Developing a bivariate spatial association measure: An integration of Pearson's r and Moran's I

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Il

    This research is concerned with developing a bivariate spatial association measure or spatial correlation coefficient, which is intended to capture spatial association among observations in terms of their point-to-point relationships across two spatial patterns. The need for parameterization of the bivariate spatial dependence is precipitated by the realization that aspatial bivariate association measures, such as Pearson's correlation coefficient, do not recognize spatial distributional aspects of data sets. This study devises an L statistic by integrating Pearson's r as an aspatial bivariate association measure and Moran's I as a univariate spatial association measure. The concept of a spatial smoothing scalar (SSS) plays a pivotal role in this task.
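
    The exact combination of the two ingredients into the L statistic (via the spatial smoothing scalar) follows the paper; the sketch below only computes the two building blocks, Pearson's r and Moran's I, on a toy lattice, with a row-standardized rook-contiguity weight matrix as an assumed, common choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy 10 x 10 lattice with two spatially smooth, related variables.
    n = 10
    base = rng.normal(size=(n, n))
    smooth = (base + np.roll(base, 1, 0) + np.roll(base, 1, 1)) / 3.0
    x = smooth.ravel()
    y = (smooth + 0.3 * rng.normal(size=(n, n))).ravel()

    # Rook-contiguity spatial weights, row-standardized (an assumed choice).
    W = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    W[k, ii * n + jj] = 1.0
    W /= W.sum(axis=1, keepdims=True)

    def morans_i(v, W):
        """Univariate spatial autocorrelation (Moran's I)."""
        z = v - v.mean()
        return len(v) / W.sum() * (z @ W @ z) / (z @ z)

    print(f"Pearson r     : {np.corrcoef(x, y)[0, 1]:.3f}")
    print(f"Moran's I (x) : {morans_i(x, W):.3f}")
    print(f"Moran's I (y) : {morans_i(y, W):.3f}")
    ```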

  5. Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition

    DTIC Science & Technology

    1992-09-01

    correspondence between points in Maybank 1990). The question, therefore, is why look for both views more of a problem, and hence, may make the...plane is fixed with respect to the 1987, Faugeras, Luong and Maybank 1992). The prob- camera coordinate frame. A rigid camera motion, there- lem of...the second reference Rieger-Lawton 1985, Faugeras and Maybank 1990, Hil- plane (assuming the four object points Pi, j = 1, ...,4, dreth 1991, Faugeras

  6. Evidence for biasing in the CfA survey

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1988-01-01

    Intrinsically bright galaxies appear systematically more correlated than faint galaxies in the Center for Astrophysics redshift survey. The amplification of the two-point correlation function behaves exponentially with luminosity, being essentially flat up to the knee of the luminosity function, then increasing markedly. The amplification reaches a factor of 3.5e^{±0.4} in the very brightest galaxies. The effect is dominated by spirals rather than ellipticals, so that the correlation function of bright spirals becomes comparable to that of normal ellipticals. Similar results are obtained whether the correlation function is measured in two or three dimensions. The effect persists to separations of a correlation length or more, and is not confined to the cores of the Virgo, Coma, and Abell 1367 clusters, suggesting that the effect is caused by biasing, that is, galaxies kindle preferentially in more clustered regions, rather than by gravitational relaxation.

  7. Correlating Intravital Multi-Photon Microscopy to 3D Electron Microscopy of Invading Tumor Cells Using Anatomical Reference Points

    PubMed Central

    Karreman, Matthia A.; Mercier, Luc; Schieber, Nicole L.; Shibue, Tsukasa; Schwab, Yannick; Goetz, Jacky G.

    2014-01-01

    Correlative microscopy combines the advantages of both light and electron microscopy to enable imaging of rare and transient events at high resolution. Performing correlative microscopy in complex and bulky samples such as an entire living organism is a time-consuming and error-prone task. Here, we investigate correlative methods that rely on the use of artificial and endogenous structural features of the sample as reference points for correlating intravital fluorescence microscopy and electron microscopy. To investigate tumor cell behavior in vivo with ultrastructural accuracy, a reliable approach is needed to retrieve single tumor cells imaged deep within the tissue. For this purpose, fluorescently labeled tumor cells were subcutaneously injected into a mouse ear and imaged using two-photon-excitation microscopy. Using near-infrared branding, the position of the imaged area within the sample was labeled at the skin level, allowing for its precise recollection. Following sample preparation for electron microscopy, concerted usage of the artificial branding and anatomical landmarks enables targeting and approaching the cells of interest while serial sectioning through the specimen. We describe here three procedures showing how three-dimensional (3D) mapping of structural features in the tissue can be exploited to accurately correlate between the two imaging modalities, without having to rely on the use of artificially introduced markers of the region of interest. The methods employed here facilitate the link between intravital and nanoscale imaging of invasive tumor cells, enabling correlating function to structure in the study of tumor invasion and metastasis. PMID:25479106

  8. Casimir energy between two parallel plates and projective representation of the Poincaré group

    NASA Astrophysics Data System (ADS)

    Akita, Takamaru; Matsunaga, Mamoru

    2016-06-01

    The Casimir effect is a physical manifestation of zero point energy of quantum vacuum. In a relativistic quantum field theory, Poincaré symmetry of the theory seems, at first sight, to imply that nonzero vacuum energy is inconsistent with translational invariance of the vacuum. In the setting of two uniform boundary plates at rest, quantum fields outside the plates have (1 +2 )-dimensional Poincaré symmetry. Taking a massless scalar field as an example, we have examined the consistency between the Poincaré symmetry and the existence of the vacuum energy. We note that, in quantum theory, symmetries are represented projectively in general and show that the Casimir energy is connected to central charges appearing in the algebra of generators in the projective representations.

  9. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional information and then matches and identifies through one-dimensional correlation; moreover, because normalization is applied, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves the matching speed while maintaining the matching accuracy.
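
    A minimal sketch of the general idea (not necessarily the paper's exact algorithm): the template and the scene are reduced to row and column sums, and the target is located by two one-dimensional normalized correlations. The synthetic scene and the gradient target are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic grayscale scene with a structured target patch at a known position.
    scene = rng.random((200, 300))
    ramp = np.linspace(0.0, 3.0, 30)
    scene[60:90, 140:170] += ramp[None, :] + ramp[:, None]   # target with x/y gradients
    template = scene[60:90, 140:170].copy()

    def normalized(v):
        v = v - v.mean()
        return v / (np.linalg.norm(v) + 1e-12)

    def best_offset(signal, tmpl):
        """1D normalized cross-correlation by sliding tmpl along signal."""
        m = len(tmpl)
        scores = [float(normalized(signal[o:o + m]) @ tmpl)
                  for o in range(len(signal) - m + 1)]
        return int(np.argmax(scores))

    # Reduce the 2D matching problem to two 1D correlations of projections.
    row_pos = best_offset(scene.sum(axis=1), normalized(template.sum(axis=1)))
    col_pos = best_offset(scene.sum(axis=0), normalized(template.sum(axis=0)))
    print("estimated (row, col) of target:", (row_pos, col_pos))  # expected near (60, 140)
    ```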

  10. KiDS-450: cosmological constraints from weak-lensing peak statistics - II: Inference from shear peaks using N-body simulations

    NASA Astrophysics Data System (ADS)

    Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko

    2018-02-01

    We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg2 of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S_8 = σ_8 √(Ω_m/0.3), which probes the (Ω_m, σ_8) plane perpendicularly to its main degeneracy. We estimate S_8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ˜25 per cent tighter than the constraints from the high significance peaks alone (3 ≤ S/N ≤ 4) which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N < 0 peaks does not add further information. Our results are in good agreement with the tomographic shear two-point correlation function measurement in KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ˜20 per cent improvement in the uncertainty on S_8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.

  11. Entanglement properties of boundary state and thermalization

    NASA Astrophysics Data System (ADS)

    Guo, Wu-zhong

    2018-06-01

    We discuss the regularized boundary state e^{-τ_0 H}|B⟩_a in two respects, in both 2D CFT and higher-dimensional free field theory. One is its entanglement and correlation properties, which exhibit exponential decay in 2D CFT; the parameter 1/τ_0 works as a mass scale. The other concerns its time evolution, i.e., e^{-itH} e^{-τ_0 H}|B⟩_a. We investigate the Kubo-Martin-Schwinger (KMS) condition on correlation functions of local operators to detect thermal properties. Interestingly, we find that the correlation functions in the initial state e^{-τ_0 H}|B⟩_a also partially satisfy the KMS condition. In the limit t → ∞, the correlators exactly satisfy the KMS condition. We analyse a general quantum quench by a pure state and obtain some constraints on the possible form of the 2-point correlation function in the initial state, assuming it satisfies the KMS condition in the final state. As a byproduct we find that in the large τ_0 limit the thermal property of the 2-point function in e^{-τ_0 H}|B⟩_a also appears.

  12. Water-related occupations and diet in two Roman coastal communities (Italy, first to third century AD): correlation between stable carbon and nitrogen isotope values and auricular exostosis prevalence.

    PubMed

    Crowe, Fiona; Sperduti, Alessandra; O'Connell, Tamsin C; Craig, Oliver E; Kirsanow, Karola; Germoni, Paola; Macchiarelli, Roberto; Garnsey, Peter; Bondioli, Luca

    2010-07-01

    The reconstruction of dietary patterns in the two Roman imperial age coastal communities of Portus and Velia (I-III AD) by means of stable isotope analysis of bone remains has exposed a certain degree of heterogeneity between and within the two samples. Results do not correlate with any discernible mortuary practices at either site, which might have pointed to differential social status. The present study tests the hypothesis of a possible connection between dietary habits and occupational activities in the two communities. Among skeletal markers of occupation, external auricular exostosis (EAE) has proved to be very informative. Clinical and retrospective epidemiological surveys have revealed a strong positive correlation between EAE development and habitual exposure to cold water. In this study, we show that there is a high rate of occurrence of EAE among adult males in both skeletal samples (21.1% in Portus and 35.3% in Velia). Further, there is a statistically significant higher prevalence of EAE among those individuals at Velia with very high nitrogen isotopic values. This points to fishing (coastal, low-water fishing) as the sea-related occupation most responsible for the onset of the ear pathology. For Portus, where the consumption of foods from sea and river seems to be more widespread through the population, and where the scenario of seaport and fluvial activities was much more complex than in Velia, a close correlation between EAE and fish consumption by fishermen is less easy to establish. (c) 2009 Wiley-Liss, Inc.

  13. An evaluation of potential sampling locations in a reservoir with emphasis on conserved spatial correlation structure.

    PubMed

    Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül

    2015-01-01

    In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that a smaller number of sampling points resulted in a loss of information about the spatial correlation structure of DO. The minimum number of representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against the networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
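
    The spatial correlation structure referred to above is summarized by the variogram; a minimal sketch of an empirical semivariogram computation on synthetic DO-like data is given below. The field, distances and bin edges are illustrative assumptions, not values from the Porsuk Dam Reservoir study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Scattered sampling locations and a spatially trending "DO" field (synthetic).
    pts = rng.random((65, 2)) * 1000.0                               # metres
    do = 8.0 + 0.003 * pts[:, 0] + rng.normal(scale=0.2, size=65)    # mg/L, illustrative

    # Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] per distance bin.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    sq = 0.5 * (do[:, None] - do[None, :]) ** 2
    iu = np.triu_indices(65, k=1)            # unique point pairs only
    bins = np.linspace(0.0, 800.0, 9)
    which = np.digitize(d[iu], bins)
    for b in range(1, len(bins)):
        mask = which == b
        if mask.any():
            print(f"{bins[b-1]:5.0f}-{bins[b]:5.0f} m: gamma = {sq[iu][mask].mean():.3f}")
    ```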

  14. Sulfur in Cometary Dust

    NASA Technical Reports Server (NTRS)

    Fomenkova, M. N.

    1997-01-01

    The computer-intensive project consisted of the analysis and synthesis of existing data on composition of comet Halley dust particles. The main objective was to obtain a complete inventory of sulfur containing compounds in the comet Halley dust by building upon the existing classification of organic and inorganic compounds and applying a variety of statistical techniques for cluster and cross-correlational analyses. A student hired for this project wrote and tested the software to perform cluster analysis. The following tasks were carried out: (1) selecting the data from existing database for the proposed project; (2) finding access to a standard library of statistical routines for cluster analysis; (3) reformatting the data as necessary for input into the library routines; (4) performing cluster analysis and constructing hierarchical cluster trees using three methods to define the proximity of clusters; (5) presenting the output results in different formats to facilitate the interpretation of the obtained cluster trees; (6) selecting groups of data points common for all three trees as stable clusters. We have also considered the chemistry of sulfur in inorganic compounds.

  15. The relation between the quantum discord and quantum teleportation: The physical interpretation of the transition point between different quantum discord decay regimes

    NASA Astrophysics Data System (ADS)

    Roszak, K.; Cywiński, Ł.

    2015-10-01

    We study quantum teleportation via Bell-diagonal mixed states of two qubits in the context of the intrinsic properties of the quantum discord. We show that when the quantum-correlated state of the two qubits is used for quantum teleportation, the character of the teleportation efficiency changes substantially depending on the Bell-diagonal-state parameters, which can be seen when the worst-case-scenario or best-case-scenario fidelity is studied. Depending on the parameter range, one of two types of single-qubit states is hardest/easiest to teleport. The transition between these two parameter ranges coincides exactly with the transition between the range of classical correlation decay and quantum correlation decay characteristic for the evolution of the quantum discord. The correspondence provides a physical interpretation for the prominent feature of the decay of the quantum discord.

  16. Modeling Semantic Emotion Space Using a 3D Hypercube-Projection: An Innovative Analytical Approach for the Psychology of Emotions

    PubMed Central

    Trnka, Radek; Lačev, Alek; Balcar, Karel; Kuška, Martin; Tavel, Peter

    2016-01-01

    The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an innovative analytical approach working in a way other than the conventional, two-dimensional paradigm. The main goal was to map and project semantic emotion space in terms of mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on this data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrasting enabled us to detect several sources of bias when working with the traditional, two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model. PMID:27148130

  17. Measurement of a solid-state triple point at the metal-insulator transition in VO2.

    PubMed

    Park, Jae Hyung; Coy, Jim M; Kasirga, T Serkan; Huang, Chunming; Fei, Zaiyao; Hunter, Scott; Cobden, David H

    2013-08-22

    First-order phase transitions in solids are notoriously challenging to study. The combination of change in unit cell shape, long range of elastic distortion and flow of latent heat leads to large energy barriers resulting in domain structure, hysteresis and cracking. The situation is worse near a triple point, where more than two phases are involved. The well-known metal-insulator transition in vanadium dioxide, a popular candidate for ultrafast optical and electrical switching applications, is a case in point. Even though VO2 is one of the simplest strongly correlated materials, experimental difficulties posed by the first-order nature of the metal-insulator transition as well as the involvement of at least two competing insulating phases have led to persistent controversy about its nature. Here we show that studying single-crystal VO2 nanobeams in a purpose-built nanomechanical strain apparatus allows investigation of this prototypical phase transition with unprecedented control and precision. Our results include the striking finding that the triple point of the metallic phase and two insulating phases is at the transition temperature, Ttr = Tc, which we determine to be 65.0 ± 0.1 °C. The findings have profound implications for the mechanism of the metal-insulator transition in VO2, but they also demonstrate the importance of this approach for mastering phase transitions in many other strongly correlated materials, such as manganites and iron-based superconductors.

  18. Correlation of VHI-10 to voice laboratory measurements across five common voice disorders.

    PubMed

    Gillespie, Amanda I; Gooding, William; Rosen, Clark; Gartner-Schmidt, Jackie

    2014-07-01

    To correlate change in Voice Handicap Index (VHI)-10 scores with corresponding voice laboratory measures across five voice disorders. Retrospective study. One hundred fifty patients aged >18 years with primary diagnosis of vocal fold lesions, primary muscle tension dysphonia-1, atrophy, unilateral vocal fold paralysis (UVFP), and scar. For each group, participants with the largest change in VHI-10 between two periods (TA and TB) were selected. The dates of the VHI-10 values were linked to corresponding acoustic/aerodynamic and audio-perceptual measures. Change in voice laboratory values were analyzed for correlation with each other and with VHI-10. VHI-10 scores were greater for patients with UVFP than other disorders. The only disorder-specific correlation between voice laboratory measure and VHI-10 was average phonatory airflow in speech for patients with UVFP. Average airflow in repeated phonemes was strongly correlated with average airflow in speech (r=0.75). Acoustic measures did not significantly change between time points. The lack of correlations between the VHI-10 change scores and voice laboratory measures may be due to differing constructs of each measure; namely, handicap versus physiological function. Presuming corroboration between these measures may be faulty. Average airflow in speech may be the most ecologically valid measure for patients with UVFP. Although aerodynamic measures changed between the time points, acoustic measures did not. Correlations to VHI-10 and change between time points may be found with other acoustic measures. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  19. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: They can be used as initial shapes for erosion models. They can be used as benchmark shapes for erosion model outputs. They can be used to derive metrics, such as random roughness... One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. The visual inspection of the 3D models showed that all models have different areas, where holes of different sizes occur. But it is obviously a subjective task to determine the model's quality by visual inspection. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected on a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, many points will be projected on the same grid cell and thus the point density depends more on the shape of the surface than on the quality of the model. Another approach uses the points resulting from Poisson Surface Reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the distance to the closest original point cloud member has been calculated. For the resulting set of distances, histograms have been produced that show the distribution of point distances. As the Poisson points also make up a connected mesh, the size and distribution of single holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number. Afterwards, the area of the mesh formed by each set of Poisson hole points can be calculated. The result is a set of distinctive holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture and hence on the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances. The histogram of the medium state shows more large distances and the dry model shows the largest distances. Models resulting from indirect lighting are better than the models resulting from direct light for all moisture states.
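
    A minimal sketch of the distance-histogram idea, with synthetic point sets standing in for the SfM cloud and the Poisson-reconstructed points; the hole geometry, grid density and distance threshold are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(5)

    # Original "SfM" cloud: points on a unit square surface with a rectangular hole.
    xy = rng.random((20000, 2))
    hole = (xy[:, 0] > 0.4) & (xy[:, 0] < 0.6) & (xy[:, 1] > 0.4) & (xy[:, 1] < 0.6)
    original = np.column_stack([xy[~hole], np.zeros((~hole).sum())])

    # Stand-in for the Poisson-reconstructed point set: a regular grid that also
    # covers (i.e. interpolates across) the hole.
    g = np.linspace(0.0, 1.0, 120)
    gx, gy = np.meshgrid(g, g)
    poisson_like = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

    # Distance from every reconstructed point to its nearest original point.
    tree = cKDTree(original)
    dist, _ = tree.query(poisson_like)

    # Points far from any original point flag the hole; the histogram of distances
    # summarizes the "hole-ness" of the model.
    counts, edges = np.histogram(dist, bins=[0.0, 0.01, 0.02, 0.05, 0.1, 1.0])
    for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
        print(f"{lo:.2f}-{hi:.2f}: {c}")
    print("points flagged as inside holes:", int((dist > 0.05).sum()))
    ```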

  20. Two-phase pressure drop in a helical coil flow boiling system

    NASA Astrophysics Data System (ADS)

    Hardik, B. K.; Prabhu, S. V.

    2018-05-01

    The objective of the present work is to study the two-phase pressure drop in helical coils. The literature on two-phase pressure drop in helical coils points to the complexity of flow boiling inside a helical coil caused by secondary flow. Most correlations reported in the literature for the two-phase pressure drop in a helical coil are limited to a specific operating range, and no general correlation applicable to all fluids is available. In the present study, an experimental databank containing a total of 832 data points was collected, combining data from the present study and from the literature. The data include diabatic pressure drops for two fluids, water and R123, and cover a mass flux of 120-2058 kg/m2 s, a heat flux of 18-2831 kW/m2, an exit quality of 0.03-1, a density ratio of 32-1404 and a coil to tube diameter ratio of 14-58. The databank is compared with eighteen empirical correlations, including well-referred straight-tube correlations and the available helical coil correlations. The straight-tube correlations do not work well for the present data set, whereas the helical coil correlations work reasonably well for the present databank. A correlation is suggested to predict the two-phase pressure drop in helical coils. The present study suggests that the influence of a helical coil is completely included in the single-phase pressure drop correlation for helical coils.

  1. Switching Dynamics Between Two Movement Patterns Varies According to Time Interval

    NASA Astrophysics Data System (ADS)

    Hirakawa, Takehito; Suzuki, Hiroo; Okumura, Motoki; Gohara, Kazutoshi; Yamamoto, Yuji

    This study investigated the regularity that characterizes the behavior of dissipative dynamical systems excited by external temporal inputs for pointing movements. Right-handed healthy male participants were asked to continuously point their right index finger at two light-emitting diodes (LEDs) located in the oblique left and right directions in front of them. These movements were performed under two conditions: one in which the direction was repeated and one in which the directions were switched on a stochastic basis. These conditions consisted of 12 tempos (30, 36, 42, 48, 51, 54, 57, 60, 63, 66, 69, and 72 beats per minute). Data from the conditions under which the input pattern was repeated revealed two different trajectories in hyper-cylindrical state space ℳ, whereas the conditions under which the inputs were switched induced transitions between the two trajectories, which were considered to be excited attractors. The transitions between the two excited attractors were characterized by a self-similar structure. Moreover, the correlation dimensions increased as the tempos increased. These results suggest a relationship of D ∝ 1/T (T is the switching-time length; i.e. the condition) between temporal input and pointing behavior and that continuous pointing movements are regular rather than random noise.

  2. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asay-Davis, Xylar Storm

    The project performed under this award, referred to from here on as CLARION (CoupLed simulations of Antarctic Ice-sheet/Ocean iNteractions), included important advances in two models of ice sheet and ocean interactions. Despite its short duration (one year), the project made significant progress on its three major foci. First, together with collaborator Daniel Martin at Lawrence Berkeley National Laboratory (LBNL), I developed the POPSICLES coupled ice sheet-ocean model to the point where it could perform a number of pan-Antarctic simulations under various forcing conditions. The results were presented at a number of major conferences and workshops worldwide, and are currently being incorporated into two manuscripts in preparation.

  4. Two-Color Pump-Probe Measurement of Photonic Quantum Correlations Mediated by a Single Phonon

    NASA Astrophysics Data System (ADS)

    Anderson, Mitchell D.; Tarrago Velez, Santiago; Seibold, Kilian; Flayac, Hugo; Savona, Vincenzo; Sangouard, Nicolas; Galland, Christophe

    2018-06-01

    We propose and demonstrate a versatile technique to measure the lifetime of the one-phonon Fock state using two-color pump-probe Raman scattering and spectrally resolved, time-correlated photon counting. Following pulsed laser excitation, the n =1 phonon Fock state is probabilistically prepared by projective measurement of a single Stokes photon. The detection of an anti-Stokes photon generated by a second, time-delayed laser pulse probes the phonon population with subpicosecond time resolution. We observe strongly nonclassical Stokes-anti-Stokes correlations, whose decay maps the single phonon dynamics. Our scheme can be applied to any Raman-active vibrational mode. It can be modified to measure the lifetime of n ≥1 Fock states or the phonon quantum coherences through the preparation and detection of two-mode entangled vibrational states.

  5. Two-Color Pump-Probe Measurement of Photonic Quantum Correlations Mediated by a Single Phonon.

    PubMed

    Anderson, Mitchell D; Tarrago Velez, Santiago; Seibold, Kilian; Flayac, Hugo; Savona, Vincenzo; Sangouard, Nicolas; Galland, Christophe

    2018-06-08

    We propose and demonstrate a versatile technique to measure the lifetime of the one-phonon Fock state using two-color pump-probe Raman scattering and spectrally resolved, time-correlated photon counting. Following pulsed laser excitation, the n=1 phonon Fock state is probabilistically prepared by projective measurement of a single Stokes photon. The detection of an anti-Stokes photon generated by a second, time-delayed laser pulse probes the phonon population with subpicosecond time resolution. We observe strongly nonclassical Stokes-anti-Stokes correlations, whose decay maps the single phonon dynamics. Our scheme can be applied to any Raman-active vibrational mode. It can be modified to measure the lifetime of n≥1 Fock states or the phonon quantum coherences through the preparation and detection of two-mode entangled vibrational states.

  6. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline, predicted earlier from the design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Correlation between range frequency noise and range-rate residuals is also seen.

  7. A quantitative evaluation of two methods for preserving hair samples

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2003-01-01

    Hair samples are an increasingly important DNA source for wildlife studies, yet optimal storage methods and DNA degradation rates have not been rigorously evaluated. We tested amplification success rates over a one-year storage period for DNA extracted from brown bear (Ursus arctos) hair samples preserved using silica desiccation and -20C freezing. For three nuclear DNA microsatellites, success rates decreased significantly after a six-month time point, regardless of storage method. For a 1000 bp mitochondrial fragment, a similar decrease occurred after a two-week time point. Minimizing delays between collection and DNA extraction will maximize success rates for hair-based noninvasive genetic sampling projects.

  8. Craton Heterogeneity in the South American Lithosphere

    NASA Astrophysics Data System (ADS)

    Lloyd, S.; Van der Lee, S.; Assumpcao, M.; Feng, M.; Franca, G. S.

    2012-04-01

    We investigate the structure of the lithosphere beneath South America using receiver functions, surface wave dispersion analysis, and seismic tomography. The data used include recordings from 20 temporary broadband seismic stations deployed across eastern Brazil (BLSP02) and from the Chile Ridge Subduction Project seismic array in southern Chile (CRSP). By jointly inverting Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh wave forms we obtain a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a correlation between crustal thickness and geologic age can be derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. We also invert for S velocity structure and estimate the depth of the lithosphere-asthenosphere boundary (LAB) in Precambrian South America. The new model reveals a relatively thin lithosphere throughout most of Precambrian South America (< 140 km). Comparing LAB depth with lithospheric age shows they are overall positively correlated, whereby the thickest lithosphere occurs in the relatively small São Francisco craton (200 km). However, within the larger Amazonian craton the younger lithosphere is thicker, indicating that locally even larger cratons are not protected from erosion or reworking of the lithosphere.

  9. Comparing observer models and feature selection methods for a task-based statistical assessment of digital breast tomosynthesis in reconstruction space

    NASA Astrophysics Data System (ADS)

    Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.

    2014-03-01

    A task-based assessment of image quality [1] for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and data statistics, we previously investigated the performance of two channelized-Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels along with that of the Hotelling observer in binary detection tasks involving DBT projections [2, 3]. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers including the aforementioned compare for performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.

  10. Do Between-Culture Differences Really Mean that People Are Different? A Look at Some Measures of Culture Effect Size.

    ERIC Educational Resources Information Center

    Matsumoto, David; Grissom, Robert J.; Dinnel, Dale L.

    2001-01-01

    Recommends four measures of cultural effect size appropriate for cross-cultural research (standardized difference between two sample means, probabilistic superiority effect size measure, Cohen's U1, and point biserial correlation), demonstrating their efficacy on two data sets from previously published studies and arguing for their use in future…

  11. Linear and quadratic static response functions and structure functions in Yukawa liquids.

    PubMed

    Magyar, Péter; Donkó, Zoltán; Kalman, Gabor J; Golden, Kenneth I

    2014-08-01

    We compute linear and quadratic static density response functions of three-dimensional Yukawa liquids by applying an external perturbation potential in molecular dynamics simulations. The response functions are also obtained from the equilibrium fluctuations (static structure factors) in the system via the fluctuation-dissipation theorems. The good agreement of the quadratic response functions, obtained in the two different ways, confirms the quadratic fluctuation-dissipation theorem. We also find that the three-point structure function may be factorizable into two-point structure functions, leading to a cluster representation of the equilibrium triplet correlation function.

  12. Measurement of Correlation Between Flow Density, Velocity, and Density×Velocity² with Far Field Noise in High Speed Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.

    2002-01-01

    To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far-field locations, were correlated with each of the radial velocity v, density ρ, and ρv² fluctuations measured at various points in the jet plumes. The experiments follow the cause-and-effect method of sound source identification, in which the measured correlations are related to the first and second source terms of Lighthill's equation. Three fully expanded, unheated plumes of Mach number 0.95, 1.4 and 1.8 were studied for this purpose. The velocity and density fluctuations were measured simultaneously using a recently developed, non-intrusive, point measurement technique based on molecular Rayleigh scattering. It was observed that along the jet centerline the density fluctuation spectra S_ρ have different shapes than the radial velocity spectra S_v, while data obtained from the peripheral shear layer show similarity between the two spectra. Density fluctuations in the jet showed significantly higher correlation than either ρv² or v fluctuations. It is found that a single-point correlation from the peak sound emitting region at the end of the potential core can account for nearly 10% of all noise at 30° to the jet axis. The correlation representing the effectiveness of a longitudinal quadrupole in generating noise at 90° to the jet axis is found to be zero within experimental uncertainty. In contrast, ρv² fluctuations were better correlated with the sound pressure fluctuations at the 30° location. The strongest source of sound is found to lie on the centerline and beyond the end of the potential core.

  13. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Hyekyun

    Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior projections over more than 60° appear to be necessary for reliable estimations. The mean 3D RMSE during beam delivery after the simple linear model had been established with prior 90° projection data was 0.42 mm for VMAT and 0.45 mm for IMRT. Conclusions: The proposed method does not require any internal/external correlation or statistical modeling to estimate the target trajectory and can be used for both retrospective image-guided radiotherapy with CBCT projection images and real-time target position monitoring for respiratory gating or tracking.
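
    A much-simplified sketch of the simple linear interdimensional model: instead of fitting against the projected 2D error on CBCT projections as the paper does, AP position is regressed directly on SI position over a training segment of a synthetic breathing trace and then used to estimate AP during a later segment. All trajectory parameters are assumed for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic respiratory target trajectory (mm): SI dominant, AP correlated with SI.
    t = np.arange(0.0, 120.0, 0.1)                 # 120 s sampled at 10 Hz
    si = 8.0 * np.sin(2 * np.pi * t / 4.0)         # superior-inferior motion
    ap = 0.35 * si + 1.0 + 0.3 * rng.normal(size=t.size)   # anterior-posterior motion

    # "Prior projection" segment (training) vs. "beam delivery" segment (test).
    train = t < 30.0
    A = np.column_stack([si[train], np.ones(train.sum())])
    coef, *_ = np.linalg.lstsq(A, ap[train], rcond=None)   # simple linear model AP = a*SI + b

    ap_est = coef[0] * si[~train] + coef[1]
    rmse = np.sqrt(np.mean((ap_est - ap[~train]) ** 2))
    print(f"fitted a = {coef[0]:.2f}, b = {coef[1]:.2f}, AP RMSE = {rmse:.2f} mm")
    ```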

  14. Economic Expansion Is a Major Determinant of Physician Supply and Utilization

    PubMed Central

    Cooper, Richard A; Getzen, Thomas E; Laud, Prakash

    2003-01-01

    Objective To assess the relationship between levels of economic development and the supply and utilization of physicians. Data Sources Data were obtained from the American Medical Association, American Osteopathic Association, Organization for Economic Cooperation and Development (OECD), Bureau of Health Professions, Bureau of Labor Statistics, Bureau of Economic Analysis, Census Bureau, Health Care Financing Administration, and historical sources. Study Design Economic development, expressed as real per capita gross domestic product (GDP) or personal income, was correlated with per capita health care labor and physician supply within countries and states over periods of time spanning 25–70 years and across countries, states, and metropolitan statistical areas (MSAs) at multiple points in time over periods of up to 30 years. Longitudinal data were analyzed in four complementary ways: (1) simple univariate regressions; (2) regressions in which temporal trends were partialled out; (3) time series comparing percentage differences across segments of time; and (4) a bivariate Granger causality test. Cross-sectional data were assessed at multiple time points by means of univariate regression analyses. Principal Findings Under each analytic scenario, physician supply correlated with differences in GDP or personal income. Longitudinal correlations were associated with temporal lags of approximately 5 years for health employment and 10 years for changes in physician supply. The magnitude of changes in per capita physician supply in the United States was equivalent to differences of approximately 0.75 percent for each 1.0 percent difference in GDP. The greatest effects of economic expansion were on the medical specialties, whereas the surgical and hospital-based specialties were affected to a lesser degree, and levels of economic expansion had little influence on family/general practice. Conclusions Economic expansion has a strong, lagged relationship with changes in physician supply. This suggests that economic projections could serve as a gauge for projecting the future utilization of physician services. PMID:12785567
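
    A rough sketch of the lagged-regression reasoning behind such longitudinal analyses, using synthetic annual series rather than the study's data or its exact specification:

        import numpy as np

        # Illustrative lagged regression: does per-capita GDP predict physician supply roughly a
        # decade later? Hypothetical annual series; not the authors' data, lags, or model.
        rng = np.random.default_rng(2)
        years = 60
        gdp = np.cumprod(1 + rng.normal(0.02, 0.01, years))            # index of real per-capita GDP
        lag = 10
        supply = 2.0 * np.roll(gdp, lag) + rng.normal(0, 0.05, years)  # supply tracks lagged GDP
        supply[:lag] = np.nan                                          # lagged values undefined early on

        x, y = gdp[:years - lag], supply[lag:]
        X = np.column_stack([x, np.ones_like(x)])
        (beta, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)
        r = np.corrcoef(x, y)[0, 1]
        print(f"lag-{lag} slope {beta:.2f}, correlation r = {r:.2f}")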

  15. New generalized corresponding states correlation for surface tension of normal saturated liquids

    NASA Astrophysics Data System (ADS)

    Yi, Huili; Tian, Jianxiang

    2015-08-01

    A new simple correlation based on the principle of corresponding states is proposed to estimate the temperature-dependent surface tension of normal saturated liquids. The new correlation contains three coefficients obtained by fitting 17,051 surface tension data points for 38 saturated normal liquids. These 38 liquids include refrigerants, hydrocarbons and some inorganic liquids. The new correlation requires only the triple-point temperature, triple-point surface tension and critical-point temperature as input and is able to represent well the experimental surface tension data for each of the 38 saturated normal liquids from the triple-point temperature up to near the critical point. The new correlation gives absolute average deviation (AAD) values below 3% for all of these 38 liquids, the only exception being octane with AAD = 4.30%. Thus, the new correlation gives better overall results than other correlations for these 38 normal saturated liquids.
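
    For illustration only, the sketch below uses a generic corresponding-states surface-tension form anchored at the triple point; the exponent and functional form are assumptions, not the fitted three-coefficient correlation of the paper:

        import numpy as np

        def surface_tension_cs(T, T_tr, T_c, sigma_tr, n=1.2):
            """Illustrative corresponding-states form: sigma scales with reduced distance from the
            critical point, anchored at the triple point. The exponent n and the functional form
            are assumptions for illustration, not the paper's correlation."""
            tau = (T_c - T) / (T_c - T_tr)   # 1 at the triple point, 0 at the critical point
            return sigma_tr * np.power(tau, n)

        # Example with water-like inputs (triple point 273.16 K, critical point 647.1 K,
        # triple-point surface tension about 75.6 mN/m)
        for T in (300.0, 400.0, 500.0, 600.0):
            print(T, "K ->", round(surface_tension_cs(T, 273.16, 647.1, 75.6), 1), "mN/m")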

  16. Calibration of Seismic Attributes for Reservoir Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pennington, Wayne D.; Acevedo, Horacio; Green, Aaron

    2002-01-29

    This project has completed the initially scheduled third year of the contract and is beginning a fourth year, designed to expand the tech-transfer aspects of the project. From the Stratton data set, we demonstrated that apparent correlations between attributes derived along 'phantom' horizons are artifacts of isopach changes; only if the interpreter understands that the interpretation is based on this correlation with bed thickening or thinning can reliable interpretations of channel horizons and facies be made. From the Boonsville data set, we developed techniques to use conventional seismic attributes, including seismic facies generated under various neural network procedures, to subdivide regional facies determined from logs into productive and non-productive subfacies, and developed a method involving cross-correlation of seismic waveforms to provide a reliable map of the various facies present in the area. The Teal South data set provided a surprising set of data, leading us to develop a pressure-dependent velocity relationship and to conclude that nearby reservoirs are undergoing a pressure drop in response to the production of the main reservoir, implying that oil is being lost through their spill points, never to be produced. The Wamsutter data set led to the use of unconventional attributes, including lateral incoherence and horizon-dependent impedance variations, to indicate regions of former sand bars and current high pressure, respectively, and to the evaluation of various upscaling routines.
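
    The waveform cross-correlation step mentioned above can be sketched as a zero-lag normalized correlation of each trace against a reference waveform (synthetic traces; not the project's actual workflow):

        import numpy as np

        def waveform_similarity(traces, reference):
            """Zero-lag normalized cross-correlation of each trace with a reference waveform.
            traces: (n_traces, n_samples); reference: (n_samples,). Returns values in [-1, 1]."""
            t = traces - traces.mean(axis=1, keepdims=True)
            r = reference - reference.mean()
            num = t @ r
            den = np.linalg.norm(t, axis=1) * np.linalg.norm(r)
            return num / den

        rng = np.random.default_rng(3)
        reference = np.sin(np.linspace(0, 6 * np.pi, 64))           # idealized waveform for one facies
        traces = 0.5 * rng.standard_normal((100, 64)) + reference   # noisy traces resembling it
        print("mean similarity %.2f" % waveform_similarity(traces, reference).mean())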

  17. Nonlinear Entanglement and its Application to Generating Cat States

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Assad, S. M.; Grosse, N. B.; Li, X. Y.; Reid, M. D.; Lam, P. K.

    2015-03-01

    The Einstein-Podolsky-Rosen (EPR) paradox, which was formulated to argue for the incompleteness of quantum mechanics, has since metamorphosed into a resource for quantum information. EPR entanglement describes the strength of linear correlations between two objects in terms of a pair of conjugate observables in relation to the Heisenberg uncertainty limit. We propose that entanglement can be extended to include nonlinear correlations. We show that two driven harmonic oscillators coupled via a third-order nonlinearity can exhibit quadratic-like nonlinear entanglement which, after a projective measurement on one of the oscillators, collapses the other into a cat state of tunable size.

  18. Nonlinear entanglement and its application to generating cat states.

    PubMed

    Shen, Y; Assad, S M; Grosse, N B; Li, X Y; Reid, M D; Lam, P K

    2015-03-13

    The Einstein-Podolsky-Rosen (EPR) paradox, which was formulated to argue for the incompleteness of quantum mechanics, has since metamorphosed into a resource for quantum information. EPR entanglement describes the strength of linear correlations between two objects in terms of a pair of conjugate observables in relation to the Heisenberg uncertainty limit. We propose that entanglement can be extended to include nonlinear correlations. We show that two driven harmonic oscillators coupled via a third-order nonlinearity can exhibit quadratic-like nonlinear entanglement which, after a projective measurement on one of the oscillators, collapses the other into a cat state of tunable size.

  19. Superconductivity from a non-Fermi-liquid metal: Kondo fluctuation mechanism in slave-fermion theory

    NASA Astrophysics Data System (ADS)

    Kim, Ki-Seok

    2010-03-01

    We propose a Kondo fluctuation mechanism of superconductivity, differentiated from spin-fluctuation theory, the standard model for unconventional superconductivity in the weak-coupling approach. Based on the U(1) slave-fermion representation of an effective Anderson lattice model, where localized spins are described by Schwinger boson theory and hybridization or Kondo fluctuations weaken antiferromagnetic correlations of the localized spins, we found in our recent study an antiferromagnetic quantum critical point between an antiferromagnetic metal and a heavy-fermion metal. The Kondo-induced antiferromagnetic quantum critical point was shown to be described by both conduction electrons and fermionic holons interacting with critical spin fluctuations given by deconfined bosonic spinons with spin quantum number 1/2. Surprisingly, such critical modes turned out to be described by the dynamical exponent z = 3, giving rise to well-known non-Fermi-liquid physics such as a divergent Grüneisen ratio with exponent 2/3 and temperature-linear resistivity in three dimensions. We find that the z = 3 antiferromagnetic quantum critical point becomes unstable against superconductivity, where critical spinon excitations give rise to pairing correlations between conduction electrons and between fermionic holons, respectively, via hybridization fluctuations. These two kinds of pairing correlations result in multigap unconventional superconductivity around the antiferromagnetic quantum critical point of the slave-fermion theory, where s-wave pairing is generically not favored due to strong correlations. We show that the ratios of the superconducting gaps for conduction electrons Δc and holons Δf to the transition temperature Tc are 2Δc/Tc ~ 9 and 2Δf/Tc ~ O(10⁻¹), remarkably consistent with CeCoIn5. A fingerprint of the Kondo mechanism is the emergence of two kinds of resonance modes in not only spin but also charge fluctuations, where the charge resonance mode at an antiferromagnetic wave vector originates from d-wave pairing of spinless holons. We discuss how the Kondo fluctuation theory differs from the spin-fluctuation approach.

  20. Anesthetic technique for inferior alveolar nerve block: a new approach

    PubMed Central

    PALTI, Dafna Geller; de ALMEIDA, Cristiane Machado; RODRIGUES, Antonio de Castro; ANDREO, Jesus Carlos; LIMA, José Eduardo Oliveira

    2011-01-01

    Background Effective pain control in Dentistry may be achieved by local anesthetic techniques. The success of the anesthetic technique in mandibular structures depends on the proximity of the needle tip to the mandibular foramen at the moment of anesthetic injection into the pterygomandibular region. Two techniques are available to reach the inferior alveolar nerve where it enters the mandibular canal, namely indirect and direct; these techniques differ in the number of movements required. Data demonstrate that the indirect technique is considered ineffective in 15% of cases and the direct technique in 13-29% of cases. Objective The aim of this study was to describe an alternative technique for inferior alveolar nerve block using several anatomical points for reference, simplifying the procedure and enabling greater success and a more rapid learning curve. Materials and Methods A total of 193 mandibles (146 with permanent dentition and 47 with primary dentition) from dry skulls were used to establish a relationship between the teeth and the mandibular foramen. By using two wires, the first passing through the mesiobuccal groove and middle point of the mesial slope of the distolingual cusp of the primary second molar or permanent first molar (right side), and the second following the occlusal plane (left side), a line can be obtained whose projection coincides with the left mandibular foramen. Results The obtained data showed correlation in 82.88% of cases using the permanent first molar, and in 93.62% of cases using the primary second molar. Conclusion This method is potentially effective for inferior alveolar nerve block, especially in Pediatric Dentistry. PMID:21437463

  1. Auroras Now! - Auroral nowcasting service for Hotels in Finnish Lapland and its performance during winter 2003-2004

    NASA Astrophysics Data System (ADS)

    Kauristie, K.; Mälkki, A.; Pulkkinen, A.; Nevanlinna, H.; Ketola, A.; Tulkki, V.; Raita, T.; Blanco, A.

    2004-12-01

    The European Space Agency is currently supporting 17 Service Development Activities (SDA) within its Space Weather Pilot Project. Auroras Now!, one of the SDAs, was operated during November 2003 - March 2004 as its pilot season. The service includes a public part freely accessible on the Internet (http://aurora.fmi.fi) and a private part visible only to the customers of two hotels in Finnish Lapland through the hotels' internal TV systems. The nowcasting system is based on the magnetic recordings of two geophysical observatories, Sodankylä (SOD, MLAT ~64 N) and Nurmijärvi (NUR, MLAT ~57 N). The probability of auroral occurrence is continuously characterised with an empirically determined three-level scale. The index is updated once per hour and is based on the magnetic field variations recorded at the observatories. During dark hours the near-real-time auroral images acquired at SOD are displayed. The hotel service also includes cloudiness predictions for the coming night. During the pilot season the reliability of the three-level magnetic alarm system was evaluated weekly by comparing its predictions with auroral observations from the nearby all-sky camera. Successful hits and failures were scored according to predetermined rules. The highest credit points were given when the system managed to spot auroras in a timely manner and predict their brightness correctly. Maximum penalty points were given when the alarm missed clear bright auroras lasting for more than one hour. In this presentation we analyse the results of the evaluation, present some ideas to further sharpen the procedure, and discuss more generally the correlation between local auroral and magnetic activity.

  2. On the background independence of two-dimensional topological gravity

    NASA Astrophysics Data System (ADS)

    Imbimbo, Camillo

    1995-04-01

    We formulate two-dimensional topological gravity in a background covariant Lagrangian framework. We derive the Ward identities which characterize the dependence of physical correlators on the background world-sheet metric defining the gauge-slice. We point out the existence of an "anomaly" in Ward identities involving correlators of observables with higher ghost number. This "anomaly" represents an obstruction for physical correlators to be globally defined forms on moduli space which could be integrated in a background independent way. Starting from the anomalous Ward identities, we derive "descent" equations whose solutions are cocycles of the Lie algebra of the diffeomorphism group with values in the space of local forms on the moduli space. We solve the descent equations and provide explicit formulas for the cocycles, which allow for the definition of background independent integrals of physical correlators on the moduli space.

  3. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    PubMed

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-07-01

    The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. To obtain the specific 3D model of a patient's breast, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point to the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. First, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points or regions of points can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings, scans and 2D pictures, assessing both reconstruction frameworks with very positive results.
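
    A minimal sketch of a weighted, regularized projection into a PCA shape space, assuming a generic ridge-style regularizer and hypothetical variable names rather than the paper's exact formulation:

        import numpy as np

        def weighted_regularized_projection(x, mean_shape, basis, weights, lam=1.0):
            """Project an observed point set onto a statistical shape space by solving
            min_a || W^(1/2) (mean + B a - x) ||^2 + lam * ||a||^2, where B is the (3N x k)
            PCA basis and W holds per-coordinate weights. A generic weighted ridge projection
            for illustration; names and the exact regularizer are assumptions."""
            W = np.diag(weights)
            A = basis.T @ W @ basis + lam * np.eye(basis.shape[1])
            b = basis.T @ W @ (x - mean_shape)
            coeffs = np.linalg.solve(A, b)
            return mean_shape + basis @ coeffs, coeffs

        # Tiny synthetic example: 5 points (15 coords), 2 shape modes, first landmark weighted 10x
        rng = np.random.default_rng(4)
        mean_shape = rng.standard_normal(15)
        basis = np.linalg.qr(rng.standard_normal((15, 2)))[0]
        target = mean_shape + basis @ np.array([1.5, -0.7]) + 0.05 * rng.standard_normal(15)
        weights = np.ones(15); weights[:3] = 10.0
        fitted, coeffs = weighted_regularized_projection(target, mean_shape, basis, weights, lam=0.1)
        print("recovered shape coefficients:", np.round(coeffs, 2))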

  4. Reviewing Teamwork Activities.

    ERIC Educational Resources Information Center

    Larcher, Bob

    2001-01-01

    When working on tasks or projects, teams typically go through four phases: initiation, creation, elaboration, and completion. Each phase is explained, and two examples illustrate how the model is used after an orienteering exercise to help corporate management teams understand their way of functioning and identify strengths and development points.…

  5. WHEN TWO WRONGS MAKE A RIGHT: SECOND BEST POINT-NONPOINT TRADING RATIOS. (R828684C004)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  6. Quick-connect threaded attachment joint

    NASA Technical Reports Server (NTRS)

    Lucy, M. H.; Messick, W. R.; Vasquez, P.

    1979-01-01

    Joint is self-aligning and tightens with only sixty-five degrees of rotation for quick connects and disconnects. Made of injection-molded plastics or cast or machined aluminum, joint can carry wires, tubes, liquids, or gases. When two parts of joint are brought together, their shapes align them. Small projections on male section and slots on female section further aid alignment; slight rotation of male form engages projections in slots. At this point, threads engage and male section is rotated until joint is fully engaged.

  7. EPSAT - A workbench for designing high-power systems for the space environment

    NASA Technical Reports Server (NTRS)

    Kuharski, R. A.; Jongeward, G. A.; Wilcox, K. G.; Kennedy, E. M.; Stevens, N. J.; Putnam, R. M.; Roche, J. C.

    1990-01-01

    The Environment Power System Analysis Tool (EPSAT) is being developed to provide space power system design engineers with an analysis tool for determining the performance of power systems in both naturally occurring and self-induced environments. This paper presents the results of the project after two years of a three-year development program. The relevance of the project results for SDI is pointed out, and models of the interaction between the environment and power systems are discussed.

  8. Sears Point Tidal Marsh Restoration Project: Phase I

    EPA Pesticide Factsheets

    Information about the SFBWQP Sears Point Tidal Marsh Restoration Project: Phase I project, part of an EPA competitive grant program to improve SF Bay water quality focused on restoring impaired waters and enhancing aquatic resources.

  9. Intermittency and exotic channels

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Peschanski, R.

    1994-11-01

    It is pointed out that accurate measurements of short-range two-particle correlations in like-charge Kπ and in π0π0 channels should be very helpful in determining the origin of the ``intermittency'' phenomenon observed recently for the like-charge pion pairs.

  10. Connection between two statistical approaches for the modelling of particle velocity and concentration distributions in turbulent flow: The mesoscopic Eulerian formalism and the two-point probability density function method

    NASA Astrophysics Data System (ADS)

    Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre

    2006-12-01

    The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.

  11. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    PubMed Central

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it but also to the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314
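
    For context, the sketch below shows one update of a standard affine projection adaptive filter, the family of algorithms on which MAPA builds; the multi-power weighting itself is not reproduced:

        import numpy as np

        def affine_projection_step(w, A, d, mu=0.5, delta=1e-4):
            """One update of a standard affine projection adaptive filter.
            A: (K, L) matrix of the K most recent input vectors, d: (K,) desired outputs,
            w: (L,) filter weights. Generic APA update shown only for illustration."""
            e = d - A @ w                                    # a-priori errors for the K regressors
            gain = A.T @ np.linalg.solve(A @ A.T + delta * np.eye(A.shape[0]), e)
            return w + mu * gain

        # Toy identification of an unknown 8-tap response from noisy data
        rng = np.random.default_rng(5)
        true_w = rng.standard_normal(8)
        w = np.zeros(8)
        x = rng.standard_normal(5000)
        for n in range(12, x.size):
            A = np.array([x[n - k - 8:n - k][::-1] for k in range(4)])   # K = 4 recent regressors
            d = A @ true_w + 0.01 * rng.standard_normal(4)
            w = affine_projection_step(w, A, d)
        print("max weight error %.4f" % np.abs(w - true_w).max())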

  12. Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Heine, Christian; Weber, Gunther H.

    2012-05-04

    Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second, local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.

  13. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    PubMed

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it but also to the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.

  14. Cosmological velocity correlations - Observations and model predictions

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Davis, Marc; Strauss, Michael A.; White, Simon D. M.; Yahil, Amos

    1989-01-01

    By applying the present simple statistics for two-point cosmological peculiar velocity-correlation measurements to the actual data sets of the Local Supercluster spiral galaxy sample of Aaronson et al. (1982) and the elliptical galaxy sample of Burstein et al. (1987), as well as to the velocity field predicted by the distribution of IRAS galaxies, a coherence length of 1100-1600 km/sec is obtained. The coherence length is defined as the separation at which the correlations drop to half their zero-lag value. These results are compared with predictions from two models of large-scale structure formation: the cold dark matter model and the baryon isocurvature model proposed by Peebles (1980). N-body simulations of these models are performed to check the linear-theory predictions and measure sampling fluctuations.
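
    A small sketch of the coherence-length definition used above, applied to a made-up correlation function rather than the measured one:

        import numpy as np

        # Coherence length: the separation at which a velocity correlation function falls to
        # half its zero-lag value. The correlation values below are invented for illustration.
        r = np.linspace(0.0, 6000.0, 61)                         # separation, km/s
        psi = 900.0**2 * np.exp(-r / 2000.0)                     # hypothetical correlation function, (km/s)^2
        half = 0.5 * psi[0]
        coherence_length = np.interp(half, psi[::-1], r[::-1])   # invert the monotone decline
        print(f"coherence length = {coherence_length:.0f} km/s")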

  15. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
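
    A minimal sketch of colormap sorting and index remapping, using a synthetic palette and luminance as one possible sort key (not necessarily the sorting criterion studied in the paper):

        import numpy as np

        # Sort a colormap to restore index correlation before predictive coding.
        rng = np.random.default_rng(6)
        palette = rng.integers(0, 256, size=(256, 3))                  # RGB colormap in arbitrary order
        luminance = palette @ np.array([0.299, 0.587, 0.114])          # perceptual-ish sort key
        order = np.argsort(luminance)                                  # new palette ordering
        inverse = np.empty(256, dtype=int); inverse[order] = np.arange(256)

        indexed_image = rng.integers(0, 256, size=(64, 64))            # stand-in color-mapped image
        remapped = inverse[indexed_image]                              # indices now follow luminance order
        # After remapping, neighboring pixels of similar color receive numerically close indices,
        # so a simple predictor (e.g. DPCM against the left neighbor) produces smaller residuals.
        residual = np.diff(remapped, axis=1)
        print("mean |residual| after remap:", np.abs(residual).mean())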

  16. Building energy information systems: Synthesis of costs, savings, and best-practice uses

    DOE PAGES

    Granderson, Jessica; Lin, Guanjing

    2016-02-19

    Building energy information systems (EIS) are a powerful customer-facing monitoring and analytical technology that can enable up to 20% site energy savings for buildings. Few technologies are as heavily marketed, but in spite of their potential, EIS remain an under-adopted emerging technology. One reason is the lack of information on purchase costs and associated energy savings. While insightful, the growing body of individual case studies has not provided industry the information needed to establish the business case for investment. Vastly different energy and economic metrics prevent generalizable conclusions. This paper addresses three common questions concerning EIS use: what are the costs, what have users saved, and which best practices drive deeper savings? We present a large-scale assessment of the value proposition for EIS use based on data from over two-dozen organizations. Participants achieved year-over-year median site and portfolio savings of 17% and 8%, respectively; they reported that this performance would not have been possible without the EIS. The median five-year cost of EIS software ownership (up-front and ongoing costs) was calculated to be $1,800 per monitoring point (kilowatt meter points were most common), with a median portfolio-wide implementation size of approximately 200 points. In this paper, we present an analysis of the relationship between key implementation factors and achieved energy reductions. Extent of efficiency projects, building energy performance prior to EIS installation, depth of metering, and duration of EIS were strongly correlated with greater savings. As a result, we also identify the best-practice uses of EIS associated with greater energy savings.

  17. Building energy information systems: Synthesis of costs, savings, and best-practice uses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Lin, Guanjing

    Building energy information systems (EIS) are a powerful customer-facing monitoring and analytical technology that can enable up to 20% site energy savings for buildings. Few technologies are as heavily marketed, but in spite of their potential, EIS remain an under-adopted emerging technology. One reason is the lack of information on purchase costs and associated energy savings. While insightful, the growing body of individual case studies has not provided industry the information needed to establish the business case for investment. Vastly different energy and economic metrics prevent generalizable conclusions. This paper addresses three common questions concerning EIS use: what are the costs, what have users saved, and which best practices drive deeper savings? We present a large-scale assessment of the value proposition for EIS use based on data from over two-dozen organizations. Participants achieved year-over-year median site and portfolio savings of 17% and 8%, respectively; they reported that this performance would not have been possible without the EIS. The median five-year cost of EIS software ownership (up-front and ongoing costs) was calculated to be $1,800 per monitoring point (kilowatt meter points were most common), with a median portfolio-wide implementation size of approximately 200 points. In this paper, we present an analysis of the relationship between key implementation factors and achieved energy reductions. Extent of efficiency projects, building energy performance prior to EIS installation, depth of metering, and duration of EIS were strongly correlated with greater savings. As a result, we also identify the best-practice uses of EIS associated with greater energy savings.

  18. 7 CFR 4280.42 - Application evaluation and selection.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Project plans, its management, and, if applicable, its products and operating plans. (The business plan...) Nature of the Project. Rural Development will award up to 60 points based on whether the Project: (i) Is... connection project (such as streets or utilities)—20 points; (ii) Provides Technical Assistance to rural...

  19. Neurochemical, morphologic, and laminar characterization of cortical projection neurons in the cingulate motor areas of the macaque monkey

    NASA Technical Reports Server (NTRS)

    Nimchinsky, E. A.; Hof, P. R.; Young, W. G.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)

    1996-01-01

    The primate cingulate gyrus contains multiple cortical areas that can be distinguished by several neurochemical features, including the distribution of neurofilament protein-enriched pyramidal neurons. In addition, connectivity and functional properties indicate that there are multiple motor areas in the cortex lining the cingulate sulcus. These motor areas were targeted for analysis of potential interactions among regional specialization, connectivity, and cellular characteristics such as neurochemical profile and morphology. Specifically, intracortical injections of retrogradely transported dyes and intracellular injection were combined with immunocytochemistry to investigate neurons projecting from the cingulate motor areas to the putative forelimb region of the primary motor cortex, area M1. Two separate groups of neurons projecting to area M1 emanated from the cingulate sulcus, one anterior and one posterior, both of which furnished commissural and ipsilateral connections with area M1. The primary difference between the two populations was laminar origin, with the anterior projection originating largely in deep layers, and the posterior projection taking origin equally in superficial and deep layers. With regard to cellular morphology, the anterior projection exhibited more morphologic diversity than the posterior projection. Commissural projections from both anterior and posterior fields originated largely in layer VI. Neurofilament protein distribution was a reliable tool for localizing the two projections and for discriminating between them. Comparable proportions of the two sets of projection neurons contained neurofilament protein, although the density and distribution of the total population of neurofilament protein-enriched neurons was very different in the two subareas of origin. Within a projection, the participating neurons exhibited a high degree of morphologic heterogeneity, and no correlation was observed between somatodendritic morphology and neurofilament protein content. Thus, although the neurons that provide the anterior and posterior cingulate motor projections to area M1 differ morphologically and in laminar origin, their neurochemical profiles are similar with respect to neurofilament protein. This suggests that neurochemical phenotype may be a more important unifying feature for corticocortical projections than morphology.

  20. Leniency and halo effects in marking undergraduate short research projects.

    PubMed

    McKinstry, Brian H; Cameron, Helen S; Elton, Robert A; Riley, Simon C

    2004-11-29

    Supervisors are often involved in the assessment of projects they have supervised themselves. Previous research suggests that detailed marking sheets may alleviate leniency and halo effects. We set out to determine whether, despite using such a marking schedule, leniency and halo effects were evident in the supervisors' marking of undergraduate short research projects (special study modules, SSMs). We reviewed the grades awarded by supervisors, second markers and control markers to the written reports of 4th-year medical students who had participated in an SSM during two full academic years (n = 399). Paired t-tests were used to compare mean marks, Pearson correlation to assess agreement between marks, and multiple linear regression to test the prediction of one mark from several others adjusted for one another. There was a highly significant difference of approximately half a grade between supervisors and second markers, with supervisors marking higher (t = 3.12, p < 0.01, difference in grade score = 0.42, 95% CI for mean difference 0.18-0.80). There was a high correlation between the two marks awarded by the supervisor for performance of the project and for the written report (r = 0.75), but a low to modest correlation between supervisor and second marker (r = 0.28). Linear regression analysis of the influence of the supervisors' mark for performance on their mark for the report gave a non-significant result. This suggests a leniency effect but no halo effect. This study shows that with the use of a structured marking sheet for the assessment of undergraduate medical students, supervisors' marks are not associated with a halo effect, but leniency does occur. As supervisor assessment is becoming more common in both undergraduate and postgraduate teaching, new ways to improve objectivity in marking and to address the leniency of supervisors should be sought.
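
    For illustration, the core comparisons described above (paired t-test for leniency and Pearson correlation for marker agreement) can be sketched on synthetic grades; the leniency offset assumed below is arbitrary and not the study's data:

        import numpy as np
        from scipy.stats import ttest_rel, pearsonr

        # Synthetic grades for 399 projects, with an assumed supervisor leniency of ~0.4 of a grade.
        rng = np.random.default_rng(7)
        true_quality = rng.normal(3.0, 0.8, 399)                   # latent project quality
        supervisor = true_quality + 0.4 + rng.normal(0, 0.6, 399)
        second_marker = true_quality + rng.normal(0, 0.6, 399)

        t, p = ttest_rel(supervisor, second_marker)                # paired t-test for leniency
        r, _ = pearsonr(supervisor, second_marker)                 # agreement between markers
        diff = np.mean(supervisor - second_marker)
        print(f"mean difference {diff:.2f} grades, t = {t:.2f}, p = {p:.3g}, r = {r:.2f}")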
