Sample records for determinantal point processes

  1. Enumerative Geometry of Hyperplane Arrangements

    DTIC Science & Technology

    2012-05-11

    they show that the Zariski closure of M(L), viewed as a subvariety of (C^{n+1})^ℓ, is not the variety given only by the solutions to the determinantal ...equations supplied by L. These determinantal equations are presented nicely in Terao [28]. We study in detail the case when L is the lattice of the braid...4.1.1. Let A be a generic arrangement of 4 lines through 8 points in general position in P^2. There are no triple points in A, so there are no determinantal

  2. Discovering Implicit Networks from Point Process Data

    DTIC Science & Technology

    2013-08-03

    SOCIAL NETWORK ANALYSIS (Szell et al., Nature 2012) (a) Adjacency...processes: seismology, epidemiology, economics. Modeling dependence is challenging ("beyond Poisson"): Strauss and Gibbs processes; determinantal

  3. On Pfaffian Random Point Fields

    NASA Astrophysics Data System (ADS)

    Kargin, V.

    2014-02-01

    We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
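
    A quick way to make the Pfaffian-determinant relationship concrete is the classical identity pf(A)² = det(A) for an antisymmetric matrix A. The sketch below (a plain Python/NumPy illustration, not the paper's quaternion machinery) computes the Pfaffian by recursive expansion along the first row and checks the identity numerically.

      import numpy as np

      def pfaffian(a):
          # Pfaffian of a (2n x 2n) antisymmetric matrix by recursive
          # expansion along the first row; O(n!) cost, demo sizes only.
          n = a.shape[0]
          if n == 0:
              return 1.0
          if n % 2:
              return 0.0
          total = 0.0
          for j in range(1, n):
              keep = [k for k in range(n) if k not in (0, j)]
              sign = 1.0 if j % 2 == 1 else -1.0
              total += sign * a[0, j] * pfaffian(a[np.ix_(keep, keep)])
          return total

      rng = np.random.default_rng(0)
      m = rng.standard_normal((6, 6))
      a = m - m.T                                  # antisymmetric test matrix
      print(pfaffian(a) ** 2, np.linalg.det(a))    # the two values should agree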

  4. Decay of Complex-Time Determinantal and Pfaffian Correlation Functionals in Lattices

    NASA Astrophysics Data System (ADS)

    Aza, N. J. B.; Bru, J.-B.; de Siqueira Pedra, W.

    2018-04-01

    We supplement the determinantal and Pfaffian bounds of Sims and Warzel (Commun Math Phys 347:903-931, 2016) for many-body localization of quasi-free fermions, by considering the high dimensional case and complex-time correlations. Our proof uses the analyticity of correlation functions via the Hadamard three-line theorem. We show that the dynamical localization for the one-particle system yields the dynamical localization for the many-point fermionic correlation functions, with respect to the Hausdorff distance in the determinantal case. In Sims and Warzel (2016), a stronger notion of decay for many-particle configurations was used but only at dimension one and for real times. Considering determinantal and Pfaffian correlation functionals for complex times is important in the study of weakly interacting fermions.

  5. Renormalized Energy Concentration in Random Matrices

    NASA Astrophysics Data System (ADS)

    Borodin, Alexei; Serfaty, Sylvia

    2013-05-01

    We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line (β = 1, 2, 4), and for the Ginibre point process and the zeros of the Gaussian analytic function process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.

  6. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) correspond exactly to the grand canonical measure of free fermions with classical boundary conditions.

  7. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated in a variety of microwave applications.
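
    The abstract fully describes the strategy (coarse scan, minimum-modulus-eigenvalue objective, local refinement), so a toy version is easy to sketch. The matrix M(z) below is invented for illustration and the code is not CCOMP itself; it only mimics the candidate-detection-plus-minimization idea with NumPy/SciPy.

      import numpy as np
      from scipy.optimize import minimize

      def system_matrix(z):
          # Hypothetical system; det M(z) = (z - 2)(z^2 + 1) vanishes at 2, +/-1j.
          return np.array([[z - 2.0, 1.0],
                           [0.0, z * z + 1.0]], dtype=complex)

      def min_mod_eig(xy):
          z = complex(xy[0], xy[1])
          return np.abs(np.linalg.eigvals(system_matrix(z))).min()

      # Coarse scan of the prescribed domain [-3, 3] x [-2, 2] for candidates.
      xs, ys = np.linspace(-3, 3, 31), np.linspace(-2, 2, 21)
      cands = sorted(((min_mod_eig((x, y)), x, y) for x in xs for y in ys))[:8]

      roots = set()
      for _, x, y in cands:   # refine each candidate with a local minimizer
          res = minimize(min_mod_eig, (x, y), method="Nelder-Mead",
                         options={"xatol": 1e-12, "fatol": 1e-12})
          if res.fun < 1e-6:
              roots.add((round(res.x[0], 4), round(res.x[1], 4)))
      print(roots)            # expect roughly {(2, 0), (0, 1), (0, -1)}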

  8. Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.

    2008-11-01

    It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line ℝ. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space ℝ^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in ℝ^d. We also demonstrate that spin-polarized fermionic systems in ℝ^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| (k → 0). The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^{d+1}, which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[−κ(d) r^{d+1}] for large r and finite d, where κ(d) is a positive d-dependent constant. We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
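
    For the d = 1 member of this family, the sine process at unit density, the pair correlation is explicit, g2(r) = 1 − (sin(πr)/(πr))², and the small-k behavior S(k) ≈ |k|/(2π) quoted above can be checked by direct numerical Fourier transform; a minimal sketch:

      import numpy as np

      # Pair correlation of the d = 1 (sine-kernel) process at unit density.
      r = np.linspace(1e-6, 200.0, 2_000_000)
      h = -(np.sin(np.pi * r) / (np.pi * r)) ** 2      # g2(r) - 1

      for k in (0.1, 0.5, 1.0, 2.0):
          # S(k) = 1 + 2 * int_0^inf (g2(r) - 1) cos(kr) dr at unit density.
          s_k = 1.0 + 2.0 * np.trapz(h * np.cos(k * r), r)
          print(k, s_k, k / (2 * np.pi))               # numeric vs exact |k|/(2 pi)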

  9. Superconductivity and non-Fermi liquid behavior near a nematic quantum critical point.

    PubMed

    Lederer, Samuel; Schattner, Yoni; Berg, Erez; Kivelson, Steven A

    2017-05-09

    Using determinantal quantum Monte Carlo, we compute the properties of a lattice model with spin [Formula: see text] itinerant electrons tuned through a quantum phase transition to an Ising nematic phase. The nematic fluctuations induce superconductivity with a broad dome in the superconducting [Formula: see text] enclosing the nematic quantum critical point. For temperatures above [Formula: see text], we see strikingly non-Fermi liquid behavior, including a "nodal-antinodal dichotomy" reminiscent of that seen in several transition metal oxides. In addition, the critical fluctuations have a strong effect on the low-frequency optical conductivity, resulting in behavior consistent with "bad metal" phenomenology.

  10. Robust and Efficient Spin Purification for Determinantal Configuration Interaction.

    PubMed

    Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G

    2017-09-12

    The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag19) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.

  11. The Polyanalytic Ginibre Ensembles

    NASA Astrophysics Data System (ADS)

    Haimi, Antti; Hedenmalm, Haakan

    2013-10-01

    For integers n, q = 1, 2, 3, …, let Pol_{n,q} denote the ℂ-linear space of polynomials in z and z̄, of degree ≤ n−1 in z and of degree ≤ q−1 in z̄. We supply Pol_{n,q} with an inner product structure (induced by a Gaussian weight with parameter m); the resulting Hilbert space is denoted by Pol_{m,n,q}. Here, it is assumed that m is a positive real. We let K_{m,n,q} denote the reproducing kernel of Pol_{m,n,q}, and study the associated determinantal process, in the limit as m, n → +∞ while n = m + O(1); the number q, the degree of polyanalyticity, is kept fixed. We call these processes polyanalytic Ginibre ensembles, because they generalize the Ginibre ensemble, the eigenvalue process of random (normal) matrices with Gaussian weight. There is a physical interpretation in terms of a system of free fermions in a uniform magnetic field so that a fixed number of the first Landau levels have been filled. We consider local blow-ups of the polyanalytic Ginibre ensembles around points in the spectral droplet, which is here the closed unit disk. We obtain asymptotics for the blow-up process, using a blow-up to characteristic distance m^{-1/2}; the typical distance is the same both for interior and for boundary points of the droplet. This amounts to obtaining the asymptotic behavior of the generating kernel K_{m,n,q}. Following (Ameur et al. in Commun. Pure Appl. Math. 63(12):1533-1584, 2010), the asymptotics of the K_{m,n,q} are rather conveniently expressed in terms of the Berezin measure (and density) [Equation not available: see fulltext.] For interior points |z| < 1, we obtain that the Berezin measure converges to δ_z in the weak-star sense, where δ_z denotes the unit point mass at z. Moreover, if we blow up to the scale of m^{-1/2} around z, we get convergence to a measure which is Gaussian for q = 1, but exhibits more complicated Fresnel zone behavior for q > 1. In contrast, for exterior points |z| > 1, we have instead that the Berezin measure converges to the harmonic measure at z with respect to the exterior disk. For boundary points, |z| = 1, the Berezin measure converges to the unit point mass at z, as with interior points, but the blow-up to the scale m^{-1/2} exhibits quite different behavior at boundary points compared with interior points. We also obtain the asymptotic boundary behavior of the 1-point function at the coarser local scale q^{1/2} m^{-1/2}.
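
    For q = 1 the process reduces to the classical Ginibre ensemble, so realizations of the droplet can be produced in a few lines: sample a matrix with i.i.d. complex Gaussian entries, take eigenvalues, and rescale by √n (a standard fact, the circular law, not code from this paper):

      import numpy as np

      n = 1000
      rng = np.random.default_rng(1)
      g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
      eig = np.linalg.eigvals(g) / np.sqrt(n)   # rescaled spectrum fills the unit disk

      # Empirical check of the circular law (uniform measure on the disk).
      print(np.mean(np.abs(eig) <= 1.0))        # close to 1
      print(np.mean(np.abs(eig) <= 0.5))        # close to 0.25, the area fraction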

  12. Stochastic series expansion simulation of the t-V model

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t-V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t-V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  13. Critical configurations (determinantal loci) for range and range difference satellite networks

    NASA Technical Reports Server (NTRS)

    Tsimis, E.

    1973-01-01

    The observational modes of Geometric Satellite Geodesy are discussed. The geometrical analysis of the problem yielded a regression model for the adjustment of the observations, along with a suitable and convenient metric for the least-squares criterion. The determinantal loci (critical configurations) for range networks are analyzed. An attempt is made to apply elements of the theory of invariants for this purpose. The use of continuously measured range differences for loci determination is proposed.

  14. Basic Research in the Mathematical Foundations of Stability Theory, Control Theory and Numerical Linear Algebra.

    DTIC Science & Technology

    1979-09-01

    without determinantal divisors, Linear and Multilinear Algebra 7(1979), 107-109. 4. The use of integral operators in number theory (with C. Ryavec and...Gersgorin revisited, to appear in Letters in Linear Algebra. 15. A surprising determinantal inequality for real matrices (with C.R. Johnson), to appear in...Analysis: An Essay Concerning the Limitations of Some Mathematical Methods in the Social, Political and Biological Sciences, David Berlinski, MIT Press

  15. Data on green tea flavor determinantes as affected by cultivars and manufacturing processes.

    PubMed

    Han, Zhuo-Xiao; Rana, Mohammad M; Liu, Guo-Feng; Gao, Ming-Jun; Li, Da-Xiang; Wu, Fu-Guang; Li, Xin-Bao; Wan, Xiao-Chun; Wei, Shu

    2017-02-01

    This paper presents data related to an article entitled "Green tea flavor determinants and their changes over manufacturing processes" (Han et al., 2016) [1]. Green tea samples were prepared with steaming and pan firing treatments from the tender leaves of tea cultivars 'Bai-Sang Cha' ('BAS') and 'Fuding-Dabai Cha' ('FUD'). Aroma compounds from the tea infusions were detected and quantified using HS-SPME coupled with GC/MS. Sensory evaluation was also made for characteristic tea flavor. The data shows the abundances of the detected aroma compounds, their threshold values and odor characteristics in the two differently processed tea samples as well as two different cultivars.

  16. Análisis de los determinantes socioeconómicos del gasto de bolsillo en medicamentos en seis zonas geográficas de Panamá.

    PubMed

    Herrera-Ballesteros, Victor H; Castro, Franz; Gómez, Beatriz

    2018-04-27

    To characterize private out-of-pocket expenditure on medicines as a function of sociodemographic and socioeconomic determinants. MATERIALS AND METHODS: The data source is the 2014 Encuesta de Gasto de Bolsillo en Medicamentos (Out-of-Pocket Expenditure on Medicines Survey). Private out-of-pocket expenditure was characterized using sociodemographic (SOD) and socioeconomic (SES) explanatory variables. Principal-component factor analysis, logistic regression and simple linear regression were performed. The odds ratios show that education and geographic zone are fundamental determinants of out-of-pocket expenditure. Medicines are necessity goods, and out-of-pocket expenditure increases by an average of 2% for each additional year of chronological age. The most impoverished zones, especially the indigenous ones, are more vulnerable with respect to access to medicines, which implies a greater risk of catastrophic expenditure at lower income levels, given the higher prevalence of chronic diseases. Copyright © 2018. Published by Elsevier Inc.

  17. Nondynamical correlation energy in model molecular systems

    NASA Astrophysics Data System (ADS)

    Chojnacki, Henryk

    The hypersurfaces for the deprotonation processes have been studied at the nonempirical level for the H3O+, NH4+, PH4+, and H3S+ cations within their correlation-consistent basis sets. The potential energy curves were calculated and nondynamical correlation energies analyzed. We have found that the restricted Hartree-Fock wavefunction leads to an improper dissociation limit and, in the three latter cases, requires a multireference description. We conclude that these systems may be treated as good models for interpreting the proton transfer mechanism as well as for testing one-determinantal or multireference cases.

  18. Full Counting Statistics for Interacting Fermions with Determinantal Quantum Monte Carlo Simulations.

    PubMed

    Humeniuk, Stephan; Büchler, Hans Peter

    2017-12-08

    We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.
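
    For a purely determinantal (free-fermion) state, by way of contrast with the interacting model treated here, the counting statistics of the particle number in a subregion are classical: N is a sum of independent Bernoulli variables whose parameters are the eigenvalues of the correlation kernel restricted to the region. A sketch of that baseline, with a random projection kernel standing in for a concrete model:

      import numpy as np

      rng = np.random.default_rng(2)
      n_sites, n_part = 40, 12
      q, _ = np.linalg.qr(rng.standard_normal((n_sites, n_part)))
      kernel = q @ q.T                          # projection correlation kernel

      region = np.arange(15)                    # subregion to count particles in
      lam = np.linalg.eigvalsh(kernel[np.ix_(region, region)]).clip(0.0, 1.0)

      # N_region is a sum of independent Bernoulli(lam_j) variables, so
      # P(N = n) is the coefficient of z^n in prod_j (1 - lam_j + lam_j z).
      dist = np.array([1.0])
      for l in lam:
          dist = np.convolve(dist, [1.0 - l, l])
      print("mean:", dist @ np.arange(dist.size), "= sum(lam):", lam.sum())
      print("P(N = n):", np.round(dist, 4))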

  19. Double-scaling limits of random matrices and minimal (2m, 1) models: the merging of two cuts in a degenerate case

    NASA Astrophysics Data System (ADS)

    Marchal, O.; Cafasso, M.

    2011-04-01

    In this paper, we show that the double-scaling-limit correlation functions of a random matrix model when two cuts merge with degeneracy 2m (i.e. when y ~ x^{2m} for arbitrary values of the integer m) are the same as the determinantal formulae defined by conformal (2m, 1) models. Our approach follows the one developed by Bergère and Eynard in (2009 arXiv:0909.0854) and uses a Lax pair representation of the conformal (2m, 1) models (giving a Painlevé II integrable hierarchy) as suggested by Bleher and Eynard in (2003 J. Phys. A: Math. Gen. 36 3085). In particular we define Baker-Akhiezer functions associated with the Lax pair in order to construct a kernel which is then used to compute determinantal formulae giving the correlation functions of the double-scaling limit of a matrix model near the merging of two cuts.

  1. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorems for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  2. The Effect of Race on Determinants of Job Satisfaction.

    DTIC Science & Technology

    1985-12-01

    satisfaction was disputed by Konar. Konar claimed that Moch's inability to demonstrate that cultural, organizational, social and social psychological factors...effect of race as a determinant of an individual's job satisfaction. Analysis of the effect of gender on the determination of job satisfaction may

  3. Modified Eddington-inspired-Born-Infeld gravity with a trace term

    DOE PAGES

    Chen, Che-Yu; Bouhmadi-Lopez, Mariam; Chen, Pisin

    2016-01-22

    In this study, a modified Eddington-inspired-Born-Infeld (EiBI) theory with a pure trace term g_{μν}R added to the determinantal action is analysed from a cosmological point of view. It corresponds to the most general action constructed from a rank-two tensor that contains up to first-order terms in curvature. This term can equally be seen as a conformal factor multiplying the metric g_{μν}. This very interesting type of amendment has not been considered within the Palatini formalism despite the large number of works on the Born-Infeld-inspired theory of gravity. This model can provide smooth bouncing solutions which were not allowed in the EiBI model for the same EiBI coupling. Most interestingly, for a radiation-filled universe there are some regions of the parameter space that can naturally lead to a de Sitter inflationary stage without the need of any exotic matter field. Finally, in this model we discover a new type of cosmic "quasi-sudden" singularity, where the cosmic time derivative of the Hubble rate becomes very large but finite at a finite cosmic time.

  4. The Effects of Partner Relationship, Resource Availability, Culture, and Collectivist Tendency on Reward Allocation.

    DTIC Science & Technology

    1984-07-01

    Sozialer Kontext als Determinante der wahrgenommenen Gerechtigkeit: Absolute und relative Gleichheit der Gewinnaufteilung [Social context as a...other-serving orientation was not reduced by the same procedure. The role of the IC construct in understanding social behaviors and cultural differences...alter, directly or indirectly, the person's relationship with others within the social environment. Comprehensive literature reviews in this area have

  5. Classification Techniques for Multivariate Data Analysis.

    DTIC Science & Technology

    1980-03-28

    analysis among biologists, botanists, and ecologists, while some social scientists may prefer "typology". Other frequently encountered terms are pattern...the determinantal equation: |B − λW| = 0 (42). The solutions λ_i are the eigenvalues of the matrix W^{-1}B, as in discriminant analysis. There are t non...Statistical Package for the Social Sciences (SPSS) (14) subprogram FACTOR was used for the principal components analysis. It is designed both for the factor

  6. Population attributable fraction: planning of diseases prevention actions in Brazil.

    PubMed

    Rezende, Leandro Fórnias Machado de; Eluf-Neto, José

    2016-06-10

    Epidemiology is the study of occurrence, distribution and determinants of health-related events, including the application of that knowledge to the prevention and control of health problems. However, epidemiological studies, in most cases, have limited their research questions to determinants of health outcomes. Research related to the application of knowledge for prevention and control of diseases has been neglected. In this comment, we present a description of how population attributable fraction estimates can provide important elements for planning of prevention and control of diseases in Brazil.

  7. Instantons for vacuum decay at finite temperature in the thin wall limit

    NASA Astrophysics Data System (ADS)

    Garriga, Jaume

    1994-05-01

    In N+1 dimensions, false vacuum decay at zero temperature is dominated by the O(N+1)-symmetric instanton, a sphere of radius R_0, whereas at temperatures T >> R_0^{-1} the decay is dominated by a "cylindrical" (static) O(N)-symmetric instanton. We study the transition between these two regimes in the thin wall approximation. Taking an O(N)-symmetric ansatz for the instantons, we show that for N=2 and N=3 new periodic solutions exist in a finite temperature range in the neighborhood of T ~ R_0^{-1}. However, these solutions have a higher action than the spherical or the cylindrical one. This suggests that there is a sudden change (a first order transition) in the derivative of the nucleation rate at a certain temperature T_*, when the static instanton starts dominating. For N=1, on the other hand, the new solutions are dominant and they smoothly interpolate between the zero temperature instanton and the high temperature one, so the transition is of second order. The determinantal prefactors corresponding to the "cylindrical" instantons are discussed, and it is pointed out that the entropic contributions from massless excitations corresponding to deformations of the domain wall give rise to an exponential enhancement of the nucleation rate for T >> R_0^{-1}.

  8. Pilot Accident Potential as Related to Total Flight Experience and Recency and Frequency of Flying

    DTIC Science & Technology

    1974-09-01

    first 100 aviators on the IFARS files. Since the IFARS files are ordered by increasing social security number and the increments between successive...numbers was very large, examination of the biographical data leads us to believe that social security numbers had no bearing upon age, length of time in...determinantal equation |B − λW| = 0. The eigenvectors of the matrix W^{-1}B provide the λ_i solutions to this equation. D. J. McCrae has developed a

  9. The discrete Toda equation revisited: dual β-Grothendieck polynomials, ultradiscretization, and static solitons

    NASA Astrophysics Data System (ADS)

    Iwao, Shinsuke; Nagai, Hidetomo

    2018-04-01

    This paper presents a study of the discrete Toda equation that was introduced in 1977. In this paper, it is proved that the determinantal solution of the discrete Toda equation, obtained via the Lax formalism, is naturally related to the dual Grothendieck polynomials, a K-theoretic generalization of the Schur polynomials. A tropical permanent solution to the ultradiscrete Toda equation is also derived. The proposed method gives a tropical algebraic representation of the static solitons. Lastly, a new cellular automaton realization of the ultradiscrete Toda equation is proposed.
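
    The flavor of such cellular-automaton realizations can be seen in the basic box-ball system, the best-known soliton cellular automaton obtained from discrete integrable equations by ultradiscretization (shown for illustration; this is not the specific new realization proposed in this record). One time step sweeps a 'carrier' left to right, picking up every ball and dropping one into each empty box it passes while loaded:

      def box_ball_step(state):
          # One time step: a carrier sweeps left to right, picking up every
          # ball and dropping one into each empty box it passes while loaded.
          carrier, out = 0, []
          for s in state:
              if s == 1:
                  carrier += 1
                  out.append(0)
              elif carrier > 0:
                  carrier -= 1
                  out.append(1)
              else:
                  out.append(0)
          return out

      # A fast 3-soliton overtakes a slow 1-soliton; both re-emerge intact.
      state = [1, 1, 1, 0, 0, 1] + [0] * 24
      for _ in range(6):
          print("".join(map(str, state)))
          state = box_ball_step(state)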

  10. Disparidad en Salud: Un Fenómeno Multidimensional

    PubMed Central

    Urrutia, Maria-Teresa; Cianelli, Rosina

    2012-01-01

    Health Disparities (HD) have been at the center of public attention for the past century. They have been analyzed from diverse perspectives utilizing various terms as synonyms that can lead to confusion and inequality at the moment of operationalization. Despite this, it is important to indicate that publications agree that HD are essential determinants that must be considered in the definition of public policy. The objective of this publication is to analyze health disparities incorporating; (a) key aspects in their conceptualization, (b) the historic evolution of the concept, (c) strategies that have been generated to confront them, (d) determining factors, and (e) ethical aspects and the contribution of research in decreasing HD. PMID:22581053

  11. Painlevé equations, topological type property and reconstruction by the topological recursion

    NASA Astrophysics Data System (ADS)

    Iwaki, K.; Marchal, O.; Saenz, A.

    2018-01-01

    In this article we prove that Lax pairs associated with the six ħ-dependent Painlevé equations satisfy the topological type property proposed by Bergère, Borot and Eynard for any generic choice of the monodromy parameters. Consequently we show that one can reconstruct the formal ħ-expansion of the isomonodromic τ-function and of the determinantal formulas by applying the so-called topological recursion to the spectral curve attached to the Lax pair in all six Painlevé cases. Finally, we illustrate these results with explicit computations of the first orders of the six τ-functions.

  12. Surface-associated flagellum formation and swarming differentiation in Bacillus subtilis are controlled by the ifm locus.

    PubMed

    Senesi, Sonia; Ghelardi, Emilia; Celandroni, Francesco; Salvetti, Sara; Parisio, Eva; Galizzi, Alessandro

    2004-02-01

    Knowledge of the highly regulated processes governing the production of flagella in Bacillus subtilis is the result of several observations obtained from growing this microorganism in liquid cultures. No information is available regarding the regulation of flagellar formation in B. subtilis in response to contact with a solid surface. One of the best-characterized responses of flagellated eubacteria to surfaces is swarming motility, a coordinate cell differentiation process that allows collective movement of bacteria over solid substrates. This study describes the swarming ability of a B. subtilis hypermotile mutant harboring a mutation in the ifm locus that has long been known to affect the degree of flagellation and motility in liquid media. On solid media, the mutant produces elongated and hyperflagellated cells displaying a 10-fold increase in extracellular flagellin. In contrast to the mutant, the parental strain, as well as other laboratory strains carrying a wild-type ifm locus, fails to activate a swarm response. Furthermore, it stops producing flagella when transferred from liquid to solid medium. Evidence is provided that the absence of flagella is due to the lack of flagellin gene expression. However, restoration of flagellin synthesis in cells overexpressing sigma(D) or carrying a deletion of flgM does not recover the ability to assemble flagella. Thus, the ifm gene plays a determinantal role in the ability of B. subtilis to respond to contact with solid surfaces.

  13. Solution of the determinantal assignment problem using the Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Karcanias, Nicos; Leventides, John

    2016-02-01

    The paper provides a direct solution to the determinantal assignment problem (DAP), which unifies all frequency assignment problems of linear control theory. The current approach is based on the solvability of the exterior equation ?, where ? is an n-dimensional vector space over ?, which is an integral part of the solution of DAP. New criteria for the existence of solutions and for their computation are given, based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix ? of the vector ?, which is constructed from the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable) if and only if ?, where ?; the solution space for a decomposable ? is the space ?. This provides an alternative linear-algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.
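
    In the smallest nontrivial case, decomposability can be tested directly: for a 2-vector in Λ²(R⁴) the Grassmann variety is cut out by a single quadratic Plücker relation. The sketch below (a generic QPR check, not the paper's Grassmann-matrix rank test) verifies the relation on a 2-vector that is decomposable by construction:

      import numpy as np
      from itertools import combinations

      def wedge(a, b):
          # Plucker coordinates p_ij (i < j) of a ^ b in R^4,
          # ordered p12, p13, p14, p23, p24, p34.
          return np.array([a[i] * b[j] - a[j] * b[i]
                           for i, j in combinations(range(4), 2)])

      def plucker_qpr(p):
          # The single quadratic Plucker relation for Lambda^2(R^4):
          # zero exactly when p is decomposable.
          p12, p13, p14, p23, p24, p34 = p
          return p12 * p34 - p13 * p24 + p14 * p23

      rng = np.random.default_rng(3)
      a, b = rng.standard_normal(4), rng.standard_normal(4)
      print(plucker_qpr(wedge(a, b)))             # ~0: decomposable by construction
      print(plucker_qpr(rng.standard_normal(6)))  # generically nonzero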

  14. Fast Neutron Radiotherapy for Locally Advanced Prostate Cancer: Update of a Past Trial and Future Research Directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krieger, John N.; Krall, John M.; Laramore, George E.

    1987-01-01

    Between June, 1977 and April, 1983 the Radiation Therapy Oncology Group (RTOG) sponsored a Phase III study comparing fast neutron radiotherapy as part of a mixed beam (neutron/photon) regimen with conventional photon (x-ray) radiotherapy for patients with locally advanced (stages C and D1) adenocarcinoma of the prostate. A total of 91 analyzable patients were entered into the study, with the two treatment groups being balanced in regard to all major prognostic variables. The current analysis is for a median follow-up of 6.7 years (range 3.4-9.0). Actuarial curves are presented for local/regional control, overall survival and "determinantal" survival. The results are statistically significant in favor of the mixed beam group for all of the above parameters. At 5 years the local control rate is 81% on the mixed beam arm compared to 60% on the photon arm. Histologic evidence of residual prostatic carcinoma was documented in six patients with no clinical evidence of disease on both treatment arms. The actuarial overall survival rate at 5 years is 70% on the mixed beam arm compared to 56% on the photon arm. The determinantal survival at 5 years was 82% on the mixed beam arm compared to 61% on the photon arm. The type of therapy appeared to be the most important predictor of both local tumor control and patient survival in a step-wise Cox analysis. There was no difference in the treatment-related morbidity for the two patient groups. Mixed beam therapy may be superior to standard photon radiotherapy for treatment of locally advanced prostate cancer.

  15. Quantum many-body effects in x-ray spectra efficiently computed using a basic graph algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Yufeng; Prendergast, David

    2018-05-01

    The growing interest in using x-ray spectroscopy for refined materials characterization calls for an accurate electronic-structure theory to interpret the x-ray near-edge fine structure. In this work, we propose an efficient and unified framework to describe all the many-electron processes in a Fermi liquid after a sudden perturbation (such as a core hole). This problem has been visited by the Mahan-Nozières-De Dominicis (MND) theory, but it is intractable to implement various Feynman diagrams within first-principles calculations. Here, we adopt a nondiagrammatic approach and treat all the many-electron processes in the MND theory on an equal footing. Starting from a recently introduced determinant formalism [Phys. Rev. Lett. 118, 096402 (2017), 10.1103/PhysRevLett.118.096402], we exploit the linear dependence of determinants describing different final states involved in the spectral calculations. An elementary graph algorithm, breadth-first search, can be used to quickly identify the important determinants for shaping the spectrum, which avoids the need to evaluate a great number of vanishingly small terms. This search algorithm is performed over the tree-structure of the many-body expansion, which mimics a path-finding process. We demonstrate that the determinantal approach is computationally inexpensive even for obtaining x-ray spectra of extended systems. Using Kohn-Sham orbitals from two self-consistent fields (ground and core-excited state) as input for constructing the determinants, the calculated x-ray spectra for a number of transition metal oxides are in good agreement with experiments. Many-electron aspects beyond the Bethe-Salpeter equation, as captured by this approach, are also discussed, such as shakeup excitations and many-body wave function overlap considered in Anderson's orthogonality catastrophe.

  16. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquake occurrence is a natural phenomenon that is random and irregular in space and time. Forecasting earthquake occurrence at a given location remains difficult, so earthquake-forecast methodology continues to be developed from both the seismological and the stochastic points of view. To describe such random phenomena, in space as well as in time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time-ordered sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to that point. This study models a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function, which is conditioned on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with magnitude threshold 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that seismic activity is greater in Sumatra Island than in Java Island.
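
    Intensity functions that condition on the history of the process up to time t are typified by the Hawkes (self-exciting) model λ(t) = μ + α Σ_{t_i < t} e^{−β(t − t_i)}, widely used in seismology. The record does not state its exact parametrization, so the sketch below fits that assumed form by maximum likelihood on a toy catalogue:

      import numpy as np
      from scipy.optimize import minimize

      def neg_loglik(params, times, horizon):
          # Hawkes intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta (t - t_i));
          # log-parameters keep mu, alpha, beta positive during the search.
          mu, alpha, beta = np.exp(params)
          log_term = sum(np.log(mu + alpha * np.sum(np.exp(-beta * (t - times[:i]))))
                         for i, t in enumerate(times))
          compensator = mu * horizon + (alpha / beta) * np.sum(
              1.0 - np.exp(-beta * (horizon - times)))
          return compensator - log_term

      # Toy catalogue of origin times on [0, 100]; substitute real data here.
      times = np.sort(np.random.default_rng(4).uniform(0.0, 100.0, 60))
      res = minimize(neg_loglik, np.log([0.5, 0.5, 1.0]),
                     args=(times, 100.0), method="Nelder-Mead")
      print("mu, alpha, beta =", np.exp(res.x))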

  17. Current advances on polynomial resultant formulations

    NASA Astrophysics Data System (ADS)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought whenever the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
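
    The best-known determinantal formulation is the Sylvester matrix of two univariate polynomials, whose determinant equals their resultant exactly, with no extraneous factors; a quick numerical illustration:

      import numpy as np

      def sylvester(p, q):
          # Sylvester matrix of two polynomials (coefficients, highest degree
          # first); its determinant is exactly the resultant.
          m, n = len(p) - 1, len(q) - 1
          s = np.zeros((m + n, m + n))
          for i in range(n):                  # n shifted copies of p
              s[i, i:i + m + 1] = p
          for i in range(m):                  # m shifted copies of q
              s[n + i, i:i + n + 1] = q
          return s

      p = [1.0, -3.0, 2.0]   # x^2 - 3x + 2 = (x - 1)(x - 2)
      q = [1.0, -1.0]        # x - 1 shares the root x = 1 with p
      r = [1.0, -4.0]        # x - 4 shares no root with p
      print(np.linalg.det(sylvester(p, q)))   # ~0: common root exists
      print(np.linalg.det(sylvester(p, r)))   # 6 = p(4): no common root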

  18. Covalent bond orders and atomic valences from correlated wavefunctions

    NASA Astrophysics Data System (ADS)

    Ángyán, János G.; Rosta, Edina; Surján, Péter R.

    1999-01-01

    A comparison is made between two alternative definitions for covalent bond orders: one derived from the exchange part of the two-particle density matrix and the other expressed as the correlation of fluctuations (covariance) of the number of electrons between the atomic centers. Although these definitions lead to identical formulae for mono-determinantal SCF wavefunctions, they predict different bond orders for correlated wavefunctions. It is shown that, in this case, the fluctuation-based definition leads to slightly lower values of the bond order than does the exchange-based definition, provided one uses an appropriate space-partitioning technique like that of Bader's topological theory of atoms in a molecule; however, use of Mulliken partitioning in this context leads to unphysical behaviour. The example of H2 is discussed in detail.

  19. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
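
    A minimal version of the second (cluster) model is straightforward to simulate: Poisson base points, a Poisson number of end points per base, and isotropic Gaussian displacements (a Thomas-type cluster process). The parameter values below are arbitrary placeholders, not the fitted values from the biopsy data:

      import numpy as np

      rng = np.random.default_rng(5)
      base_rate, mean_ends, disp_sd = 100.0, 3.0, 0.02   # placeholder values

      # Base point process: homogeneous Poisson on the unit square.
      n_base = rng.poisson(base_rate)
      bases = rng.uniform(0.0, 1.0, size=(n_base, 2))

      # End point process: Poisson(mean_ends) end points per base, displaced
      # by isotropic Gaussian jumps (a Thomas-type cluster process).
      ends = []
      for b in bases:
          k = rng.poisson(mean_ends)
          ends.append(b + disp_sd * rng.standard_normal((k, 2)))
      ends = np.vstack(ends) if ends else np.empty((0, 2))

      print(n_base, "bases,", len(ends), "end points")
      print("mean fibers per base:", len(ends) / max(n_base, 1))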

  1. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.

  2. Collective strategy for facing occupational risks of a nursing team.

    PubMed

    Loro, Marli Maria; Zeitoune, Regina Célia Gollner

    2017-03-09

    To socialize an educational action through a process of group discussion and reflection, with the aim of increasing the care taken by nursing workers in facing occupational risks. A qualitative descriptive study using the Convergent Care Research modality with nursing staff working in an emergency department of a hospital in the northwest region of the state of Rio Grande do Sul, Brazil. Data collection was carried out through educational workshops and information was processed using content analysis, resulting in two thematic categories: a look at the knowledge and practices about occupational risks in nursing; and adherence to protective measures by the nursing team against occupational risks. Twenty-four (24) workers participated in the study. When challenged to look critically at their actions, the subjects found that they relate the use of safety devices to situations in which they are aware of the patient's serological status. The subjects' interaction, involvement and co-responsibility in the health education process were determinant for their reflection on risky practices, and had the potential to modify unsafe behaviors.

  3. Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations

    NASA Technical Reports Server (NTRS)

    Chanchio, Kasidit; Sun, Xian-He

    1996-01-01

    This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. New concepts of migration point as well as migration point analysis and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology to perform reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.

  4. The calculation of rare-earth levels in layered cobaltates Rx/3CoO2 (x ≦ 1)

    NASA Astrophysics Data System (ADS)

    Novák, P.; Knížek, K.; Jirák, Z.; Buršík, J.

    2015-05-01

    We studied theoretically the crystal field and Zeeman split electronic levels of trivalent rare earths that are distributed over trigonal prismatic sites of the layered Rx/3CoO2 system (closely related to sodium cobaltate NaxCoO2). The calculations were done in the whole basis of 4f^n configurations (up to 3000 many-electron determinantal functions) for the ideal trigonal symmetry D3h, as well as for the reduced symmetry C2v that takes into account a more distant neighborhood of the R-sites. Detailed data on the doublet and singlet states for Pr, Nd, Sm, Tb and Dy are presented. The obtained g-factor and van Vleck susceptibility tensor components are used for calculations of anisotropic magnetic susceptibilities and their temperature dependencies.

  5. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
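
    The proposed correction can be sketched directly: to simulate a null model that is homogeneous per unit surface area on z = h(x, y), weight the plane by the local area element √(1 + h_x² + h_y²) and sample by thinning. The surface below is an invented example, not one from the paper:

      import numpy as np

      rng = np.random.default_rng(6)

      def area_element(x, y):
          # sqrt(1 + hx^2 + hy^2) for the invented surface h(x, y) = 0.5 sin(2 pi x).
          hx = np.pi * np.cos(2.0 * np.pi * x)
          return np.sqrt(1.0 + hx ** 2 + 0.0 * y)

      # Poisson process with constant intensity per unit SURFACE area, by
      # thinning: propose on the plane, accept in proportion to the area element.
      rate, fmax = 200.0, float(area_element(0.0, 0.0))   # max where cos = +/-1
      n_prop = rng.poisson(rate * fmax)
      xy = rng.uniform(0.0, 1.0, size=(n_prop, 2))
      keep = rng.uniform(0.0, fmax, n_prop) < area_element(xy[:, 0], xy[:, 1])
      pts = xy[keep]
      print(len(pts), "points; projected density is highest where slopes are steep")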

  6. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  7. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  8. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  9. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  10. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  11. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  12. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  13. Modeling fixation locations using spatial point processes.

    PubMed

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
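
    As a concrete instance of this framework, an inhomogeneous spatial Poisson process with an image-derived intensity can be simulated by Lewis-Shedler thinning. A sketch with an assumed toy "salience" map (the map and the intensity scaling are illustrative, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "salience" map standing in for image-based predictors of fixation.
    w, h = 128, 96
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    salience = np.exp(-((xs - 80)**2 + (ys - 40)**2) / (2 * 15.0**2))

    # Intensity lambda(x, y) proportional to salience (illustrative scaling:
    # roughly 200 expected fixations over the whole image).
    lam = 200 * salience / salience.sum()
    lam_max = lam.max()

    # Lewis-Shedler thinning: propose from a homogeneous process at rate
    # lam_max, keep each proposal with probability lam(x, y) / lam_max.
    n_prop = rng.poisson(lam_max * w * h)
    px = rng.uniform(0, w, n_prop)
    py = rng.uniform(0, h, n_prop)
    keep = rng.uniform(0, 1, n_prop) < lam[py.astype(int), px.astype(int)] / lam_max
    fix_x, fix_y = px[keep], py[keep]
    print(f"simulated {len(fix_x)} fixation locations")
    ```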

  14. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but they must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their Poisson-Wiener counterparts, which depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
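
    To illustrate the cross-correlation route to Wiener kernels with a Poisson point-process input, here is a minimal first-order example (the filter, rates, and bin width are all hypothetical; a second-order estimate would additionally be zeroed along the diagonal, as the paper notes, and the quadratic term introduces a small bias in this first-order estimate):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    dt, T = 0.001, 200.0                        # 1 ms bins, 200 s record
    n = int(T / dt)
    rate = 20.0                                 # Poisson input rate (Hz)
    x = (rng.uniform(size=n) < rate * dt).astype(float)   # 0/1 spike train

    # Unknown system: linear filter (exponential kernel) + static nonlinearity.
    tau = np.arange(0, 0.1, dt)
    k_true = np.exp(-tau / 0.02)
    u = np.convolve(x, k_true)[:n] * dt
    y = u + 0.5 * u**2                          # continuous output

    # First-order Wiener kernel estimate for Poisson input:
    # k1(tau) ~ E[(y(t) - mean(y)) * x(t - tau)] / (rate * dt^2)
    y0 = y - y.mean()
    k1 = np.array([np.dot(y0[len(tau):], x[len(tau) - i:n - i])
                   for i in range(len(tau))]) / ((n - len(tau)) * rate * dt**2)
    print("estimated kernel peak:", k1.max(), "true peak:", k_true.max())
    ```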

  15. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  16. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    NASA Astrophysics Data System (ADS)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression at ratios of 2:1 to 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.

  17. Determining the Number of Clusters in a Data Set Without Graphical Interpretation

    NASA Technical Reports Server (NTRS)

    Aguirre, Nathan S.; Davies, Misty D.

    2011-01-01

    Cluster analysis is a data mining technique that is meant to simplify the process of classifying data points. The basic clustering process requires an input of data points and the number of clusters wanted. The clustering algorithm will then pick C starting points for the clusters, which can be either random spatial points or random data points. It then assigns each data point to the nearest C point, where "nearest" usually means Euclidean distance, but some algorithms use another criterion. The next step is determining whether the clustering arrangement thus found is within a certain tolerance. If it falls within this tolerance, the process ends. Otherwise the C points are adjusted based on how many data points are in each cluster, and the steps repeat until the algorithm converges.
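
    The loop described above is essentially Lloyd's k-means algorithm. A minimal sketch (the random data and C = 3 are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(size=(300, 2))            # illustrative 2-D data points
    C = 3                                       # number of clusters wanted

    # Pick C starting points as random data points.
    centers = data[rng.choice(len(data), C, replace=False)]

    for _ in range(100):
        # Assign each data point to the nearest center (Euclidean distance).
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Adjust the C points to the mean of the points in each cluster,
        # keeping the old center if a cluster happens to be empty.
        new_centers = np.array([data[labels == k].mean(axis=0)
                                if np.any(labels == k) else centers[k]
                                for k in range(C)])
        # Stop when the arrangement is within tolerance (convergence).
        if np.allclose(new_centers, centers, atol=1e-6):
            break
        centers = new_centers

    print("final cluster sizes:", np.bincount(labels, minlength=C))
    ```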

  18. Living arrangements of the elderly and the sociodemographic and health determinants: a longitudinal study.

    PubMed

    Bolina, Alisson Fernandes; Tavares, Darlene Mara Dos Santos

    2016-08-08

    To describe the sociodemographic characteristics and the number of morbidities in the elderly according to the dynamics of living arrangements, and to evaluate the sociodemographic and health determinants of the living arrangements. This is a household longitudinal survey (2005-2012) carried out with 623 elderly people. Descriptive statistical analysis and multinomial logistic regression were performed (p<0.05). There was a predominance of elderly living alone, living accompanied, and with change in the living arrangements; of females; of the 60-70 years age range; of 1-4 years of study; and of income between 1 and 3 minimum wages. Over the course of the research, an increase was identified in the proportion of elderly with incomes of 1-3 minimum wages. The number of morbidities increased in all three groups throughout the study, with the highest rates observed among the elderly with change in the dynamics of living arrangements. Elderly men showed lower odds of living alone (p=0.007) and of change in the living arrangements compared to women (p=0.005). Incomes of less than one minimum wage decreased the odds of change in the living arrangements compared to incomes above three minimum wages (p=0.034). The determining factors of the living arrangements were sex and income; the variables functional capacity and number of morbidities were not associated with the outcome analyzed.

  19. Passive serialization in a multitasking environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessey, J.P.; Osisek, D.L.; Seigh, J.W. II

    1989-02-28

    In a multiprocessing system having a control program in which data objects are shared among processes, this patent describes a method for serializing references to a data object by the processes so as to prevent invalid references to the data object by any process when an operation requiring exclusive access is performed by another process, comprising the steps of: permitting the processes to reference data objects on a shared access basis without obtaining a shared lock; monitoring a point of execution of the control program which is common to all processes in the system, which occurs regularly in each process's execution and across which no references to any data object can be maintained by any process, except references using locks; establishing a system reference point which occurs after each process in the system has passed the point of execution at least once since the last such system reference point; requesting an operation requiring exclusive access on a selected data object; preventing subsequent references by other processes to the selected data object; waiting until two of the system reference points have occurred; and then performing the requested operation.
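
    The scheme is recognizably an RCU-style grace-period protocol: readers regularly pass a quiescent "point of execution", and a writer defers its exclusive operation until every reader has passed it, twice. A simplified single-writer sketch (the thread structure and counter protocol are illustrative assumptions, not the patent's implementation):

    ```python
    import threading
    import time

    N_READERS = 4
    quiescent_count = [0] * N_READERS   # bumped when a reader passes the common point
    stop = False

    def reader(i):
        while not stop:
            # ... reference shared objects without locks ...
            quiescent_count[i] += 1     # the regularly occurring point of execution
            time.sleep(0.001)

    def wait_for_system_reference_point():
        # A system reference point: every reader has passed the point at least
        # once since the snapshot was taken.
        snapshot = list(quiescent_count)
        while any(quiescent_count[i] == snapshot[i] for i in range(N_READERS)):
            time.sleep(0.001)

    threads = [threading.Thread(target=reader, args=(i,)) for i in range(N_READERS)]
    for t in threads:
        t.start()

    # Writer: block new references to the object (not shown), then wait for two
    # system reference points before performing the exclusive operation.
    wait_for_system_reference_point()
    wait_for_system_reference_point()
    # ... perform the operation requiring exclusive access ...
    stop = True
    for t in threads:
        t.join()
    print("exclusive operation performed after two system reference points")
    ```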

  20. Decrease of d -wave pairing strength in spite of the persistence of magnetic excitations in the overdoped Hubbard model

    DOE PAGES

    Huang, Edwin W.; Scalapino, Douglas J.; Maier, Thomas A.; ...

    2017-07-17

    Evidence for the presence of high-energy magnetic excitations in overdoped La2-xSrxCuO4 (LSCO) has raised questions regarding the role of spin fluctuations in the pairing mechanism. If they remain present in overdoped LSCO, why does Tc decrease in this doping regime? Here, using results for the dynamic spin susceptibility Imχ(q,ω) obtained from a determinantal quantum Monte Carlo calculation for the Hubbard model, we address this question. We find that while high-energy magnetic excitations persist in the overdoped regime, they lack the momentum to scatter pairs between the antinodal regions. Thus it is the decrease in the spectral weight at large momentum transfer, not observed by resonant inelastic x-ray scattering, which leads to a reduction in the d-wave spin-fluctuation pairing strength.

  1. Decrease of d -wave pairing strength in spite of the persistence of magnetic excitations in the overdoped Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Edwin W.; Scalapino, Douglas J.; Maier, Thomas A.

    Evidence for the presence of high-energy magnetic excitations in overdoped La2-xSrxCuO4 (LSCO) has raised questions regarding the role of spin fluctuations in the pairing mechanism. If they remain present in overdoped LSCO, why does Tc decrease in this doping regime? Here, using results for the dynamic spin susceptibility Imχ(q,ω) obtained from a determinantal quantum Monte Carlo calculation for the Hubbard model, we address this question. We find that while high-energy magnetic excitations persist in the overdoped regime, they lack the momentum to scatter pairs between the antinodal regions. Thus it is the decrease in the spectral weight at large momentum transfer, not observed by resonant inelastic x-ray scattering, which leads to a reduction in the d-wave spin-fluctuation pairing strength.

  2. Local gauge symmetry on optical lattices?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuzhi; Meurice, Yannick; Tsai, Shan-Wen

    2012-11-01

    The versatile technology of cold atoms confined in optical lattices allows the creation of a vast number of lattice geometries and interactions, providing a promising platform for emulating various lattice models. This opens the possibility of letting nature take care of sign problems and real time evolution in carefully prepared situations. Up to now, experimentalists have succeeded in implementing several types of Hubbard models considered by condensed matter theorists. In this proceeding, we discuss the possibility of extending this effort to lattice gauge theory. We report recent efforts to establish the strong coupling equivalence between the Fermi Hubbard model and SU(2) pure gauge theory in 2+1 dimensions by standard determinantal methods developed by Robert Sugar and collaborators. We discuss the possibility of using dipolar molecules and external fields to build models where the equivalence holds beyond the leading order in the strong coupling expansion.

  3. Toward U(N|M) knot invariant from ABJM theory

    NASA Astrophysics Data System (ADS)

    Eynard, Bertrand; Kimura, Taro

    2017-06-01

    We study the U(N|M) character expectation value with the supermatrix Chern-Simons theory, known as the ABJM matrix model, with emphasis on its connection to the knot invariant. This average just gives the half-BPS circular Wilson loop expectation value in ABJM theory, which shall correspond to the unknot invariant. We derive the determinantal formula, which gives U(N|M) character expectation values in terms of U(1|1) averages for a particular type of character representations. This means that the U(1|1) character expectation value is a building block for the U(N|M) averages and also, by an appropriate limit, for the U(N) invariants. In addition to the original model, we introduce another supermatrix model obtained through the symplectic transform, which is motivated by the torus knot Chern-Simons matrix model. We obtain the Rosso-Jones-type formula and the spectral curve for this case.

  4. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923

  5. Research in Stochastic Processes

    DTIC Science & Technology

    1988-08-31

    stationary sequence, Stochastic Proc. Appl. 29, 1988, 155-169. T. Hsing, J. Husler and M.R. Leadbetter, On the exceedance point process for a stationary... Nandagopalan, On exceedance point processes for "regular" sample functions, Proc. Volume, Oberwolfach Conf. on Extreme Value Theory, J. Husler and R. Reiss... exceedance point processes for stationary sequences under mild oscillation restrictions, Apr. 88. Oberwolfach Conf. on Extreme Value Theory. Ed. J. Husler

  6. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques to obtain a visible image of the fission yield process. In particular, fission yield can be calculated by using two calculation approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, the left central and right central points. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is repeated by changing σL and σR randomly.
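
    A minimal Monte Carlo sketch of the procedure as described (all parameter values and the 1-D reading of "distance" are illustrative assumptions, not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    N = 10000                                   # particles in the compound nucleus
    mu_cn, sigma_cn = 0.0, 1.0
    particles = rng.normal(mu_cn, sigma_cn, N)  # distances from the central point

    # Smash the compound-nucleus central point into left and right central points.
    mu_l, mu_r = -1.0, 1.0
    sigma_l, sigma_r = 0.8, 0.8

    def trapped_counts(sl, sr):
        """Count particles trapped by each fragment: assign each particle to the
        nearer central point, measured in that fragment's standard deviations."""
        dl = np.abs(particles - mu_l) / sl
        dr = np.abs(particles - mu_r) / sr
        return int(np.sum(dl < dr)), int(np.sum(dr <= dl))

    # Iterate, randomly perturbing the fragment widths, until (NL, NR) stabilize.
    nl, nr = trapped_counts(sigma_l, sigma_r)
    for _ in range(1000):
        sigma_l = max(0.1, sigma_l + rng.normal(0, 0.01))
        sigma_r = max(0.1, sigma_r + rng.normal(0, 0.01))
        nl_new, nr_new = trapped_counts(sigma_l, sigma_r)
        if (nl_new, nr_new) == (nl, nr):
            break
        nl, nr = nl_new, nr_new

    print("fragment yields NL, NR =", nl, nr)
    ```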

  7. The application of hazard analysis and critical control points and risk management in the preparation of anti-cancer drugs.

    PubMed

    Bonan, Brigitte; Martelli, Nicolas; Berhoune, Malik; Maestroni, Marie-Laure; Havard, Laurent; Prognon, Patrice

    2009-02-01

    To apply the Hazard analysis and Critical Control Points method to the preparation of anti-cancer drugs. To identify critical control points in our cancer chemotherapy process and to propose control measures and corrective actions to manage these processes. The Hazard Analysis and Critical Control Points application began in January 2004 in our centralized chemotherapy compounding unit. From October 2004 to August 2005, monitoring of the process nonconformities was performed to assess the method. According to the Hazard Analysis and Critical Control Points method, a multidisciplinary team was formed to describe and assess the cancer chemotherapy process. This team listed all of the critical points and calculated their risk indexes according to their frequency of occurrence, their severity and their detectability. The team defined monitoring, control measures and corrective actions for each identified risk. Finally, over a 10-month period, pharmacists reported each non-conformity of the process in a follow-up document. Our team described 11 steps in the cancer chemotherapy process. The team identified 39 critical control points, including 11 of higher importance with a high-risk index. Over 10 months, 16,647 preparations were performed; 1225 nonconformities were reported during this same period. The Hazard Analysis and Critical Control Points method is relevant when it is used to target a specific process such as the preparation of anti-cancer drugs. This method helped us to focus on the production steps, which can have a critical influence on product quality, and led us to improve our process.
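
    The risk index described here is commonly computed as the product of frequency, severity, and detectability scores (an FMEA-style risk priority number). The multiplicative form, the 1-5 scales, and the process steps below are assumptions for illustration, not taken from the paper:

    ```python
    # Illustrative risk-index ranking for critical control points, assuming
    # 1-5 scales and a multiplicative index (frequency x severity x detectability,
    # with detectability scored high when a failure is hard to detect).
    critical_points = [
        # (step, frequency, severity, detectability)
        ("prescription transcription", 3, 4, 2),
        ("dose calculation",           2, 5, 3),
        ("aseptic compounding",        2, 5, 4),
        ("final product labelling",    3, 3, 2),
    ]

    ranked = sorted(((f * s * d, step) for step, f, s, d in critical_points),
                    reverse=True)
    for risk_index, step in ranked:
        flag = "HIGH" if risk_index >= 25 else "ok"
        print(f"{step:28s} risk index = {risk_index:3d}  [{flag}]")
    ```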

  8. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics (“order parameters”) inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles. PMID:28336305
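
    In the discrete-time PP-GLM form, the conditional intensity is a nonlinearity applied to a linear combination of spike-history covariates. A minimal two-neuron simulation sketch (the weights, kernels, and rates are illustrative, not from the review):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    dt, T = 0.001, 10.0
    n = int(T / dt)
    n_units = 2
    mu = np.log([5.0, 8.0])                     # baseline log-rates (Hz)

    # Spike-history coupling kernels (10 ms decaying basis), unit j -> unit i.
    L = 10
    h = np.exp(-np.arange(L) / 3.0)
    W = np.array([[-1.0, 0.8],                  # self-inhibition, cross-excitation
                  [ 0.5, -1.0]])

    spikes = np.zeros((n_units, n))
    for t in range(n):
        for i in range(n_units):
            # Linear predictor: baseline + filtered spike history of all units.
            hist = 0.0
            for j in range(n_units):
                past = spikes[j, max(0, t - L):t][::-1]   # most recent first
                hist += W[i, j] * np.dot(h[:len(past)], past)
            lam = np.exp(mu[i] + hist)          # nonlinear (log-link) Hawkes rate
            spikes[i, t] = rng.uniform() < lam * dt   # Bernoulli approx. of Poisson
    print("spike counts per unit:", spikes.sum(axis=1))
    ```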

  9. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining.

    PubMed

    Truccolo, Wilson

    2016-11-01

    This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles.

  10. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated parts in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the two reference points for envelope alignment and Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach which chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using the classical methods, from which a certain scattering point can be chosen. Envelope alignment and phase compensation using the selected scattering point as the common reference point are subsequently conducted. The keystone transform is then applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.

  11. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
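
    The core of any such conversion is choosing a fixed-point format (integer and fractional word lengths) and quantizing: a value x becomes round(x * 2^f) stored in a w-bit integer. A minimal sketch of quantization to a signed Qm.f format and the resulting accuracy loss (the formats and the dot-product example are chosen for illustration, not taken from the paper's methodology):

    ```python
    import numpy as np

    def to_fixed(x, w=16, f=12):
        """Quantize to signed w-bit fixed point with f fractional bits
        (Q(w-1-f).f), saturating at the representable range."""
        scaled = np.round(np.asarray(x) * (1 << f)).astype(np.int64)
        lo, hi = -(1 << (w - 1)), (1 << (w - 1)) - 1
        return np.clip(scaled, lo, hi)

    def to_float(q, f=12):
        return q / float(1 << f)

    # Accuracy of a small dot product computed in fixed point vs. floating point.
    rng = np.random.default_rng(6)
    a, b = rng.uniform(-1, 1, 64), rng.uniform(-1, 1, 64)
    qa, qb = to_fixed(a), to_fixed(b)
    acc = np.sum(qa * qb)                   # products carry 2f fractional bits
    fixed_result = acc / float(1 << 24)     # rescale: f + f = 24 fractional bits
    float_result = np.dot(a, b)
    print(f"float {float_result:.6f}  fixed {fixed_result:.6f}  "
          f"error {abs(float_result - fixed_result):.2e}")
    ```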

  12. Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data.

    PubMed

    van Maanen, Leendert; Couto, Joaquina; Lebreton, Mael

    2016-01-01

    The notion of "mixtures" has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied-for instance- to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed-point will be mute because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed-point may not be found. In Boundary condition 3 the fixed-point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes.

  13. Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data

    PubMed Central

    Couto, Joaquina; Lebreton, Mael

    2016-01-01

    The notion of “mixtures” has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed point will be mute because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed point may not be found. In Boundary condition 3 the fixed point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes. PMID:27893868

  14. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-01-01

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813

  15. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    PubMed

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
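
    The trade-off driving the word-length choice can be illustrated by quantizing FFT inputs to w bits and measuring the error against a double-precision reference. This sketch models input quantization only, not the internal rounding of a true fixed-point FFT pipeline, and the signal and formats are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(size=4096) * 0.25            # illustrative raw SAR-like samples

    ref = np.fft.fft(x)                          # double-precision reference
    for w in (8, 12, 16, 20):
        f = w - 2                                # fractional bits (sign + 1 integer bit)
        xq = np.round(x * (1 << f)) / (1 << f)   # quantize input to w bits
        err = np.fft.fft(xq) - ref
        snr = 10 * np.log10(np.sum(np.abs(ref)**2) / np.sum(np.abs(err)**2))
        print(f"{w:2d}-bit input quantization: SNR vs. reference = {snr:5.1f} dB")
    ```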

  16. Edit distance for marked point processes revisited: An implementation by binary integer programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito; Aihara, Kazuyuki

    2015-12-15

    We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699-3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, by using the proposed implementation, we can apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, the proposed implementation runs faster than the previous implementation when the difference between the numbers of events in two time windows for a marked point process is large.
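
    For orientation, the underlying metric family can be computed for unmarked event sequences by dynamic programming (a Victor-Purpura-style recursion; this is neither the paper's binary integer program nor its matching implementation, and the costs below are illustrative):

    ```python
    import numpy as np

    def edit_distance(t, s, c_shift=1.0, c_indel=1.0):
        """Edit distance between two event-time sequences: each event may be
        deleted or inserted (cost c_indel each) or shifted (cost proportional
        to the time difference)."""
        n, m = len(t), len(s)
        D = np.zeros((n + 1, m + 1))
        D[:, 0] = np.arange(n + 1) * c_indel
        D[0, :] = np.arange(m + 1) * c_indel
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = min(D[i - 1, j] + c_indel,           # delete t[i-1]
                              D[i, j - 1] + c_indel,           # insert s[j-1]
                              D[i - 1, j - 1]
                              + c_shift * abs(t[i - 1] - s[j - 1]))  # shift
        return D[n, m]

    # Two small shifts plus one deletion: 0.02 + 0.05 + 1.0
    print(edit_distance([0.1, 0.5, 0.9], [0.12, 0.55]))
    ```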

  17. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.
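
    Two of the simplest decimation strategies, count-based (keep every nth point) and grid-based (keep one point per 2-D cell), can be sketched as follows (the cell size and synthetic data are illustrative; the GRASS GIS tools implement these and other variants natively):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    pts = rng.uniform(0, 100, size=(1_000_000, 3))   # illustrative x, y, z cloud

    # Count-based decimation: keep every nth point.
    nth = 10
    thinned_nth = pts[::nth]

    # Grid-based decimation: keep the first point falling in each 2-D cell.
    cell = 0.5
    keys = (pts[:, :2] // cell).astype(np.int64)
    _, first_idx = np.unique(keys[:, 0] * 1_000_003 + keys[:, 1],
                             return_index=True)      # unique cell per point
    thinned_grid = pts[np.sort(first_idx)]

    print(f"original {len(pts)}, every-nth {len(thinned_nth)}, "
          f"grid {len(thinned_grid)}")
    ```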

  18. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.

  19. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we studied the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  20. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Livny, Miron (Inventor); Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
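
    The clustering feature described here is the CF triple of the BIRCH algorithm: CF = (N, LS, SS). Two CFs merge by component-wise addition, and the cluster centroid and radius fall out directly from the triple; a minimal sketch (the class layout and data are illustrative, not the patent's code):

    ```python
    import numpy as np

    class ClusteringFeature:
        """CF triple: number of points, linear sum, and square sum."""
        def __init__(self, point):
            p = np.asarray(point, dtype=float)
            self.n = 1
            self.ls = p.copy()            # linear sum of the data points
            self.ss = float(p @ p)        # square sum of the data points

        def merge(self, other):
            # Absorbing a cluster is just component-wise addition.
            self.n += other.n
            self.ls += other.ls
            self.ss += other.ss

        def centroid(self):
            return self.ls / self.n

        def radius(self):
            # RMS distance of points from the centroid, from (N, LS, SS) alone:
            # R^2 = SS/N - ||LS/N||^2.
            c = self.ls / self.n
            return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

    # Absorb points into one CF without storing them individually.
    cf = ClusteringFeature([1.0, 2.0])
    for p in ([1.2, 2.1], [0.9, 1.8], [1.1, 2.2]):
        cf.merge(ClusteringFeature(p))
    print("N =", cf.n, "centroid =", cf.centroid(), "radius =", round(cf.radius(), 3))
    ```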

  1. Numerical stabilization of entanglement computation in auxiliary-field quantum Monte Carlo simulations of interacting many-fermion systems.

    PubMed

    Broecker, Peter; Trebst, Simon

    2016-12-01

    In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.

  2. WKB solutions of difference equations and reconstruction by the topological recursion

    NASA Astrophysics Data System (ADS)

    Marchal, Olivier

    2018-01-01

    The purpose of this article is to analyze the connection between Eynard-Orantin topological recursion and formal WKB solutions of an $\hbar$-difference equation: $\Psi(x+\hbar) = \left(e^{\hbar \frac{d}{dx}}\right)\Psi(x) = L(x;\hbar)\Psi(x)$ with $L(x;\hbar) \in GL_2(\mathbb{C}(x)[\hbar])$. In particular, we extend the notion of determinantal formulas and the topological type property proposed for formal WKB solutions of $\hbar$-differential systems to this setting. We apply our results to a specific $\hbar$-difference system associated to the quantum curve of the Gromov-Witten invariants of $\mathbb{P}^1$, for which we are able to prove that the correlation functions are reconstructed from the Eynard-Orantin differentials computed from the topological recursion applied to the spectral curve $y = \cosh^{-1}\frac{x}{2}$. Finally, identifying the large $x$ expansion of the correlation functions proves a recent conjecture made by Dubrovin and Yang regarding a new generating series for the Gromov-Witten invariants of $\mathbb{P}^1$.

  3. Method for cold stable biojet fuel

    DOEpatents

    Seames, Wayne S.; Aulich, Ted

    2015-12-08

    Plant or animal oils are processed to produce a fuel that operates at very cold temperatures and is suitable as an aviation turbine fuel, a diesel fuel, a fuel blendstock, or any fuel having a low cloud point, pour point or freeze point. The process is based on the cracking of plant or animal oils or their associated esters, known as biodiesel, to generate lighter chemical compounds that have substantially lower cloud, pour, and/or freeze points than the original oil or biodiesel. Cracked oil is processed using separation steps together with analysis to collect fractions with desired low temperature properties by removing undesirable compounds that do not possess the desired temperature properties.

  4. Efficient Open Source Lidar for Desktop Users

    NASA Astrophysics Data System (ADS)

    Flanagan, Jacob P.

    Lidar (Light Detection and Ranging) is a remote sensing technology that utilizes a device similar to a rangefinder to determine the distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return is measured; the distance to the object is then easily calculated using the speed of light. For lidar, this laser is moved (primarily in a rotational movement, usually accompanied by a translational movement) and the distances to objects are recorded several thousand times per second. From this, a 3-dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3-dimensional points with at least an x, a y and a z attribute, which together represent the position of a single point in 3-dimensional space. Other attributes can be associated with the points, such as the intensity of the return pulse, the color of the target, or the time the point was recorded. Another very useful, post-processed attribute is point classification, where a point is associated with the type of object it represents (e.g., ground). Lidar has gained popularity, and advancements in the technology have made its collection easier and cheaper, creating larger and denser datasets. The need to handle these data more efficiently has become a necessity: processing, visualizing, or even simply loading lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing; its implementation is an afterthought and therefore inefficient. Newer software optimized for point cloud processing (QTModeler, TopoDOT, etc.) usually lacks more advanced processing tools, requires higher-end computers, and is very costly. Existing open source lidar tools approach the loading and processing of lidar in an iterative fashion that requires batch coding, with processing times that could run to months for a standard lidar dataset. This project attempts to build a software package with an efficient approach to creating, importing and exporting, manipulating, and processing lidar, especially in the environmental field. Development of this software is described in 3 sections: (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing), and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (e.g., hydrology feature extraction, geomorphological studies, ecology modeling). A sketch of the AOI extraction step appears below.
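
    The AOI extraction step can be kept memory-friendly by streaming point records from disk in chunks and keeping only those inside the query bounding box; a minimal numpy sketch (the binary record layout, file name, and chunk size are illustrative assumptions, not this project's format):

    ```python
    import numpy as np

    def extract_aoi(path, xmin, xmax, ymin, ymax, chunk=5_000_000):
        """Stream x, y, z float64 records from disk and keep points in the AOI,
        so memory use is bounded by the chunk size rather than the file size."""
        pts = np.memmap(path, dtype=np.float64).reshape(-1, 3)   # x, y, z records
        kept = []
        for start in range(0, len(pts), chunk):
            block = np.asarray(pts[start:start + chunk])          # load one chunk
            m = ((block[:, 0] >= xmin) & (block[:, 0] <= xmax) &
                 (block[:, 1] >= ymin) & (block[:, 1] <= ymax))
            if m.any():
                kept.append(block[m])
        return np.vstack(kept) if kept else np.empty((0, 3))

    # Hypothetical usage on a raw binary tile:
    # aoi = extract_aoi("tile.xyz.bin", 5000, 6000, 42000, 43000)
    ```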

  5. Experiments To Demonstrate Chemical Process Safety Principles.

    ERIC Educational Resources Information Center

    Dorathy, Brian D.; Mooers, Jamisue A.; Warren, Matthew M.; Mich, Jennifer L.; Murhammer, David W.

    2001-01-01

Points out the need to educate undergraduate chemical engineering students on chemical process safety and introduces the content of a chemical process safety course offered at the University of Iowa. Presents laboratory experiments demonstrating flammability limits, flash points, electrostatics, runaway reactions, explosions, and relief design.…

  6. Nitrogen management in landfill leachate: Application of SHARON, ANAMMOX and combined SHARON-ANAMMOX process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sri Shalini, S., E-mail: srishalini10@gmail.com; Joseph, Kurian, E-mail: kuttiani@gmail.com

    2012-12-15

Highlights: ► Significant research on ammonia removal from leachate by the SHARON and ANAMMOX processes. ► Operational parameters, microbiology, biochemistry and application of the processes. ► The SHARON-ANAMMOX process for leachate is a new research area, and this paper gives wide facts. ► A cost-effective process, alternative to existing technologies for leachate treatment. ► Addresses the issues and operational conditions for application in leachate treatment. - Abstract: In today's context of waste management, landfilling of Municipal Solid Waste (MSW) is considered to be one of the standard practices worldwide. Leachate generated from municipal landfills has become a great threat to the surroundings, as it contains high concentrations of organics, ammonia and other toxic pollutants. Emphasis has to be placed on the removal of ammonia nitrogen in particular: derived from the nitrogen content of the MSW, it is a long-term pollution problem in landfills and determines when a landfill can be considered stable. Several biological processes are available for the removal of ammonia, but novel processes such as the Single Reactor System for High Activity Ammonia Removal over Nitrite (SHARON) and the Anaerobic Ammonium Oxidation (ANAMMOX) process have great potential and several advantages over conventional processes. The combined SHARON-ANAMMOX process for municipal landfill leachate treatment is a new, innovative and significant approach that requires more research to identify and solve critical issues. This review addresses the operational parameters, microbiology, biochemistry and application of both processes to remove ammonia from leachate.

  7. Intensity of geomorphological processes in NW sector of Pacific rim marginal mountain belts

    NASA Astrophysics Data System (ADS)

    Lebedeva, Ekaterina; Shvarev, Sergey; Gotvansky, Veniamin

    2014-05-01

Continental marginal mountains, including the mountain belts of the Russian Far East, are characterized by supreme terrain contrast, a mosaic structure of surface and crust, and a rich complex of modern endogenous processes: volcanism, seismicity, and vertical movements. The unstable state of geomorphological systems and the activity of relief-forming processes here are also caused by deeply dissected topography and the type and amount of precipitation. Human activities further stimulate natural processes and increase the risk of local disasters, so these territories have a high intensity (or tension) of geomorphological processes. Intensity in the authors' understanding is the readiness of a geomorphological system to be thrown out of balance, the risk of disaster under an external or internal agent, whether natural or human. Mapping with quantitative accounting of the intensity of natural processes and potential human impact is necessary to indicate the areal distribution trends of geomorphological process intensity and the zones of potential disaster risk. The method of map drawing is based on analyzing several criteria: 1) the total set of terrain-forming processes and their readiness to become hazardous, 2) the existence, peculiarities and zoning of external agents which could push the base processes within the territory to extremes, and 3) the peculiarities of terrain morphology which could make terrain-forming processes hazardous. Seismic activity is one of the most important factors causing activation of geomorphological processes and contributing to the risk of dangerous situations. Even an earthquake of small force can provoke many catastrophic processes: landslides, mudslides, avalanches, mudflows, tsunami and others. Seismic gravitational phenomena of different scales accompany almost all earthquakes of intensity 7-8 points and above, and some processes, such as avalanches, are activated by seismic shocks of intensity as low as 1-3 points. In this regard, we consider it important to delineate zones of high seismic intensity in marginal-continental mountain systems, and we propose to assign them extra points of tension, the number of which increases with the strength of the shock. This approach allows us to identify clearly the most potentially hazardous areas, where various, sometimes unpredictably large, catastrophic processes may be provoked by intense underground tremors. We also consider the impact of the depth of topographic dissection and the total amount of precipitation. Marginal-continental mountain systems often have radically different moistening of coastal and inland slopes, and this difference can be 500, 1000 mm or more, which undoubtedly affects the course and intensity of geomorphological processes on slopes of different exposures. A total evaluation of the intensity of geomorphologic processes exceeding 15 points is considered potentially catastrophic; at 10-15 points the tension of geomorphologic processes is extremely high, at 5-10 points high, and below 5 points low. Maps of the key areas of the Russian Far East - Kamchatka and the north of the Kuril Islands, Sakhalin and the Western Okhotsk region - were compiled. These areas differ in geodynamic regimes and in landscape-climatic and anthropogenic conditions, and are highly significant for the differentiated estimation of geomorphologic tension. A growth of the intensity of geomorphological processes toward the Pacific Ocean was recorded: from 7-10 points in the Western Okhotsk region, to 10-13 at Sakhalin, and to 13-15 points for Kamchatka.
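    The point scale quoted above lends itself to a tiny classifier; the following sketch is ours, and the handling of the exact boundary values (5, 10 and 15 points) is an assumption, since the abstract gives adjoining ranges.

```python
def tension_class(points: float) -> str:
    """Classify total geomorphological-process tension, per the point scale above."""
    if points > 15:
        return "potentially catastrophic"
    if points >= 10:
        return "extremely high"
    if points >= 5:
        return "high"
    return "low"

for region, score in [("Western Okhotsk", 8.5), ("Sakhalin", 11.0), ("Kamchatka", 14.0)]:
    print(region, "->", tension_class(score))
```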

  8. Wigner surmises and the two-dimensional homogeneous Poisson point process.

    PubMed

    Sakhr, Jamal; Nieminen, John M

    2006-04-01

We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 × 2 complex non-Hermitian random matrices.
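    The nearest-neighbor spacing density of a two-dimensional homogeneous Poisson process, normalized to unit mean spacing, is (π/2)·s·exp(−πs²/4), which coincides with the GOE Wigner surmise; the sketch below (ours) checks this numerically, approximating the process by a Poisson-distributed number of uniform points on the unit square and ignoring small boundary effects.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Homogeneous Poisson process on the unit square: Poisson number of points, uniform positions.
n = rng.poisson(20_000)
pts = rng.random((n, 2))

# Nearest-neighbor distances (k=2: the first "neighbor" is the point itself).
d, _ = cKDTree(pts).query(pts, k=2)
s = d[:, 1] / d[:, 1].mean()          # spacings normalized to unit mean

# GOE Wigner surmise, also the exact NN spacing density of the 2D Poisson process.
wigner_goe = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)

hist, edges = np.histogram(s, bins=40, range=(0, 3), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - wigner_goe(mids))))  # small, up to sampling noise and edge effects
```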

  9. "Getting the Point" of Literature: Relations between Processing and Interpretation

    ERIC Educational Resources Information Center

    Burkett, Candice; Goldman, Susan R.

    2016-01-01

    Comparisons of literary experts and novices indicate that experts engage in interpretive processes to "get the point" during their reading of literary texts but novices do not. In two studies the reading and interpretive processes of literary novices (undergraduates with no formal training in literature study) were elicited through…

  10. 36 CFR 1010.5 - Major decision points.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-making process. Most Trust projects have three distinct stages in the decision-making process: (1... stage. (b) Environmental review will be integrated into the decision-making process of the Trust as... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Major decision points. 1010.5...

  11. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    PubMed

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point clouds to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds, and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without requiring a proper initial value.
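    For context, here is a minimal plain point-to-point ICP sketch in Python (nearest-neighbor matching plus an SVD/Kabsch rigid-transform solve); it is our illustration of the baseline algorithm the paper improves on, not the authors' GF-ICP, which additionally matches on curvature, normals and density.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, n_iter=50, tol=1e-8):
    """Plain point-to-point ICP: match to nearest neighbors, solve, repeat."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:   # converged: matching error stopped improving
            break
        prev_err = err
    return src, err
```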

  12. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features

    PubMed Central

    Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-01-01

The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point clouds to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds, and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without requiring a proper initial value. PMID:28800096

  13. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as on the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside it, their representation by the data, and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we present and describe the knowledge technologies used for our approach, such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topological built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge of the scene and algorithmic processing.

  14. Waiting Points in Nova and X-ray Burst Nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunayama, Tomomi; Smith, Michael Scott; Lingerfelt, Eric J

    2008-01-01

In nova and X-ray burst nucleosynthesis, waiting points are nuclei in the reaction path which interrupt the nuclear flow towards heavier nuclei, typically because of a weak proton capture reaction and a long beta+ lifetime. Waiting points can influence the energy generation and final abundances synthesized in these explosions. We have constructed a systematic, quantitative set of criteria to identify rp-process waiting points, and use them to search for waiting points in post-processing simulations of novae and X-ray bursts. These criteria have been incorporated into the Computational Infrastructure for Nuclear Astrophysics, online at nucastrodata.org, to enable anyone to run customized searches for waiting points.

  15. Waiting Points in Nova and X-ray burst Nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunayama, Tomomi; Oak Ridge Institute for Science Education, Oak Ridge, Tennessee 37831-0117; Smith, Michael S.

    2008-05-21

In nova and X-ray burst nucleosynthesis, waiting points are nuclei in the reaction path which delay the nuclear flow towards heavier nuclei, typically because of a weak proton capture reaction and a long β⁺ lifetime. Waiting points can influence the energy generation and final abundances synthesized in these explosions. We have constructed a systematic, quantitative set of criteria to identify rp-process waiting points, and use them to search for waiting points in post-processing simulations of novae and X-ray bursts. These criteria have been incorporated into the Computational Infrastructure for Nuclear Astrophysics, online at nucastrodata.org, to enable anyone to run customized searches for waiting points.

  16. Two stage fluid bed-plasma gasification process for solid waste valorisation: Technical review and preliminary thermodynamic modelling of sulphur emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrin, Shane, E-mail: shane.morrin@ucl.ac.uk; Advanced Plasma Power, South Marston Business park, Swindon, SN3 4DE; Lettieri, Paola, E-mail: p.lettieri@ucl.ac.uk

    2012-04-15

Highlights: ► We investigate sulphur during MSW gasification within a fluid bed-plasma process. ► We review the literature on the feed, sulphur and process principles therein. ► The need for research in this area was identified. ► We perform thermodynamic modelling of the fluid bed stage. ► Initial findings indicate the prominence of solid phase sulphur. - Abstract: Gasification of solid waste for energy has significant potential given an abundant feed supply and strong policy drivers. Nonetheless, significant ambiguities in the knowledge base are apparent. Consequently this study investigates sulphur mechanisms within a novel two stage fluid bed-plasma gasification process. This paper includes a detailed review of gasification and plasma fundamentals in relation to the specific process, along with insight on MSW based feedstock properties and the sulphur pollutant therein. As a first step to understanding sulphur partitioning and speciation within the process, thermodynamic modelling of the fluid bed stage has been performed. Preliminary findings, supported by plant experience, indicate the prominence of solid phase sulphur species (as opposed to H₂S) - Na and K based species in particular. Work is underway to further investigate and validate this.

  17. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

With the launch of several planetary missions in the last decade, a large number of planetary images are being acquired. Because of the huge amount of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: Marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with an excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.

  18. Discrimination of shot-noise-driven Poisson processes by external dead time - Application of radioluminescence from glass

    NASA Technical Reports Server (NTRS)

    Saleh, B. E. A.; Tavolacci, J. T.; Teich, M. C.

    1981-01-01

    Ways in which dead time can be used to constructively enhance or diminish the effects of point processes that display bunching in the shot-noise-driven doubly stochastic Poisson point process (SNDP) are discussed. Interrelations between photocount bunching arising in the SNDP and the antibunching character arising from dead-time effects are investigated. It is demonstrated that the dead-time-modified count mean and variance for an arbitrary doubly stochastic Poisson point process can be obtained from the Laplace transform of the single-fold and joint-moment-generating functions for the driving rate process. The theory is in good agreement with experimental values for radioluminescence radiation in fused silica, quartz, and glass, and the process has many applications in pulse, particle, and photon detection.
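    As a simplified illustration of dead-time antibunching (using an ordinary Poisson process rather than the paper's shot-noise-driven doubly stochastic one), the sketch below applies a non-paralyzable dead time to simulated arrivals and reports the Fano factor of the counts, which drops below the Poisson value of 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_arrivals(rate, t_max):
    """Homogeneous Poisson arrival times via exponential inter-event times."""
    t = np.cumsum(rng.exponential(1.0 / rate, size=int(3 * rate * t_max)))
    return t[t < t_max]

def apply_dead_time(times, tau):
    """Non-paralyzable dead time: drop events within tau after each accepted event."""
    kept, last = [], -np.inf
    for t in times:
        if t - last >= tau:
            kept.append(t)
            last = t
    return np.array(kept)

rate, tau, t_max, window = 1e4, 1e-4, 100.0, 0.01
events = apply_dead_time(poisson_arrivals(rate, t_max), tau)
counts, _ = np.histogram(events, bins=np.arange(0.0, t_max, window))
print("Fano factor:", counts.var() / counts.mean())  # < 1: dead time antibunches the counts
```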

  19. 40 CFR 29.9 - How does the Administrator receive and respond to comments?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... State office or official is designated to act as a single point of contact between a State process and... program selected under § 29.6. (b) The single point of contact is not obligated to transmit comments from.... However, if a State process recommendation is transmitted by a single point of contact, all comments from...

  20. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to gain the exterior orientation of the cameras, the parameters of interior orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
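    The forward ray intersection step described above can be written as a small linear least-squares problem (the standard DLT triangulation); the sketch below is our generic version, with hypothetical projection matrices P1..P3 standing in for the calibrated cameras:

```python
import numpy as np

def triangulate(proj_mats, img_pts):
    """Forward ray intersection as linear least squares (DLT):
    for each camera, x ~ P X yields two homogeneous equations in the 3-D point X;
    the least-squares solution is the smallest right singular vector."""
    A = []
    for P, (u, v) in zip(proj_mats, img_pts):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize

# With three calibrated cameras (3x4 matrices P1, P2, P3) and a matched triplet
# (u1,v1), (u2,v2), (u3,v3), the 3-D point is:
#     triangulate([P1, P2, P3], [(u1, v1), (u2, v2), (u3, v3)])
```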

  1. Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System

    PubMed Central

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-01-01

Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and the quality of boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess the elevation and vertical accuracies of the BoMMS data. PMID:24048340

  2. Data processing and quality evaluation of a boat-based mobile laser scanning system.

    PubMed

    Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri

    2013-09-17

Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time-consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and the quality of boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aims in data processing were to filter noise points, detect shorelines as well as points below the water surface, and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in the detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility of measuring bathymetry data in shallow (0-1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground point classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess the elevation and vertical accuracies of the BoMMS data.

  3. Software for Verifying Image-Correlation Tie Points

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Yagi, Gary

    2008-01-01

A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes not perfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-right correlation map in addition to the usual right-left correlation map. The additional map must be generated, which doubles the processing time. Such increased time can now be afforded in the data-processing pipeline, since the time for map generation is now reduced from about 60 to 3 minutes by the parallelization discussed in the previous article. Parallel cluster processing, therefore, enabled this better science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x",y") in the left image. If (x,y) and (x",y") are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that admits of round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
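    A minimal sketch of the round-trip verification logic described above, assuming the two correlation maps are given as dense arrays of target coordinates (the array layout and the helper name are our assumptions, not the program's actual interface):

```python
import numpy as np

def verify_tie_points(lr_map, rl_map, window=0.5):
    """Round-trip check: (x,y) -> L-to-R map -> R-to-L map -> (x'', y'');
    a tie point is kept if it returns to within `window` pixels of its start.
    lr_map, rl_map: (H, W, 2) arrays of target (x, y) coordinates per pixel."""
    h, w = lr_map.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow the L->R mapping, then look up the R->L mapping at the rounded target.
    xr = np.clip(np.rint(lr_map[..., 0]).astype(int), 0, rl_map.shape[1] - 1)
    yr = np.clip(np.rint(lr_map[..., 1]).astype(int), 0, rl_map.shape[0] - 1)
    back = rl_map[yr, xr]                       # (x'', y'') for every (x, y)
    err = np.hypot(back[..., 0] - xs, back[..., 1] - ys)
    return err <= window                        # boolean mask of verified tie points
```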

  4. Effect of processing conditions on oil point pressure of moringa oleifera seed.

    PubMed

    Aviara, N A; Musa, W B; Owolarafe, O K; Ogunsina, B S; Oluwole, F A

    2015-07-01

Seed oil expression is an important economic venture in rural Nigeria. The traditional techniques of carrying out the operation are not only energy-sapping and time-consuming but also wasteful. In order to reduce the tedium involved in the expression of oil from Moringa oleifera seed and to develop efficient equipment for carrying out the operation, the oil point pressure of the seed was determined under different processing conditions using a laboratory press. The processing conditions employed were moisture content (4.78, 6.00, 8.00 and 10.00 % wet basis), heating temperature (50, 70, 85 and 100 °C) and heating time (15, 20, 25 and 30 min). Results showed that the oil point pressure increased with increase in seed moisture content, but decreased with increase in heating temperature and heating time within the above ranges. The highest oil point pressure, 1.1239 MPa, was obtained at the processing conditions of 10.00 % moisture content, 50 °C heating temperature and 15 min heating time. The lowest oil point pressure obtained was 0.3164 MPa, and it occurred at a moisture content of 4.78 %, a heating temperature of 100 °C and a heating time of 30 min. Analysis of Variance (ANOVA) showed that all the processing variables and their interactions had a significant effect on the oil point pressure of Moringa oleifera seed at the 1 % level of significance. This was further demonstrated using Response Surface Methodology (RSM). Tukey's test and Duncan's Multiple Range Analysis successfully separated the means, and a multiple regression equation was used to express the relationship between the oil point pressure of Moringa oleifera seed and its moisture content, processing temperature, heating time and their interactions. The model yielded coefficients that enabled the oil point pressure of the seed to be predicted with a very high coefficient of determination.
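    To illustrate the kind of multiple regression the abstract describes, here is a sketch with the paper's factor levels but entirely synthetic response values (the coefficients and noise below are invented placeholders, not the measured oil point pressures):

```python
import numpy as np
from itertools import product

# Full-factorial design from the abstract's levels; the responses y below are
# SYNTHETIC placeholders, not the measured oil point pressures.
moisture = [4.78, 6.0, 8.0, 10.0]        # % wet basis
temperature = [50, 70, 85, 100]          # degrees C
time_min = [15, 20, 25, 30]              # min

X_raw = np.array(list(product(moisture, temperature, time_min)))
rng = np.random.default_rng(0)
y = 0.9 + 0.05 * X_raw[:, 0] - 0.004 * X_raw[:, 1] - 0.01 * X_raw[:, 2] \
    + rng.normal(0, 0.02, len(X_raw))    # synthetic pressures, MPa

# Design matrix with main effects and two-way interactions, as in the abstract.
m, T, t = X_raw.T
X = np.column_stack([np.ones(len(m)), m, T, t, m * T, m * t, T * t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2:", round(r2, 4))
```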

  5. Pointo - a Low Cost Solution to Point Cloud Processing

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Winkler, S.

    2017-11-01

With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field, and usually comes in very large packages containing a variety of methods and tools. This results in software that is expensive to acquire and difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists; however, they are not necessarily required by the majority of the upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts, nor are they willing to pay the high acquisition costs of this software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of the software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and allows us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family, a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.

  6. Non-hoop winding effect on bonding temperature of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2018-05-01

One of the advanced methods for producing thermoplastic composites is laser-assisted tape winding (LATW). Predicting the temperature in the LATW process is very important, since the temperature at the nip-point (the bonding line across the tape width) plays a pivotal role in proper bonding and hence in the mechanical performance. Unlike hoop winding, where the nip-point is a straight line, non-hoop winding produces a curved nip-point line. Hence, non-hoop winding causes a somewhat different power input through laser rays and reflections, and consequently generates an unknown, complex temperature profile along the curved nip-point line. Investigating the temperature at the nip-point line is the point of interest in this study. In order to understand this effect, a numerical model is proposed to capture the effect of laser rays and their reflections on the nip-point temperature. To this end, a 3D optical model considering the objects in the LATW process is constructed. The power distribution (absorption and reflection) from the optical analysis is then used as an input (heat flux distribution) for the thermal analysis. The thermal analysis employs a fully implicit advection-diffusion model to calculate the temperature on the surfaces. The results are examined to demonstrate the effect of winding direction on the curved nip-point line (tape width), which has not been considered in the literature up to now. Furthermore, the results can be used for designing a better and more efficient setup for the LATW process.
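    As a much-reduced illustration of the thermal side, here is a one-dimensional fully implicit advection-diffusion step (first-order upwind advection, central diffusion); the geometry, boundary conditions and parameter names are our simplifications, not the paper's 3D model:

```python
import numpy as np

def implicit_step(T, u, alpha, dx, dt):
    """One fully implicit step of dT/dt + u dT/dx = alpha d2T/dx2 on a 1-D grid
    (first-order upwind advection for u > 0, central diffusion, fixed end values).
    u is, loosely, the material feed speed; alpha the thermal diffusivity."""
    n = len(T)
    a = u * dt / dx          # Courant number
    d = alpha * dt / dx**2   # diffusion number
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -a - d
        A[i, i] = 1 + a + 2 * d
        A[i, i + 1] = -d
    A[0, 0] = A[-1, -1] = 1.0        # Dirichlet boundaries: end temperatures held
    return np.linalg.solve(A, T)

# Advect and diffuse an initial hot spot; implicit stepping stays stable
# even for large dt, which is the appeal of the fully implicit scheme.
T = np.zeros(101); T[40:60] = 400.0
for _ in range(100):
    T = implicit_step(T, u=0.1, alpha=1e-3, dx=0.01, dt=0.01)
print(round(T.max(), 1))
```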

  7. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
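    The core estimation step, intersecting a pencil of nearly parallel image lines in a common vanishing point, can be sketched as a homogeneous least-squares problem; the following is our generic illustration, not the paper's cascade-Hough extraction or refinement code:

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares vanishing point of 2-D line segments.
    segments: iterable of ((x1, y1), (x2, y2)). Each line, in homogeneous
    coordinates, is the cross product of its two endpoints; the vanishing
    point is the null vector of the stacked line equations (via SVD)."""
    lines = []
    for (x1, y1), (x2, y2) in segments:
        lines.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
    _, _, Vt = np.linalg.svd(np.asarray(lines))
    v = Vt[-1]
    return v[:2] / v[2] if abs(v[2]) > 1e-12 else v[:2]  # else: point at infinity

# Two near-parallel lines converging far to the right of the image:
print(vanishing_point([((0, 0), (10, 1)), ((0, 5), (10, 5.5))]))  # ~ (100, 10)
```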

  8. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue in advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage over field experimentation of handling a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium to teach the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results showed numerical stability issues for particular conditions, revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicated that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
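    For reference, the point-scale model named above can be solved with a few lines of fixed-point iteration on the implicit Green-Ampt relation F − ψΔθ·ln(1 + F/(ψΔθ)) = Kt; the parameter values in the example are arbitrary illustrative choices:

```python
import numpy as np

def green_ampt_infiltration(K, psi, d_theta, t, tol=1e-10):
    """Cumulative infiltration F(t) from the Green-Ampt relation
    F - psi*d_theta*ln(1 + F/(psi*d_theta)) = K*t, solved by fixed-point iteration
    F <- K*t + S*ln(1 + F/S), which contracts for F > 0.
    K: hydraulic conductivity, psi: wetting-front suction head,
    d_theta: moisture deficit (consistent length/time units assumed)."""
    S = psi * d_theta
    F = max(K * t, 1e-9)                      # starting guess
    for _ in range(1000):
        F_new = K * t + S * np.log(1.0 + F / S)
        if abs(F_new - F) < tol:
            break
        F = F_new
    f = K * (1.0 + S / F)                     # infiltration rate f(t) = K (1 + S/F)
    return F, f

# Illustrative values in cm and hours (not from the paper):
F, f = green_ampt_infiltration(K=1.09, psi=11.01, d_theta=0.3, t=1.0)
print(F, f)
```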

  9. The Signer and the Sign: Cortical Correlates of Person Identity and Language Processing from Point-Light Displays

    ERIC Educational Resources Information Center

    Campbell, Ruth; Capek, Cheryl M.; Gazarian, Karine; MacSweeney, Mairead; Woll, Bencie; David, Anthony S.; McGuire, Philip K.; Brammer, Michael J.

    2011-01-01

    In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light under these…

  10. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  11. Quantitative naturalistic methods for detecting change points in psychotherapy research: an illustration with alliance ruptures.

    PubMed

    Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher

    2012-01-01

    Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
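    As one concrete instance of the control-chart category mentioned above, here is a minimal one-sided CUSUM detector (our sketch; the baseline window, allowance k and threshold h are conventional choices, not values from the paper):

```python
import numpy as np

def cusum_changepoint(x, k=0.5, h=5.0):
    """Two-sided CUSUM control chart: flag the first index where the cumulative
    standardized deviation from the baseline mean exceeds h, with allowance
    (slack) k per sample, both in baseline standard-deviation units."""
    mu, sd = x[:20].mean(), x[:20].std(ddof=1)   # baseline from an early window
    z = (x - mu) / sd
    s_pos = s_neg = 0.0
    for i, zi in enumerate(z):
        s_pos = max(0.0, s_pos + zi - k)         # upward shifts
        s_neg = max(0.0, s_neg - zi - k)         # downward shifts
        if s_pos > h or s_neg > h:
            return i
    return None

rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 60)])
print(cusum_changepoint(series))   # flags shortly after the shift at index 60
```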

  12. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows, both, to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10⁹ points.
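    A common ingredient of such per-point multi-scale classification pipelines is the eigenvalue (covariance) feature set over k-nearest-neighbor neighborhoods; the sketch below (ours, single scale, unoptimized loop) computes linearity, planarity and scattering per point:

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, k=20):
    """Per-point eigenvalue features from a k-nearest-neighbor covariance:
    linearity, planarity, and scattering, commonly used for point-wise
    semantic classification. This is a single scale; repeating with several
    values of k gives a multi-scale feature set."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        P = points[nb] - points[nb].mean(axis=0)
        lam = np.sort(np.linalg.eigvalsh(P.T @ P / k))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(lam, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

# Example: features for a random slab of points (planarity dominates).
pts = np.random.default_rng(0).random((2000, 3)) * [10, 10, 0.1]
print(covariance_features(pts).mean(axis=0))
```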

  13. Random Walks in a One-Dimensional Lévy Random Environment

    NASA Astrophysics Data System (ADS)

    Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena

    2016-04-01

    We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
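    A small simulation sketch of the process (ours): Pareto inter-point gaps, a symmetric walk on the marked points, and a check of the diffusive scaling of the displacement. We use a tail exponent of 2.5, giving finite gap variance, and we average over environments, whereas the paper's quenched CLT fixes the environment:

```python
import numpy as np

rng = np.random.default_rng(6)

def levy_lorentz_walk(n_points=100_000, n_steps=20_000, tail=2.5):
    """Random walk on a Levy-Lorentz environment: marked points on the line with
    i.i.d. Pareto(tail) gaps (finite mean; infinite variance when tail < 2), and
    a symmetric nearest-neighbor walk on the points. Returns the displacement
    along the line after n_steps jumps."""
    gaps = rng.pareto(tail, n_points) + 1.0      # inter-point distances >= 1
    sites = np.concatenate([[0.0], np.cumsum(gaps)])
    k = n_points // 2                            # start in the middle
    final = k + rng.choice([-1, 1], size=n_steps).sum()
    return sites[final] - sites[k]

# Averaging over environments and walks; for tail > 2 the second moment
# scales diffusively, roughly as n_steps times the squared mean gap.
disp = np.array([levy_lorentz_walk() for _ in range(200)])
print("rescaled second moment:", (disp**2).mean() / 20_000)
```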

  14. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification replaces the division inherent in projection with multiplication by the inverse. Computing the inverse exactly would itself require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x, with an average pixel position error of 0.2% of a pixel size, for 128-bit integer processing, and by over 4x, with an average pixel position error of less than 13% of a pixel size, for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
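    To make the fixed-point and inverse-approximation ideas concrete, here is a small Q16.16 sketch that replaces a division by a linear approximation of the reciprocal plus one Newton-Raphson refinement; the format, the seed constants (the classic 48/17 − 32/17·z fit on [0.5, 1)), and the normalization assumption are ours, not the dissertation's exact scheme:

```python
FRAC = 16                      # Q16.16 fixed point: value = integer / 2**16
ONE = 1 << FRAC

def to_fx(x): return int(round(x * ONE))
def from_fx(x): return x / ONE

def fx_mul(a, b):
    """Fixed-point multiply: full-width integer product, shifted back to Q16.16."""
    return (a * b) >> FRAC

def fx_reciprocal(z_fx):
    """Approximate 1/z for z normalized into [0.5, 1), as floating-point hardware
    dividers do: linear seed 1/z ~ 48/17 - (32/17) z, then one Newton step
    r <- r * (2 - z*r). Replaces a divide with shifts and multiplies."""
    r = to_fx(48 / 17) - fx_mul(to_fx(32 / 17), z_fx)
    r = fx_mul(r, 2 * ONE - fx_mul(z_fx, r))     # Newton-Raphson refinement
    return r

z = 0.685
print(1 / z, from_fx(fx_reciprocal(to_fx(z))))   # close agreement
```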

  15. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
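    A one-parameter toy version of such a filter (our simplification, not the paper's place-field model): with λ = exp(θ) and bin width Δ, the instantaneous log likelihood dN·log(λΔ) − λΔ has gradient dN − λΔ in θ, and ascending it spike by spike tracks a drifting firing rate.

```python
import numpy as np

def adaptive_rate_filter(spikes, dt=0.001, eps=0.05, theta0=0.0):
    """Instantaneous steepest-descent point-process filter for one log-rate
    parameter: theta <- theta + eps * (dN_k - exp(theta) * dt) per time bin."""
    theta = theta0
    lam_hat = np.empty(len(spikes))
    for k, dN in enumerate(spikes):
        lam = np.exp(theta)
        theta += eps * (dN - lam * dt)     # instantaneous gradient step
        lam_hat[k] = lam
    return lam_hat

# Rate ramping from 5 to 50 Hz; the filter follows it bin by bin.
rng = np.random.default_rng(3)
true_rate = np.linspace(5, 50, 60_000)
spikes = rng.random(60_000) < true_rate * 0.001   # Bernoulli approximation
print(adaptive_rate_filter(spikes)[-1000:].mean())  # near 50 (Hz)
```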

  16. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

It is important to investigate the eye-tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during the CT colonography (CTC) interpretation process and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can be brought to the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which can detect the pupil point and corneal reflection point by dark pupil eye tracking. This system can provide gaze point images and Excel file data. The subjects were radiological technologists experienced and inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation processes using gaze point data. Furthermore, we examined eye tracking frequency analysis by using the Fast Fourier Transform (FFT). We were able to understand the difference in gaze points between experts and trainees by use of the frequency analysis. The result for the trainee had a large amount of both high-frequency and low-frequency components. In contrast, both components for the expert were relatively low. Regarding the amount of eye movement in every 0.02 s, we found that the expert tended to interpret images slowly and calmly. On the other hand, the trainee was moving his eyes quickly and also looking over wide areas. We can assess the difference in gaze points on CTC between experts and trainees by use of the eye gaze point sensing system and the frequency analysis. The potential improvements in CTC interpretation for trainees can be evaluated by using gaze point data.
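    The FFT analysis described can be sketched in a few lines: form the per-sample gaze displacement magnitude (one sample every 0.02 s, i.e. an assumed 50 Hz rate) and inspect its amplitude spectrum. The trace below is synthetic and illustrative only:

```python
import numpy as np

def gaze_frequency_spectrum(x, y, fs=50.0):
    """Amplitude spectrum of eye-movement magnitude sampled every 1/fs seconds
    (0.02 s at 50 Hz), via the FFT, to compare high- vs low-frequency content."""
    step = np.hypot(np.diff(x), np.diff(y))      # movement between samples
    step = step - step.mean()                    # remove the DC component
    amp = np.abs(np.fft.rfft(step)) / len(step)
    freqs = np.fft.rfftfreq(len(step), d=1.0 / fs)
    return freqs, amp

# Synthetic example: slow drift plus fast jitter in the gaze trace.
t = np.arange(0, 20, 0.02)
x = 200 * np.sin(0.5 * t) + 5 * np.random.default_rng(4).normal(size=len(t))
y = 100 * np.cos(0.5 * t)
freqs, amp = gaze_frequency_spectrum(x, y)
print("dominant frequency (Hz):", freqs[np.argmax(amp)])
```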

  17. Comparison of Point Matching Techniques for Road Network Matching

    NASA Astrophysics Data System (ADS)

    Hackeloeer, A.; Klasing, K.; Krisp, J. M.; Meng, L.

    2013-05-01

Map conflation investigates the unique identification of geographical entities across different maps depicting the same geographic region. It involves a matching process which aims to find commonalities between geographic features. A specific subdomain of conflation called Road Network Matching establishes correspondences between road networks of different maps on multiple layers of abstraction, ranging from elementary point locations to high-level structures such as road segments or even subgraphs derived from the induced graph of a road network. The process of identifying points located on different maps by means of geometrical, topological and semantic information is called point matching. This paper provides an overview of various techniques for point matching, which is a fundamental requirement for subsequent matching steps focusing on complex high-level entities in geospatial networks. Common point matching approaches, as well as certain combinations of these, are described, classified and evaluated. Furthermore, a novel similarity metric called the Exact Angular Index is introduced, which considers both topological and geometrical aspects. The results offer a basis for further research on a bottom-up matching process for complex map features, which must rely upon findings derived from suitable point matching algorithms. In the context of Road Network Matching, reliable point matches provide an immediate starting point for finding matches between line segments describing the geometry and topology of road networks, which may in turn be used for performing structural high-level matching on the network level.
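    As a baseline for the geometric side of point matching, a mutual-nearest-neighbor filter with a distance gate is a common starting point; the sketch below is ours, and the max_dist threshold is an arbitrary placeholder:

```python
import numpy as np
from scipy.spatial import cKDTree

def mutual_nearest_matches(pts_a, pts_b, max_dist=10.0):
    """Baseline geometric point matching between two road networks:
    keep only mutual nearest neighbors within max_dist (map units).
    Topological and semantic cues (node degree, road class, angles)
    would be layered on top of this geometric core."""
    ta, tb = cKDTree(pts_a), cKDTree(pts_b)
    d_ab, ab = tb.query(pts_a)         # for each point in A: nearest point in B
    _, ba = ta.query(pts_b)            # for each point in B: nearest point in A
    return [(i, j) for i, (j, d) in enumerate(zip(ab, d_ab))
            if d <= max_dist and ba[j] == i]

rng = np.random.default_rng(7)
a = rng.random((100, 2)) * 1000
b = a + rng.normal(0, 2, a.shape)      # the same junctions, slightly displaced
print(len(mutual_nearest_matches(a, b)))   # most points match
```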

  18. Global form and motion processing in healthy ageing.

    PubMed

    Agnew, Hannah C; Phillips, Louise H; Pilz, Karin S

    2016-05-01

The ability to perceive biological motion has been shown to deteriorate with age, and it is assumed that older adults rely more on the global form than local motion information when processing point-light walkers. Further, it has been suggested that biological motion processing in ageing is related to a form-based global processing bias. Here, we investigated the relationship between older adults' preference for form information when processing point-light actions and an age-related form-based global processing bias. In a first task, we asked older (>60 years) and younger adults (19-23 years) to sequentially match three different point-light actions; normal actions that contained local motion and global form information, scrambled actions that contained primarily local motion information, and random-position actions that contained primarily global form information. Both age groups overall performed above chance in all three conditions, and were more accurate for actions that contained global form information. For random-position actions, older adults were less accurate than younger adults, but there was no age difference for normal or scrambled actions. These results indicate that both age groups rely more on global form than local motion to match point-light actions, but can use local motion on its own to match point-light actions. In a second task, we investigated form-based global processing biases using the Navon task. In general, participants were better at discriminating the local letters but faster at discriminating global letters. Correlations showed that there was no significant linear relationship between performance in the Navon task and biological motion processing, which suggests that processing biases in form- and motion-based tasks are unrelated. Copyright © 2016. Published by Elsevier B.V.

  19. Process and apparatus for measuring degree of polarization and angle of major axis of polarized beam of light

    DOEpatents

    Decker, Derek E.; Toeppen, John S.

    1994-01-01

    Apparatus and process are disclosed for calibrating measurements of the phase of the polarization of a polarized beam and the angle of the polarized optical beam's major axis of polarization at a diagnostic point with measurements of the same parameters at a point of interest along the polarized beam path prior to the diagnostic point. The process is carried out by measuring the phase angle of the polarization of the beam and angle of the major axis at the point of interest, using a rotatable polarizer and a detector, and then measuring these parameters again at a diagnostic point where a compensation apparatus, including a partial polarizer, which may comprise a stack of glass plates, is disposed normal to the beam path between a rotatable polarizer and a detector. The partial polarizer is then rotated both normal to the beam path and around the axis of the beam path until the detected phase of the beam polarization equals the phase measured at the point of interest. The rotatable polarizer at the diagnostic point may then be rotated manually to determine the angle of the major axis of the beam and this is compared with the measured angle of the major axis of the beam at the point of interest during calibration. Thereafter, changes in the polarization phase, and in the angle of the major axis, at the point of interest can be monitored by measuring the changes in these same parameters at the diagnostic point.
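    The rotating-polarizer measurement described lends itself to a simple harmonic fit: detector intensity behind an ideal polarizer varies as a0 + a1·cos 2θ + a2·sin 2θ, from which the major-axis angle and the degree of linear polarization follow. The sketch below is our generic illustration, not the patented compensation procedure:

```python
import numpy as np

def polarization_from_rotation(theta, intensity):
    """Fit I(theta) = a0 + a1*cos(2*theta) + a2*sin(2*theta) to detector readings
    behind a rotating polarizer; for partially linearly polarized light, the
    major-axis angle is atan2(a2, a1)/2 and the degree of linear polarization
    is (Imax - Imin) / (Imax + Imin)."""
    X = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
    a0, a1, a2 = np.linalg.lstsq(X, intensity, rcond=None)[0]
    axis = 0.5 * np.arctan2(a2, a1)
    dop = np.hypot(a1, a2) / a0        # equals (Imax - Imin) / (Imax + Imin)
    return np.degrees(axis), dop

theta = np.linspace(0, np.pi, 36, endpoint=False)
I = 1.0 + 0.6 * np.cos(2 * (theta - np.radians(30)))   # synthetic 60% polarized beam
print(polarization_from_rotation(theta, I))            # ~ (30.0, 0.6)
```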

  20. Digital analyzer for point processes based on first-in-first-out memories

    NASA Astrophysics Data System (ADS)

    Basano, Lorenzo; Ottonello, Pasquale; Schiavi, Enore

    1992-06-01

    We present an entirely new version of a multipurpose instrument designed for the statistical analysis of point processes, especially those characterized by high bunching. A long sequence of pulses can be recorded in the RAM bank of a personal computer via a suitably designed front end which employs a pair of first-in-first-out (FIFO) memories; these allow one to build an analyzer that, besides being simpler from the electronic point of view, is capable of sustaining much higher intensity fluctuations of the point process. The overflow risk of the device is evaluated by treating the FIFO pair as a queueing system. The apparatus was tested using both a deterministic signal and a sequence of photoelectrons obtained from laser light scattered by random surfaces.
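    Treating the FIFO pair as a queueing system can be illustrated with a small discrete-event sketch (ours): deterministic per-pulse readout, a finite buffer depth, and a crudely bunched arrival stream standing in for a doubly stochastic process; the parameters are arbitrary:

```python
import numpy as np

def fifo_overflow_fraction(inter_arrival_s, service_s, depth):
    """Single-server queue with deterministic service (one readout per pulse):
    simulate buffer occupancy and report the fraction of pulses lost to a
    full buffer of the given depth."""
    t, queue, next_done, lost = 0.0, 0, np.inf, 0
    for dt in inter_arrival_s:
        t += dt
        while queue and next_done <= t:          # drain completed readouts
            queue -= 1
            next_done = (next_done + service_s) if queue else np.inf
        if queue >= depth:
            lost += 1                            # overflow: pulse dropped
        else:
            if queue == 0:
                next_done = t + service_s        # server was idle; start now
            queue += 1
    return lost / len(inter_arrival_s)

# Bunched arrivals: the rate switches between a quiet level and bursts.
rng = np.random.default_rng(5)
rates = np.where(rng.random(200_000) < 0.1, 5e6, 5e4)   # ~10% of events in bursts
gaps = rng.exponential(1.0 / rates)
print(fifo_overflow_fraction(gaps, service_s=1e-6, depth=512))
```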

  1. In honour of N. Yngve Öhrn: surveying proton cancer therapy reactions with Öhrn's electron nuclear dynamics method. Aqueous clusters radiolysis and DNA-base damage by proton collisions

    NASA Astrophysics Data System (ADS)

    Mclaurin, Patrick M.; Privett, Austin J.; Stopera, Christopher; Grimes, Thomas V.; Perera, Ajith; Morales, Jorge A.

    2015-02-01

    Proton cancer therapy (PCT) utilises high-energy H+ projectiles to cure cancer. PCT healing arises from its DNA damage in cancerous cells, which is mostly inflicted by the products from PCT water radiolysis reactions. While clinically established, a complete microscopic understanding of PCT remains elusive. To help in the microscopic elucidation of PCT, Professor Öhrn's simplest-level electron nuclear dynamics (SLEND) method is herein applied to H+ + (H2O)3-4 and H+ + DNA-bases at ELab = 1.0 keV. These are two types of computationally feasible prototypes to study water radiolysis reactions and H+-induced DNA damage, respectively. SLEND is a time-dependent, variational, non-adiabatic and direct-dynamics method that adopts a nuclear classical-mechanics description and an electronic single-determinantal wavefunction. Additionally, our SLEND + effective-core-potential method is herein employed to simulate some computationally demanding PCT reactions. Due to these attributes, SLEND proves appropriate for the simulation of various types of PCT reactions accurately and feasibly. H+ + (H2O)3-4 simulations reveal two main processes: H+ projectile scattering and the simultaneous formation of H and OH fragments; the latter process is quantified through total integrals cross sections. H+ + DNA-base simulations reveal atoms and groups displacements, ring openings and base-to-proton electron transfers as predominant damage processes. The authors warmly dedicate this SLEND investigation in honour of Professor N. Yngve Öhrn on the occasion of his 80th birthday celebration during the 54th Sanibel Symposium in St. Simons' Island, Georgia, on February 16-21, 2014. Associate Professor Jorge A. Morales was a former chemistry PhD student under the mentorship of Professor Öhrn and Dr Ajith Perera took various quantum chemistry courses taught by Professor Öhrn during his chemistry PhD studies. Both Jorge and Ajith look back to those great times of their scientific formation under Yngve's guidance during the 1990s with a strong sense of gratitude toward him (and even with a sense of nostalgia). The authors are pleased to present to Professor Öhrn this birthday gift of fully mature SLEND developments that now venture to treat systems of biochemical interest.

  2. Point-point and point-line moving-window correlation spectroscopy and its applications

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced to perform moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst P-L correlation can provide the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which is very difficult using a conventional display of FTIR spectra.
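
    To make the P-P variant concrete, here is a minimal numpy sketch as I read it from the abstract: the synchronous correlation intensity between two fixed spectral channels is recomputed while a short window slides along the perturbation axis. The window size, channel indices and the synthetic two-process model are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def mw2d_point_point(spectra, i, j, window=5):
        """Point-point moving-window synchronous correlation between spectral
        channels i and j.  spectra: (n_perturbation, n_channels) array."""
        n = spectra.shape[0]
        out = []
        for start in range(n - window + 1):
            block = spectra[start:start + window]
            dyn = block - block.mean(axis=0)          # dynamic spectra in the window
            out.append(dyn[:, i] @ dyn[:, j] / (window - 1))
        return np.array(out)

    # Two overlapping synthetic processes along a temperature-like axis.
    t = np.linspace(0, 1, 50)[:, None]
    nu = np.linspace(0, 1, 200)[None, :]
    spectra = np.exp(-(nu - 0.3) ** 2 / 0.01) * t \
            + np.exp(-(nu - 0.7) ** 2 / 0.01) * (1 - t) ** 2
    print(mw2d_point_point(spectra, i=60, j=140)[:5])
    ```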

  3. Critical point analysis of phase envelope diagram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soetikno, Darmadi; Siagian, Ucok W. R.; Kusdiantara, Rudy, E-mail: rkusdiantara@s.itb.ac.id

    2014-03-24

    A phase diagram or phase envelope is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important topic in chemical thermodynamics and hydrocarbon reservoir engineering. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas is formed when a liquid is heated; the dew point is the first point at which liquid is formed when a gas is cooled. The critical point is the point at which the properties of the gas and liquid phases, such as temperature, pressure and amount of substance, become equal. The critical point is very useful in fuel processing and in the dissolution of certain chemicals. In this paper, we derive the critical point analytically and compare it with numerical calculations of the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab.
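
    As a concrete companion to the abstract, the sketch below recovers the critical point of the Peng-Robinson equation numerically from the conditions dP/dV = d²P/dV² = 0. For simplicity the temperature-dependent alpha term is set to 1, so the solver should reproduce the CO2-like reference constants used to build a and b; scipy's fsolve stands in for the paper's Newton-Raphson iteration, and all constants are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    R = 8.314                                 # J/(mol K)
    Tc_ref, Pc_ref = 304.1, 7.38e6            # CO2-like constants (illustrative)
    a = 0.45724 * R**2 * Tc_ref**2 / Pc_ref   # PR attraction parameter (alpha = 1)
    b = 0.07780 * R * Tc_ref / Pc_ref         # PR co-volume

    def pressure(T, V):
        """Peng-Robinson pressure with a temperature-independent 'a' term."""
        return R * T / (V - b) - a / (V * (V + b) + b * (V - b))

    def critical_conditions(x, h=1e-10):
        """dP/dV = 0 and d2P/dV2 = 0, evaluated by central differences."""
        T, V = x
        dP = (pressure(T, V + h) - pressure(T, V - h)) / (2 * h)
        d2P = (pressure(T, V + h) - 2 * pressure(T, V) + pressure(T, V - h)) / h**2
        return [dP, d2P]

    # fsolve (a Newton-like hybrid) stands in for a hand-rolled Newton-Raphson.
    Tc, Vc = fsolve(critical_conditions, x0=[290.0, 1.2e-4])
    print(Tc, Vc, pressure(Tc, Vc))           # recovers ~304 K and ~7.4 MPa
    ```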

  5. Strategy for Texture Management in Metals Additive Manufacturing

    DOE PAGES

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.; ...

    2017-01-31

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process, or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  6. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimation directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
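
    To give a flavour of direct point cloud comparison, the sketch below measures elevation change between two clouds by averaging the k nearest neighbours in plan view. It is a deliberately crude stand-in for the M3C2 algorithm mentioned above (which works along local surface normals and tracks confidence intervals); names and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def vertical_change(ref, other, k=10):
        """Per-point elevation difference between two clouds (N x 3 arrays):
        mean z of the k nearest 'other' points in plan view, minus ref z."""
        tree = cKDTree(other[:, :2])           # index the compared cloud in x-y
        _, idx = tree.query(ref[:, :2], k=k)
        return other[idx, 2].mean(axis=1) - ref[:, 2]

    before = np.random.rand(5000, 3)
    after = before + np.array([0, 0, 0.1])     # uniform 10 cm aggradation
    print(vertical_change(before, after).mean())   # ~0.1
    ```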

  7. Navigable points estimation for mobile robots using binary image skeletonization

    NASA Astrophysics Data System (ADS)

    Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman

    2017-02-01

    This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path using standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and create a map of navigable points in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. We also show how the algorithm's parameters can be changed to adjust the final number of resultant key points. The results shown here were obtained by applying different kinds of digital image processing algorithms on static scenes.
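
    A minimal sketch of the skeleton-based idea (not the authors' exact algorithm): erode the free space by the robot radius, skeletonize it with scikit-image, and treat the skeleton pixels as way-point candidates. The scene and radius below are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import binary_erosion
    from skimage.morphology import skeletonize

    def navigable_points(free_space, robot_radius_px=5):
        """Candidate way-points from the skeleton of the free space.
        free_space: 2D bool array, True where the robot may stand."""
        safe = binary_erosion(free_space, iterations=robot_radius_px)
        skel = skeletonize(safe)               # 1-pixel-wide medial axis
        ys, xs = np.nonzero(skel)
        return np.column_stack([xs, ys])       # (x, y) way-point candidates

    scene = np.ones((200, 300), dtype=bool)    # free space with one box obstacle
    scene[80:120, 100:180] = False
    print(len(navigable_points(scene)))
    ```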

  8. The Use of UAS for Rapid 3D Mapping in Geomatics Education

    NASA Astrophysics Data System (ADS)

    Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan

    2016-06-01

    With the development of technology, UAS is an advanced technology to support rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing using freeware or trial software available for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station; in addition, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Then, ground point selection and digital terrain model generation can be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products for rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.

  9. Bio-processing of solid wastes and secondary resources for metal extraction - A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jae-chun; Pandey, Banshi Dhar, E-mail: bd_pandey@yahoo.co.uk; CSIR - National Metallurgical Laboratory, Jamshedpur 831007

    2012-01-15

    Highlights: The review focuses on bio-extraction of metals from solid wastes of industries and consumer goods. Bio-processing of certain effluents/wastewaters containing metals is also included in brief. The quantity and composition of the wastes are assessed, and the microbes used and leaching conditions are described. Bio-recovery using bacteria, fungi and archaea is highlighted for resource recycling. Process methodology/mechanism, R and D directions and the scope of large-scale use are briefly included. Abstract: Metal-containing wastes/byproducts of various industries, used consumer goods, and municipal waste are potential pollutants if not treated properly. They may also be important secondary resources if processed in an eco-friendly manner for the secured supply of the contained metals/materials. Bio-extraction of metals from such resources with microbes such as bacteria, fungi and archaea is being increasingly explored to meet the twin objectives of resource recycling and pollution mitigation. This review focuses on the bio-processing of solid wastes/byproducts of metallurgical and manufacturing industries, chemical/petrochemical plants, electroplating and tanning units, besides sewage sludge and fly ash of municipal incinerators, electronic wastes (e-wastes/PCBs), used batteries, etc. An assessment has been made to quantify the wastes generated, their compositions, the microbes used, metal leaching efficiency, etc. Processing of certain effluents and wastewaters containing metals is also included in brief. Future directions of research are highlighted.

  10. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time thanks to the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of both the rotation- and scale-invariant point pattern relaxation matching method and its implementation on the Hopfield neural network. In addition, we show that the method presented can be tolerant to small random errors.

  11. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm which is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with standard bilateral filtering. Off-line, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the processing time of depth images and improves the quality of the resulting point cloud.
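
    For reference, the snippet below applies OpenCV's standard bilateral filter to a synthetic depth frame. It is a baseline illustration of the operation the paper accelerates, not the authors' LBF variant; the frame, kernel size and sigmas are illustrative.

    ```python
    import numpy as np
    import cv2

    # Mock depth frame; a real Kinect frame is typically 16-bit millimetres.
    depth = np.random.randint(500, 4000, (480, 640)).astype(np.float32)

    # Edge-preserving smoothing: pixels are averaged only with neighbours that
    # are close both spatially (sigmaSpace) and in depth value (sigmaColor).
    smoothed = cv2.bilateralFilter(depth, d=5, sigmaColor=30, sigmaSpace=5)
    ```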

  12. Testing of the BepiColombo Antenna Pointing Mechanism

    NASA Astrophysics Data System (ADS)

    Campo, Pablo; Barrio, Aingeru; Martin, Fernando

    2015-09-01

    BepiColombo is an ESA mission to Mercury. Its planetary orbiter (MPO) has two antenna pointing mechanisms: the high gain antenna (HGA) pointing mechanism steers and points a large reflector, which is integrated at system level by TAS-I Rome, and the medium gain antenna (MGA) APM points a 1.5 m boom with a horn antenna. Both radiating elements are exposed to sun fluxes as high as 10 solar constants without protections. A previous paper [1] described the design and development process to solve the challenges of performing in this harsh environment. The current paper is focused on the testing process of the qualification units. Testing the performance of an antenna pointing mechanism in its specific environmental conditions has required special set-ups and techniques. The process has provided valuable feedback on the design and the testing methods, which has been included in the PFM design and tests. Some of the technologies and components were developed on dedicated items prior to EQM, but once integrated, the test behaviour showed relevant differences. Some of the major concerns for the APM testing are:
    - Creating, during thermal vacuum testing, the qualification temperature map with gradients along the APM, from 200 °C to 70 °C.
    - Testing, in those conditions, the radio frequency and pointing performances, also adding high RF power to check the power handling and self-heating of the rotary joint.
    - Life testing up to 12000 equivalent APM revolutions, i.e. 14.3 million motor revolutions, in different thermal conditions.
    - Measuring low thermal distortion of the mechanical chain (55 arcsec pointing error) while insulated from the external environment and interfaces.
    - Performing deployment of large items while guaranteeing low humidity (below 5%) to protect the dry lubrication.
    - Verifying stability with a representative inertia of a large boom or reflector, 20 kg·m².

  13. Pre-processing by data augmentation for improved ellipse fitting.

    PubMed

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application: 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
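
    As an illustration of the augmentation idea, with OpenCV's least-squares fitter standing in for the fitting stage and a caller-supplied weight in place of the paper's data-point eccentricity score, one can replicate points before fitting:

    ```python
    import numpy as np
    import cv2

    def fit_with_augmentation(pts, weights):
        """Replicate each point round(weights[i]) times, then fit an ellipse.
        'weights' plays the role of the paper's per-point eccentricity score."""
        reps = np.maximum(1, np.round(weights).astype(int))
        aug = np.repeat(pts, reps, axis=0).astype(np.float32)
        return cv2.fitEllipse(aug)             # ((cx, cy), (major, minor), angle)

    # Noisy quarter-arc of an ellipse: a sparse, 'high-eccentricity' sample.
    t = np.linspace(0, np.pi / 2, 40)
    pts = np.c_[100 + 80 * np.cos(t), 60 + 30 * np.sin(t)] + np.random.randn(40, 2)
    print(fit_with_augmentation(pts, weights=np.full(40, 3.0)))
    ```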

  14. A Process of Multidisciplinary Team Communication to Individualize Stroke Rehabilitation of an 84-Year-Old Stroke Patient.

    PubMed

    Hiragami, Fukumi; Hiragami, Shogo; Suzuki, Yasuo

    Previously, we have used a multidisciplinary team (MDT) approach to individualize rehabilitation of very old stroke patients as a means to establish intervention points for addressing impaired activities of daily living (ADL). However, this previous study was limited because it lacked a description of the communication process over time. This case study characterized the MDT communication process in the rehabilitation of an 84-year-old patient over the course of 15 weeks. The MDT consisted of 3 nurses, 1 doctor, 6 therapists, and the patient/families. Meetings (15 minutes each) were held at 4, 6, 8, and 15 weeks following the patient's admission. To individualize the rehabilitation, the communication process involved gaining knowledge about ADL impairments, sharing assessments, providing treatment options, and reflecting on desired treatment outcomes, a process termed KATR. The knowledge, assessment, treatment, and reflection (KATR) process established intervention points focusing on specific ADL impairments. The team members focused the interventions on the impaired ADL identified in the KATR process, and individualized rehabilitation was generated from the MDT information-sharing knowledge. In the initial meeting (Week 4), intervention points derived from the KATR process focused on rehabilitation of self-care impairments. These impairments improved by Week 15. By the last meeting, the MDT intervention points focused on mobility impairments. Having an organized communication process (i.e., KATR) facilitates individualization of rehabilitation without lengthy and frequent MDT meetings and enhances the quality of rehabilitation after a stroke.

  15. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  16. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody... dehydration unit is passed to remove entrained gas and hydrocarbon liquid. The GCG separator is commonly...

  17. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody... dehydration unit is passed to remove entrained gas and hydrocarbon liquid. The GCG separator is commonly...

  18. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody... dehydration unit is passed to remove entrained gas and hydrocarbon liquid. The GCG separator is commonly...

  20. Entanglement properties of the antiferromagnetic-singlet transition in the Hubbard model on bilayer square lattices

    DOE PAGES

    Chang, Chia-Chen; Singh, Rajiv R. P.; Scalettar, Richard T.

    2014-10-10

    Here, we calculate the bipartite Rényi entanglement entropy of an L x L x 2 bilayer Hubbard model using a determinantal quantum Monte Carlo method recently proposed by Grover [Phys. Rev. Lett. 111, 130402 (2013)]. Two types of bipartition are studied: (i) one that divides the lattice into two L x L planes, and (ii) one that divides the lattice into two equal-size (L x L/2 x 2) bilayers. Furthermore, we compare our calculations with those for the tight-binding model studied by the correlation matrix method. As expected, the entropy for bipartition (i) scales as L², while the latter scales with L with possible logarithmic corrections. The onset of the antiferromagnet-to-singlet transition shows up as a saturation of the former to a maximal value and of the latter to a small value in the singlet phase. We also comment on the large uncertainties in the numerical results with increasing U, which would have to be overcome before the critical behavior and logarithmic corrections can be quantified.

  1. On the properties of the normal state of the two-dimensional Hubbard model

    NASA Astrophysics Data System (ADS)

    Lemay, Francois

    Since their discovery, experimental studies have shown that high-temperature superconductors have a very strange normal phase. The properties of these materials are not well described by Fermi liquid theory. The two-dimensional Hubbard model, although it has not yet been solved, is still considered a candidate for explaining the physics of these compounds. In this work, we highlight several electronic properties of the model that are incompatible with the existence of quasiparticles. We show in particular that the susceptibility of free electrons on a lattice contains logarithmic singularities that decisively influence the low-frequency properties of the self-energy. These singularities are responsible for the destruction of the quasiparticles. In the absence of antiferromagnetic fluctuations, they are also responsible for the existence of a small pseudogap in the spectral weight at the Fermi level. The properties of the model are also studied for a Fermi surface similar to that of the high-temperature superconductors. A parallel is drawn between certain characteristics of the model and those of these materials.

  2. Mental health in natural disasters: intervention strategies with older adults in rural areas of Chile.

    PubMed

    Osorio-Parraguez, Paulina; Espinoza, Adriana

    2016-06-01

    This article presents an intervention strategy carried out with older adults in the commune of Paredones, in Chile's sixth region, after the earthquake and tsunami of 27 February 2010 in Chile, in the context of a study on the strengths and vulnerabilities displayed by this age group following a natural disaster. A description of the methodological development of the intervention and of its theoretical and conceptual underpinnings is presented. As a result of this process, a strategy is proposed that works through the identification of the subjects' own experiences and strengths. In this way, the negative effects of the social determinants of health (such as age and place of residence) in a crisis context are minimized, allowing older adults to strengthen their individual and collective resources in support of their psychosocial well-being. © The Author(s) 2015.

  3. Influence of prestress and periodic corrugated boundary surfaces on Rayleigh waves in an orthotropic medium over a transversely isotropic dissipative semi-infinite substrate

    NASA Astrophysics Data System (ADS)

    Gupta, Shishir; Ahmed, Mostaid

    2017-01-01

    The paper concerns the study of Rayleigh-type surface waves in an orthotropic crustal layer over a transversely isotropic dissipative semi-infinite medium under the effect of prestress and corrugated boundary surfaces. Separate displacement components for both media have been derived in order to characterize the dynamics of the individual materials. Suitable boundary conditions have been imposed on the surface wave solutions of the elasto-dynamical equations, taking the corrugated boundary surfaces into account. From the real part of the sixth-order complex determinantal expression, we obtain the frequency equation for Rayleigh waves in the proposed earth model. Possible special cases have been considered, and they comply well with the corresponding classical results. Numerical computations have been performed in order to graphically demonstrate the role of the layer thickness, prestress, corrugation parameters and dissipation on the Rayleigh wave velocity. The study may be regarded as important due to its possible applications in delay lines and in investigating the deformation characteristics of solids as well as typical rock formations.

  4. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus acquiring the spatial data and intensity of 3D objects in a fast and efficient way. The RGB attribute information of the data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes the differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
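
    One plausible reading of the intensity-plus-morphology part of such a pipeline, sketched with mock data (the thresholds, grid size and fabricated attribute columns are illustrative assumptions, not the authors' parameters):

    ```python
    import numpy as np
    from scipy.ndimage import binary_opening

    # Mock point cloud: columns are x, y, z, intensity, grayscale.
    pts = np.random.rand(100_000, 5)

    # 1. Keep returns whose intensity stands out against the road surface.
    cand = pts[pts[:, 3] > np.percentile(pts[:, 3], 95)]

    # 2. Rasterize candidates on a 5 cm grid and clean speckle with a
    #    morphological opening (stand-in for the paper's error elimination).
    res = 0.05
    ij = np.floor(cand[:, :2] / res).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    markings = binary_opening(grid, iterations=1)
    ```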

  5. The role of the optimization process in illumination design

    NASA Astrophysics Data System (ADS)

    Gauvin, Michael A.; Jacobsen, David; Byrne, David J.

    2015-07-01

    This paper examines the role of the optimization process in illumination design. We discuss why the starting point of the optimization process is crucial to a better design, and why it is also important that the user understands the basic design problem and implements the correct merit function. Both a brute force method and the Downhill Simplex method are used to demonstrate optimization methods, with a focus on using interactive design tools to create better starting points and streamline the optimization process.
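
    The starting-point sensitivity the paper highlights is easy to demonstrate with scipy's Nelder-Mead (downhill simplex) routine on a toy merit function; the function and the two start points below are illustrative stand-ins for a real illumination merit function.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy two-parameter merit function with shallow local structure.
    def merit(x):
        return (x[0] - 1.3)**2 + 0.5 * (x[1] + 0.7)**2 + 0.1 * np.sin(5 * x[0])**2

    # Nelder-Mead only finds the local optimum nearest its starting simplex,
    # so different starting points can land on different solutions.
    for x0 in ([0.0, 0.0], [2.0, -1.0]):
        res = minimize(merit, x0, method='Nelder-Mead')
        print(x0, '->', res.x, res.fun)
    ```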

  6. Efficient LIDAR Point Cloud Data Managing and Processing in a Hadoop-Based Distributed Framework

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Sha, D.; Han, X.

    2017-10-01

    Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze the high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, which takes advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experiment results show that the proposed framework can efficiently manage and process big LiDAR data.

  7. 36 CFR 907.6 - Major decision points.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... decisionmaking process. For most Corporation projects there are three distinct stages in the decision making...) Implementation stage. (b) Environmental review will be integrated into the decision making process of the... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Major decision points. 907.6...

  8. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... means hydrocarbon (petroleum) liquid with an initial producing gas-to-oil ratio (GOR) less than 0.31... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody...

  9. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... means hydrocarbon (petroleum) liquid with an initial producing gas-to-oil ratio (GOR) less than 0.31... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody...

  10. Understanding the Process of Contextualization

    ERIC Educational Resources Information Center

    Wyatt, Tasha

    2015-01-01

    The literature on culture and education points to the importance of using students' cultural knowledge in the teaching and learning process. While the theory of culturally relevant education has expanded in the last several decades, the practical implementation continues to lag far behind. This disparity points to the lack of tools and other…

  11. Apparatus and method for implementing power saving techniques when processing floating point values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young Moon; Park, Sang Phill

    An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.

  12. Nonlinear digital signal processing in mental health: characterization of major depression using instantaneous entropy measures of heartbeat dynamics.

    PubMed

    Valenza, Gaetano; Garcia, Ronald G; Citi, Luca; Scilingo, Enzo P; Tomaz, Carlos A; Barbieri, Riccardo

    2015-01-01

    Nonlinear digital signal processing methods that address system complexity have provided useful computational tools for helping in the diagnosis and treatment of a wide range of pathologies. More specifically, nonlinear measures have been successful in characterizing patients with mental disorders such as Major Depression (MD). In this study, we propose the use of instantaneous measures of entropy, namely the inhomogeneous point-process approximate entropy (ipApEn) and the inhomogeneous point-process sample entropy (ipSampEn), to describe a novel characterization of MD patients undergoing affective elicitation. Because these measures are built within a nonlinear point-process model, they allow for the assessment of complexity in cardiovascular dynamics at each moment in time. Heartbeat dynamics were characterized from 48 healthy controls and 48 patients with MD while emotionally elicited through either neutral or arousing audiovisual stimuli. Experimental results coming from the arousing tasks show that ipApEn measures are able to instantaneously track heartbeat complexity as well as discern between healthy subjects and MD patients. Conversely, standard heart rate variability (HRV) analysis performed in both time and frequency domains did not show any statistical significance. We conclude that measures of entropy based on nonlinear point-process models might contribute to devising useful computational tools for care in mental health.
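
    For orientation, here is a compact implementation of classical sample entropy, the fixed-window ancestor of the paper's ipSampEn (which instead updates the estimate instantaneously within a probabilistic point-process heartbeat model); m, r and the synthetic RR series are illustrative.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """SampEn: -log of the ratio of (m+1)- to m-length template matches
        within tolerance r times the standard deviation (self-matches excluded)."""
        x = np.asarray(x, float)
        tol = r * x.std()

        def match_count(mm):
            templ = np.lib.stride_tricks.sliding_window_view(x, mm)
            c = 0
            for i in range(len(templ) - 1):
                d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)  # Chebyshev
                c += np.sum(d <= tol)
            return c

        b, a = match_count(m), match_count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    rng = np.random.default_rng(0)
    rr = 0.8 + 0.05 * rng.standard_normal(500)   # synthetic RR intervals (s)
    print(sample_entropy(rr))
    ```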

  13. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
    3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
    3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
    3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
    3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
    4.1. The case of a non-degenerate minimum point ([137], I)
    4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
    5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
    5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
    5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
    5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
    6.1. General situations
    6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
    6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
    7.1. Homogeneous fields and fields with constant dispersion
    7.2. Finitely many maximum points of dispersion
    7.3. Manifold of maximum points of dispersion
    7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
    8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
    8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
    8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
    8.4. Asymptotics of distributions of maxima of the norms of l²-valued Gaussian processes
    8.5. Exact asymptotics of large deviations for the l²-valued Ornstein-Uhlenbeck process
    Bibliography

  14. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with long processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions, for which the processing time has grown steeply with the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load: they can restrict the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof of concept show a significant reduction of the processing time when pre-processing the galaxy positions with Voronoi Tessellation.
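
    The underlying pruning idea can be sketched with a k-d tree in place of the paper's Voronoi Tessellation: space partitioning limits pair counting to pairs that can actually fall below the maximum separation. The flat geometry and parameters are illustrative, and a real estimator would also need random catalogues (e.g., Landy-Szalay).

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    pts = np.random.rand(20_000, 2)            # mock projected galaxy positions
    tree = cKDTree(pts)                        # spatial index prunes distant pairs

    # Only pairs closer than the largest bin edge are ever enumerated.
    pairs = tree.query_pairs(r=0.05, output_type='ndarray')
    d = np.linalg.norm(pts[pairs[:, 0]] - pts[pairs[:, 1]], axis=1)
    dd, edges = np.histogram(d, bins=25, range=(0, 0.05))   # DD counts per bin
    print(dd[:5])
    ```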

  15. 2D modeling of direct laser metal deposition process using a finite particle method

    NASA Astrophysics Data System (ADS)

    Anedaf, T.; Abbès, B.; Abbès, F.; Li, Y. M.

    2018-05-01

    Direct laser metal deposition is one of the additive manufacturing processes used to produce complex metallic parts. A thorough understanding of the underlying physical phenomena is required to obtain high-quality parts. In this work, a mathematical model is presented to simulate the coaxial direct laser deposition process, taking into account mass addition, heat transfer, and fluid flow with a free surface and melting. The fluid flow in the melt pool, together with the mass and energy balances, is solved using the Computational Fluid Dynamics (CFD) software NOGRID-points, based on the meshless Finite Pointset Method (FPM). The basis of the computations is a point cloud, which represents the continuum fluid domain. Each finite point carries all fluid information (density, velocity, pressure and temperature). The dynamic shape of the molten zone is explicitly described by the point cloud. The proposed model is used to simulate single-layer cladding.

  16. Relationship between quality improvement processes and clinical performance.

    PubMed

    Damberg, Cheryl L; Shortell, Stephen M; Raube, Kristiana; Gillies, Robin R; Rittenhouse, Diane; McCurdy, Rodney K; Casalino, Lawrence P; Adams, John

    2010-08-01

    To examine the association between performance on clinical process measures and intermediate outcomes and the use of chronic care management processes (CMPs), electronic medical record (EMR) capabilities, and participation in external quality improvement (QI) initiatives. Cross-sectional analysis of linked 2006 clinical performance scores from the Integrated Healthcare Association's pay-for-performance program and survey data from the 2nd National Study of Physician Organizations among 108 California physician organizations (POs). Controlling for differences in PO size, organization type (medical group or independent practice association), and Medicaid revenue, we used ordinary least squares regression analysis to examine the association between the use of CMPs, EMR capabilities, and external QI initiatives and performance on the following 3 clinical composite measures: diabetes management, processes of care, and intermediate outcomes (diabetes and cardiovascular). Greater use of CMPs was significantly associated with clinical performance: among POs using more than 5 CMPs, we observed a 3.2-point higher diabetes management score on a performance scale with scores ranging from 0 to 100 (P <.001), while for each 1.0-point increase on the CMP index, we observed a 1.0-point gain in intermediate outcomes (P <.001). Participation in external QI initiatives was positively associated with improved delivery of clinical processes of care: a 1.0-point increase on the QI index translated into a 1.4-point gain in processes-of-care performance (P = .02). No relationship was observed between EMR capabilities and performance. Greater investments in CMPs and QI interventions may help POs raise clinical performance and achieve success under performance-based accountability schemes.

  17. The application of prototype point processes for the summary and description of California wildfires

    USGS Publications Warehouse

    Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.

    2011-01-01

    A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season. © 2011 Blackwell Publishing Ltd.

  18. Robotic tool positioning process using a multi-line off-axis laser triangulation sensor

    NASA Astrophysics Data System (ADS)

    Pinto, T. C.; Matos, G.

    2018-03-01

    Proper positioning of a friction stir welding head for pin insertion, driven by a closed chain robot, is important to ensure quality repair of cracks. A multi-line off-axis laser triangulation sensor was designed to be integrated to the robot, allowing relative measurements of the surface to be repaired. This work describes the sensor characteristics, its evaluation and the measurement process for tool positioning to a surface point of interest. The developed process uses a point of interest image and a measured point cloud to define the translation and rotation for tool positioning. Sensor evaluation and tests are described. Keywords: laser triangulation, 3D measurement, tool positioning, robotics.

  19. Mass Measurements beyond the Major r-Process Waiting Point 80Zn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baruah, S.; Herlert, A.; Schweikhard, L.

    2008-12-31

    High-precision mass measurements on neutron-rich zinc isotopes 71m,72-81Zn have been performed with the Penning trap mass spectrometer ISOLTRAP. For the first time, the mass of 81Zn has been experimentally determined. This makes 80Zn the first of the few major waiting points along the path of the astrophysical rapid neutron-capture process where the neutron-separation energy and neutron-capture Q-value are determined experimentally. The astrophysical conditions required for this waiting point and its associated abundance signatures to occur in r-process models can now be mapped precisely. The measurements also confirm the robustness of the N=50 shell closure for Z=30.

  20. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
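
    In numpy this observation is a one-liner: fitting a polynomial of degree n to n + 1 points (not lying on a lower-degree curve) reproduces them exactly. The points below are illustrative.

    ```python
    import numpy as np

    # Four points not on any quadratic determine a unique cubic.
    x = np.array([0.0, 1.0, 2.0, 4.0])
    y = np.array([1.0, 3.0, 2.0, 5.0])
    coeffs = np.polyfit(x, y, deg=len(x) - 1)       # interpolating polynomial
    assert np.allclose(np.polyval(coeffs, x), y)    # passes through every point
    ```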

  1. 21 CFR 120.8 - Hazard Analysis and Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...: (i) Critical control points designed to control food hazards that are reasonably likely to occur and could be introduced inside the processing plant environment; and (ii) Critical control points designed... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Hazard Analysis and Critical Control Point (HACCP...

  2. 21 CFR 120.8 - Hazard Analysis and Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...: (i) Critical control points designed to control food hazards that are reasonably likely to occur and could be introduced inside the processing plant environment; and (ii) Critical control points designed... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Hazard Analysis and Critical Control Point (HACCP...

  3. 21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... identified food safety hazards, including as appropriate: (i) Critical control points designed to control... control points designed to control food safety hazards introduced outside the processing plant environment... Control Point (HACCP) plan. 123.6 Section 123.6 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF...

  4. 21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... identified food safety hazards, including as appropriate: (i) Critical control points designed to control... control points designed to control food safety hazards introduced outside the processing plant environment... Control Point (HACCP) plan. 123.6 Section 123.6 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF...

  5. 21 CFR 120.8 - Hazard Analysis and Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...: (i) Critical control points designed to control food hazards that are reasonably likely to occur and could be introduced inside the processing plant environment; and (ii) Critical control points designed... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Hazard Analysis and Critical Control Point (HACCP...

  6. 21 CFR 120.8 - Hazard Analysis and Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...: (i) Critical control points designed to control food hazards that are reasonably likely to occur and could be introduced inside the processing plant environment; and (ii) Critical control points designed... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Hazard Analysis and Critical Control Point (HACCP...

  7. 21 CFR 120.8 - Hazard Analysis and Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...: (i) Critical control points designed to control food hazards that are reasonably likely to occur and could be introduced inside the processing plant environment; and (ii) Critical control points designed... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Hazard Analysis and Critical Control Point (HACCP...

  8. 21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... identified food safety hazards, including as appropriate: (i) Critical control points designed to control... control points designed to control food safety hazards introduced outside the processing plant environment... Control Point (HACCP) plan. 123.6 Section 123.6 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF...

  9. Referent control and motor equivalence of reaching from standing

    PubMed Central

    Tomita, Yosuke; Feldman, Anatol G.

    2016-01-01

    Motor actions may result from central changes in the referent body configuration, defined as the body posture at which muscles begin to be activated or deactivated. The actual body configuration deviates from the referent configuration, particularly because of body inertia and environmental forces. Within these constraints, the system tends to minimize the difference between these configurations. For pointing movement, this strategy can be expressed as the tendency to minimize the difference between the referent trajectory (RT) and actual trajectory (QT) of the effector (hand). This process may underlie motor equivalent behavior that maintains the pointing trajectory regardless of the number of body segments involved. We tested the hypothesis that the minimization process is used to produce pointing in standing subjects. With eyes closed, 10 subjects reached from a standing position to a remembered target located beyond arm length. In randomly chosen trials, hip flexion was unexpectedly prevented, forcing subjects to take a step during pointing to prevent falling. The task was repeated when subjects were instructed to intentionally take a step during pointing. In most cases, reaching accuracy and trajectory curvature were preserved due to adaptive condition-specific changes in interjoint coordination. Results suggest that referent control and the minimization process associated with it may underlie motor equivalence in pointing. NEW & NOTEWORTHY Motor actions may result from minimization of the deflection of the actual body configuration from the centrally specified referent body configuration, in the limits of neuromuscular and environmental constraints. The minimization process may maintain reaching trajectory and accuracy regardless of the number of body segments involved (motor equivalence), as confirmed in this study of reaching from standing in young healthy individuals. Results suggest that the referent control process may underlie motor equivalence in reaching. PMID:27784802

  10. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
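
    A minimal sketch of the homogeneous-transformation bookkeeping such a model builds on: points are promoted to homogeneous coordinates and mapped through a single 4 x 4 matrix combining rotation and translation. A real lidar simulator would add perspective projection, occlusion and sensor noise; the pose below is illustrative.

    ```python
    import numpy as np

    def homogeneous(rotation, translation):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and 3-vector."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    # Rotate a mock point cloud 30 degrees about z and offset it, as a
    # sensor-to-body pose change would.
    c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
    Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T = homogeneous(Rz, [0.5, 0.0, 2.0])

    cloud = np.random.rand(1000, 3)
    cloud_h = np.c_[cloud, np.ones(len(cloud))]     # homogeneous coordinates
    transformed = (T @ cloud_h.T).T[:, :3]
    ```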

  11. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of the change-points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting-process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.

  12. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and to drive a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integrated gyro outputs is developed and analyzed. The filter includes unit time delays representing the asynchronous operation of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to gyro filtering.
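
    In one axis, the gyro-filtering idea reduces to propagating the attitude with integrated gyro rates and correcting it with a fixed steady-state Kalman gain whenever the video sensor delivers a star-based measurement. The gain, rates and noise levels below are illustrative assumptions, not the VIP system's actual values.

    ```python
    import numpy as np

    K = 0.15                      # steady-state Kalman gain (illustrative)
    dt = 0.02                     # gyro sampling interval (s)
    rng = np.random.default_rng(0)

    theta_true, theta_est = 0.0, 0.0
    for step in range(500):
        rate = 0.1 * np.sin(0.5 * step * dt)                # true body rate (rad/s)
        theta_true += rate * dt
        theta_est += (rate + 1e-3 * rng.standard_normal()) * dt   # drifting gyro
        if step % 25 == 0:                                  # video frame every 0.5 s
            star_meas = theta_true + 1e-4 * rng.standard_normal()
            theta_est += K * (star_meas - theta_est)        # measurement update
    print(abs(theta_true - theta_est))                      # residual pointing error
    ```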

  13. 40 CFR 63.640 - Applicability and designation of affected source.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... reformer catalyst regeneration vents, and sulfur plant vents; and (5) Emission points routed to a fuel gas... required for refinery fuel gas systems or emission points routed to refinery fuel gas systems. (e) The... petroleum refining process unit that is subject to this subpart; (3) Units processing natural gas liquids...

  14. 40 CFR 63.640 - Applicability and designation of affected source.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... reformer catalyst regeneration vents, and sulfur plant vents; and (5) Emission points routed to a fuel gas... required for refinery fuel gas systems or emission points routed to refinery fuel gas systems. (e) The... petroleum refining process unit that is subject to this subpart; (3) Units processing natural gas liquids...

  15. The Point of Creative Frustration and the Creative Process: A New Look at an Old Model.

    ERIC Educational Resources Information Center

    Sapp, D. David

    1992-01-01

    This paper offers an extension of Graham Wallas' model of the creative process. It identifies periods of problem solving, incubation, and growth with specific points of initial idea inception, creative frustration, and illumination. Responses to creative frustration are described, including denial, rationalization, acceptance of stagnation, and new…

  16. What Makes School Ethnography "Ethnographic?"

    ERIC Educational Resources Information Center

    Erickson, Frederick

    Ethnography as an inquiry process guided by a point of view, rather than as a reporting process guided by a standard technique or set of techniques, is the main point of this essay, which suggests the application of Malinowski's theories and methods to an ethnology of the school and indicates reasons why traditional ethnography is inadequate to the…

  17. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density on age estimates along the length of a proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
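
    The Beta-marginal result has a short Monte Carlo illustration: conditioning i.i.d. Gamma accumulation increments on the total span between two absolutely dated tie points makes the normalized partial sums Dirichlet, hence Beta-distributed at each depth. The sketch below, with assumed shape parameters and tie-point ages, produces empirical confidence intervals in that spirit; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_sims = 100, 5000
age_top, age_bottom = 0.0, 10_000.0   # absolutely dated tie points (years)

# Accumulation modeled as i.i.d. Gamma increments per layer; conditioning on the
# total span makes each normalized partial sum Dirichlet, hence Beta-marginal.
incr = rng.gamma(shape=2.0, scale=1.0, size=(n_sims, n_layers))
frac = np.cumsum(incr, axis=1) / incr.sum(axis=1, keepdims=True)
ages = age_top + frac * (age_bottom - age_top)

depth_idx = 49                        # a mid-archive layer
lo, hi = np.percentile(ages[:, depth_idx], [2.5, 97.5])
print(lo, hi)                         # empirical 95% interval for the age at that depth
```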

  18. Safety, risk and mental health: decision-making processes prescribed by Australian mental health legislation.

    PubMed

    Smith-Merry, Jennifer; Caple, Andrew

    2014-03-01

    Adverse events in mental health care occur frequently and cause significant distress for those who experience them, derailing treatment and sometimes leading to death. These events are clustered around particular aspects of care and treatment and are therefore avoidable if practices in these areas are strengthened. The research reported in this article takes as its starting point coronial recommendations made in relation to mental health. We report on those points and processes in treatment and discharge where coronial recommendations are most frequently made. We then examine the legislative requirements around these points and processes in three Australian States. We find that the key areas that need to be strengthened to avoid adverse events are assessment processes, communication and information transfer, documentation, planning and training. We make recommendations for improvements in these key areas.

  19. Optimizing operating parameters of a honeycomb zeolite rotor concentrator for processing TFT-LCD volatile organic compounds with competitive adsorption characteristics.

    PubMed

    Lin, Yu-Chih; Chang, Feng-Tang

    2009-05-30

    In this study, we attempted to enhance the removal efficiency of a honeycomb zeolite rotor concentrator (HZRC), operated at optimal parameters, for processing TFT-LCD volatile organic compounds (VOCs) with competitive adsorption characteristics. The results indicated that when the HZRC processed a VOC stream of mixed compounds, compounds with a high boiling point took precedence in the adsorption process. In addition, compounds with a low boiling point already adsorbed onto the HZRC were displaced by the high-boiling-point compounds. To achieve optimal operating parameters for high VOC removal efficiency, the results suggested controlling the inlet velocity to <1.5 m/s, reducing the concentration ratio to 8 times, increasing the desorption temperature to 200-225 degrees C, and setting the rotation speed to 6.5 rpm.

  20. Measuring of temperatures of phase objects using a point-diffraction interferometer plate made with the thermocavitation process

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan C.; Berriel-Valdos, L. R.; Aguilar, J. Felix; Mejia-Romero, S.

    An optical system formed by four point-diffraction interferometers is used for measuring the refractive index distribution of a phase object. The phase of the object is assumed to be sufficiently smooth to be computed in terms of the Radon transform, and it is processed with an iterative tomographic algorithm. The associated refractive index distribution is then calculated. To recover the phase from the interferograms we use the Kreis method, which is useful for interferograms having only a few fringes. As an application of our technique, the temperature distribution of a candle flame is retrieved with the aid of the Gladstone-Dale equation. We also describe the process of manufacturing the point-diffraction interferometer (PDI) plates, which were made by means of the thermocavitation process. The obtained three-dimensional distribution of temperature is presented.

  1. Neurosurgery certification in member societies of the World Federation of Neurosurgical Societies: Asia.

    PubMed

    Gasco, Jaime; Braun, Jonathan D; McCutcheon, Ian E; Black, Peter M

    2011-01-01

    To objectively compare the complexity and diversity of the certification process in neurological surgery among member societies of the World Federation of Neurosurgical Societies. This study centers on continental Asia. We provide here an analysis based on the responses to a 13-item survey. The data received were analyzed, and three Regional Complexity Scores (RCS) were designed. To compare national board experience, eligibility requirements for access to the certification process, and the obligatory nature of the examinations, an RCS-Organizational score was created (20 points maximum). To analyze the complexity of the examination, an RCS-Components score was designed (20 points maximum). The sum of both is presented as a Global RCS score. Only those countries that responded to the survey and presented nationwide homogeneity in the conduct of neurosurgery examinations could be included within the scoring system. In addition, a descriptive summary of the certification process per responding society is also provided. On the basis of the data provided by our RCS system, the highest Global RCS was achieved by South Korea and Malaysia (21/40 points), followed by the joint examination of Singapore and Hong Kong (FRCS-Ed) (20/40 points), Japan (17/40 points), the Philippines (15/40 points), and Taiwan (13/40 points). The experience from these leading countries should be of value to all countries within Asia. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency-domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data-association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
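
    The superposition property the abstract relies on is easy to demonstrate: intensities of individual notes add, and the Poisson-process likelihood of the detected peaks needs only the summed intensity and its integral, log L = Σ_i log λ(x_i) − ∫ λ(x) dx. The sketch below uses a toy Gaussian-harmonic intensity model with assumed parameters; it is not the paper's model.

```python
import numpy as np

def note_intensity(freqs, f0, n_harmonics=8, width=3.0, height=5.0):
    """Toy intensity function: Gaussian bumps at the harmonics of fundamental f0 (Hz)."""
    freqs = np.asarray(freqs, dtype=float)
    lam = np.zeros_like(freqs)
    for k in range(1, n_harmonics + 1):
        lam += height * np.exp(-0.5 * ((freqs - k * f0) / width) ** 2)
    return lam

def chord_loglik(peaks, f0s, grid):
    """Poisson-process log-likelihood of detected peaks for a candidate chord.

    Superposition: the chord intensity is the sum of per-note intensities, so
    log L = sum_i log lam(x_i) - integral of lam over the frequency range.
    """
    lam_peaks = sum(note_intensity(peaks, f0) for f0 in f0s) + 1e-12
    lam_grid = sum(note_intensity(grid, f0) for f0 in f0s)
    integral = lam_grid.sum() * (grid[1] - grid[0])      # rectangle-rule integral
    return np.log(lam_peaks).sum() - integral

grid = np.linspace(20.0, 4000.0, 8000)
peaks = np.array([261.6, 329.6, 392.0, 523.3, 659.3])    # detected spectral peaks (Hz)
print(chord_loglik(peaks, [261.6, 329.6, 392.0], grid))  # C-major triad candidate
```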

  3. Improved whisker pointing technique for micron-size diode contact

    NASA Technical Reports Server (NTRS)

    Mattauch, R. J.; Green, G.

    1982-01-01

    Pointed phosphor-bronze whiskers are commonly used to contact micron-size Schottky barrier diodes. A process is presented that allows such wire to be pointed, achieving the desired cone angle and tip diameter without the use of highly undesirable chemical reagents.

  4. Point pattern analysis applied to flood and landslide damage events in Switzerland (1972-2009)

    NASA Astrophysics Data System (ADS)

    Barbería, Laura; Schulte, Lothar; Carvalho, Filipe; Peña, Juan Carlos

    2017-04-01

    Damage caused by meteorological and hydrological extreme events depends on many factors: not only on hazard, but also on exposure and vulnerability. In order to reach a better understanding of the relation of these complex factors, their spatial pattern, and the underlying processes, the spatial dependency between values of damage recorded at sites at different distances can be investigated by point pattern analysis. For the Swiss flood and landslide damage database (1972-2009), first steps of point pattern analysis have been carried out. The most severe events were selected (severe, very severe, and catastrophic, according to the GEES classification; a total of 784 damage points) and Ripley's K-test and L-test were performed, amongst others. For this purpose, R's spatstat library was used. The results confirm that the damage points present a statistically significant clustered pattern, which could be connected to the prevalence of damage near watercourses and to the rainfall distribution of each event, together with other factors. On the other hand, bivariate analysis shows there is no segregated pattern depending on process type: flood/debris flow vs. landslide. This close relation points to a coupling between slope and fluvial processes, connectivity between small- and middle-size catchments, and the influence of the spatial distribution of precipitation, temperature (snow melt and snow line), and other predisposing factors such as soil moisture, land cover, and environmental conditions. Therefore, further studies will investigate the relationship between the spatial pattern and one or more covariates, such as elevation, distance from a watercourse, or land use. The final goal will be to fit a regression model to the data, so that the adjusted model predicts the intensity of the point process as a function of the above-mentioned covariates.
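
    The study itself used R's spatstat; purely as an illustration of what the K-test measures, the Python sketch below computes a naive (edge-effect-uncorrected) Ripley's K estimate for a point set in a unit window. Under complete spatial randomness K(r) ≈ πr², so values above that curve indicate clustering.

```python
import numpy as np

def ripley_k(points, r_values, area):
    """Naive Ripley's K estimate in a rectangular window (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-pairs
    lam = n / area                              # intensity estimate
    return np.array([(d < r).sum() / (lam * n) for r in r_values])

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(500, 2))          # CSR reference: K(r) ~ pi * r**2
r = np.linspace(0.01, 0.2, 20)
k_hat = ripley_k(pts, r, area=1.0)
print(np.c_[r, k_hat, np.pi * r ** 2])          # estimate vs. the CSR benchmark
```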

  5. High resolution hybrid optical and acoustic sea floor maps (Invited)

    NASA Astrophysics Data System (ADS)

    Roman, C.; Inglis, G.

    2013-12-01

    This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter-scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two-stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch, and roll, that describe how each sensor is mounted to the vehicle. The end results of the navigation step are a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, and the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross-modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single-modality maps and a naively assembled multi-modal map where all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.

  6. A statistical model investigating the prevalence of tuberculosis in New York City using counting processes with two change-points

    PubMed Central

    ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.

    2008-01-01

    SUMMARY We considered a Bayesian analysis for the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during this period. We modelled the dataset using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data was carried out using Markov chain Monte Carlo methods. Simulated Gibbs samples for the parameters of interest were obtained using the WinBUGS software. PMID:18346287
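
    The paper's inference is Bayesian (Gibbs sampling in WinBUGS); a maximum-likelihood profile over candidate change-point pairs conveys the same piecewise-constant-rate structure in a few lines. The rates, change-point years, and counts in the sketch below are simulated stand-ins, not the New York City data.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
years = np.arange(1970, 2001)
# Hypothetical piecewise-constant annual rates with change-points near 1978 and 1992.
rates = np.where(years < 1978, 2000.0, np.where(years < 1992, 3500.0, 1500.0))
counts = rng.poisson(rates)

def loglik(tau1, tau2):
    """Profile Poisson log-likelihood (data-constant terms dropped) for a rate
    that is constant on each of the three segments split at indices tau1 < tau2."""
    ll = 0.0
    for seg in np.split(np.arange(len(years)), [tau1, tau2]):
        lam = counts[seg].mean()                 # segment MLE of the rate
        ll += counts[seg].sum() * np.log(lam) - lam * len(seg)
    return ll

tau1, tau2 = max(combinations(range(3, len(years) - 3), 2), key=lambda t: loglik(*t))
print(years[tau1], years[tau2])                  # estimated change-point years
```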

  7. Toward a formal verification of a floating-point coprocessor and its composition with a central processing unit

    NASA Technical Reports Server (NTRS)

    Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.

    1991-01-01

    Discussed here is work to formally specify and verify a floating-point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit, used to communicate with the CPU, and the arithmetic processing unit, used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher-order logic is demonstrated.

  8. The stability analysis of the nutrition restricted dynamic model of the microalgae biomass growth

    NASA Astrophysics Data System (ADS)

    Ratianingsih, R.; Fitriani, Nacong, N.; Resnawati, Mardlijah, Widodo, B.

    2018-03-01

    Biomass production is essential in microalgae farming, so determining the biomass growth rate is very important. This paper proposes a dynamic model of microalgae biomass growth restricted by nutrition. The model is developed by considering several related processes: photosynthesis, respiration, nutrient absorption, stabilization, lipid synthesis, and CO2 mobilization. The stability of the dynamical system that represents these processes is analyzed using the Jacobian matrix of the linearized system in the neighborhood of its critical point. A lipid-formation threshold is required for the critical point to exist; in that case, the absorption rate of the respiration process has to be inversely proportional to the absorption rate of CO2 due to the photosynthesis process. The Pontryagin minimum principle also shows that some requirements are needed to obtain a stable critical point, such as the CO2 release rate due to the stabilization process being restricted to 50%, and a threshold on the shifted critical point. If the CO2 release rate due to the photosynthesis process is restricted to such an interval, the stability of the model at the critical point can no longer be guaranteed. The simulation shows that external nutrition plays a role in glucose formation sufficient for biomass growth and lipid production.

  9. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What remains difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data are uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class and are next separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.

  10. Explanation of the Reaction of Monoclonal Antibodies with Candida Albicans Cell Surface in Terms of Compound Poisson Process

    NASA Astrophysics Data System (ADS)

    Dudek, Mirosław R.; Mleczko, Józef

    Surprisingly, very little is still known about the mathematical modeling of peaks in the binding-affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This takes place when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity of the cell-surface repletion. In this case the affinity of cells to bind the antibodies should be described by a more complex process than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, where the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that there is a possibility to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
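
    The multi-modality is easy to reproduce with a clustered Poisson model. The sketch below samples a Neyman Type A variate (a Poisson number of clusters, each contributing Poisson counts), a close cousin of the binomial-cluster construction in the abstract; the parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

def neyman_type_a(n, lam_clusters=2.0, mu_per_cluster=10.0):
    """Sample Neyman Type A variates: Poisson(lam) clusters ("reachable patches"),
    each contributing a Poisson(mu) number of bound antibodies."""
    k = rng.poisson(lam_clusters, size=n)      # clusters per cell
    return rng.poisson(mu_per_cluster * k)     # total count per cell

samples = neyman_type_a(100_000)
hist = np.bincount(samples) / len(samples)
# Local maxima of the histogram sit near 0, mu, 2*mu, ...: the multi-modality
# the abstract describes (a few noise-driven maxima may appear in the tail).
modes = [i for i in range(1, len(hist) - 1) if hist[i - 1] < hist[i] > hist[i + 1]]
print(modes)
```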

  11. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer-generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three-dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at that point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersections of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high-quality image of a part during the design process can increase the productivity of the designer by helping to visualize the results of the work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
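
    The core of such a vectorization is intersecting many rays with many bodies in one batch of array operations instead of a scalar loop. The NumPy sketch below does this for ray-sphere intersections (solving the quadratic |o + td − c|² = r²); it is a modern illustration of the idea, not the CYBER 205 code.

```python
import numpy as np

def intersect_rays_spheres(origins, dirs, centers, radii):
    """Vectorized ray-sphere intersection: nearest positive hit per ray, NaN for a miss.

    origins, dirs: (R, 3) arrays (dirs unit length); centers: (S, 3); radii: (S,).
    Solves |o + t d - c|^2 = r^2 for all ray/sphere pairs at once.
    """
    oc = origins[:, None, :] - centers[None, :, :]        # (R, S, 3)
    b = np.einsum('rsk,rk->rs', oc, dirs)                 # (R, S) dot products d.(o-c)
    c = np.einsum('rsk,rsk->rs', oc, oc) - radii[None, :] ** 2
    disc = b ** 2 - c
    t = -b - np.sqrt(np.where(disc >= 0, disc, np.nan))   # nearer root of the quadratic
    t = np.where(t > 0, t, np.nan)                        # discard hits behind the origin
    return np.nanmin(t, axis=1)                           # closest sphere per ray

rng = np.random.default_rng(5)
origins = np.zeros((4, 3))
dirs = np.tile([0.0, 0.0, 1.0], (4, 1))                   # four rays along +z
centers = rng.uniform(-1, 1, size=(8, 3)) + [0, 0, 5]     # spheres in front of the rays
print(intersect_rays_spheres(origins, dirs, centers, np.full(8, 0.5)))
```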

  12. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturgeon, Richard W.

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources, and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL are stored in the survey files at ES. LANL uses these survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point include information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology and discusses several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey.
Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.

  13. 40 CFR 63.620 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... requirements of this subpart apply to the following emission points which are components of a granular triple... to the following emission points which are components of a granular triple superphosphate storage... emission points which are components of a diammonium and/or monoammonium phosphate process line: reactors...

  14. 40 CFR 63.620 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... requirements of this subpart apply to the following emission points which are components of a granular triple... to the following emission points which are components of a granular triple superphosphate storage... emission points which are components of a diammonium and/or monoammonium phosphate process line: reactors...

  15. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... concentration. (7) The location of the flow meter in your process chain in relation to the points of CO2 stream... meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons. (iv) The location of...

  16. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... concentration. (7) The location of the flow meter in your process chain in relation to the points of CO2 stream... meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons. (iv) The location of...

  17. 40 CFR 98.426 - Data reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... flow meter in your process chain in relation to the points of CO2 stream capture, dehydration... concentration. (7) The location of the flow meter in your process chain in relation to the points of CO2 stream... meter(s) in metric tons. (iii) The total annual CO2 mass supplied in metric tons. (iv) The location of...

  18. Dynamic Models of Insurgent Activity

    DTIC Science & Technology

    2014-05-19

    Martin Short, P. Jeffrey Brantingham, Frederick Schoenberg, George Tita. Self-Exciting Point Process Modeling of Crime, Journal of the American... Mohler, P. J. Brantingham, G. E. Tita. Gang rivalry dynamics via coupled point process networks, Discrete and Continuous Dynamical Systems - Series... 8532-2-1 Laura Smith, Andrea Bertozzi, P. Jeffrey Brantingham, George Tita, Matthew Valasik. ADAPTATION OF AN ECOLOGICAL TERRITORIAL MODEL TO STREET...

  19. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of focus and off-focus areas. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free space that do not belong to any object surface in 3D space. Generally, unless removed, the presence of an off-focus area adversely affects the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images, using a lookup table containing the shift values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be assessed by analyzing its statistical variance against its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth-image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results demonstrate that this method is capable of removing off-focus points in the reconstructed depth image. The results also show that using a GPU to remove the off-focus points greatly improves the overall computational speed compared with using a CPU.
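
    The variance test at the heart of the method can be sketched independently of the GPU details: collect, for each candidate 3D point, the intensities that the elemental images contribute at the reconstructed depth, and threshold their variance. The data, threshold, and function name below are illustrative assumptions, not the paper's code; the per-point variance is embarrassingly parallel, which is what makes the GPU mapping natural.

```python
import numpy as np

def split_focus(samples, threshold):
    """Classify reconstructed 3D points as focus / off-focus by sample variance.

    samples: (P, E) array; row p holds the E intensities that the elemental
    images contribute to point p after back-propagation to a candidate depth.
    Surface (focus) points collect consistent intensities (low variance);
    off-focus points mix unrelated scene content (high variance).
    """
    return samples.var(axis=1) < threshold      # True = keep as a focus point

rng = np.random.default_rng(6)
focus = 0.8 + 0.02 * rng.standard_normal((500, 25))    # consistent samples
off_focus = rng.uniform(0.0, 1.0, size=(500, 25))      # incoherent samples
mask = split_focus(np.vstack([focus, off_focus]), threshold=0.01)
print(mask[:500].mean(), mask[500:].mean())            # ~1.0 vs ~0.0
```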

  20. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was, for the first time, made independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacles in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system were overcome. Based on the theoretical study, a simulation of penumbral imaging and image reconstruction was carried out and provided fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used to reconstruct the image, providing a fairly good result.

  1. Research on on-line monitoring technology for steel ball's forming process based on load signal analysis method

    NASA Astrophysics Data System (ADS)

    Li, Ying-jun; Ai, Chang-sheng; Men, Xiu-hua; Zhang, Cheng-liang; Zhang, Qi

    2013-04-01

    This paper presents a novel on-line monitoring technology for assessing forming quality in the steel-ball forming process based on a load-signal analysis method, in order to reveal the bottom die's load characteristics in the initial cold-heading forging process of steel balls. A mechanical model of the cold-header production process is established and analyzed using the finite element method, and the maximum cold-heading force is calculated. The results prove that monitoring the cold-heading process via the upsetting force is reasonable and feasible. Forming defects are reflected in three feature points of the bottom-die signals: the initial point, the inflection point, and the peak point. A novel PVDF piezoelectric force sensor, simple in construction and convenient to install, is designed. The sensitivity of the PVDF force sensor is calculated, and its characteristics are analyzed by FEM. The PVDF piezoelectric force sensor is fabricated to acquire the actual load signals in the cold-heading process and is calibrated by a special device. The on-line monitoring measurement system is built. The characteristics of the actual signals recognized by the learning and identification algorithm are consistent with the simulation results. Identification of actual signals shows that the timing-difference values of all feature points for qualified products do not exceed ±6 ms, and the amplitude-difference values are less than ±3%. The calibration and application experiments show that the PVDF force sensor has good static and dynamic performance and is suitable for dynamic measurement of the upsetting force. It greatly improves the automation level and machining precision. With the damage-identification method, which depends on the steel grade, the equipment capacity factor has been improved to 90%.

  2. Use of a Closed-Loop Tracking Algorithm for Orientation Bias Determination of an S-Band Ground Station

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.; Piasecki, Marie T.; Schrage, Dean S.

    2015-01-01

    The Space Communications and Navigation (SCaN) Testbed project completed installation and checkout testing of a new S-Band ground station at the NASA Glenn Research Center in Cleveland, Ohio in 2015. As with all ground stations, a key alignment process must be conducted to obtain offset angles in azimuth (AZ) and elevation (EL). In telescopes with AZ-EL gimbals, this is normally done with a two-star alignment process, where telescope-based pointing vectors are derived from catalogued locations with the AZ-EL bias angles derived from the pointing vector difference. For an antenna, the process is complicated without an optical asset. For the present study, the solution was to utilize the gimbal control algorithms closed-loop tracking capability to acquire the peak received power signal automatically from two distinct NASA Tracking and Data Relay Satellite (TDRS) spacecraft, without a human making the pointing adjustments. Briefly, the TDRS satellite acts as a simulated optical source and the alignment process proceeds exactly the same way as a one-star alignment. The data reduction process, which will be discussed in the paper, results in two bias angles which are retained for future pointing determination. Finally, the paper compares the test results and provides lessons learned from the activity.

  3. Human intronless genes: Functional groups, associated diseases, evolution, and mRNA processing in absence of splicing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grzybowska, Ewa A., E-mail: ewag@coi.waw.pl

    2012-07-20

    Highlights: ► Functional characteristics of intronless genes (IGs). ► Diseases associated with IGs. ► Origin and evolution of IGs. ► mRNA processing without splicing. -- Abstract: Intronless genes (IGs) constitute approximately 3% of the human genome. Human IGs are essentially different in evolution and functionality from the IGs of unicellular eukaryotes, which represent the majority in their genomes. Functional analysis of IGs has revealed a massive over-representation of signal transduction genes and genes encoding regulatory proteins important for growth, proliferation, and development. IGs also often display tissue-specific expression, usually in the nervous system and testis. These characteristics translate into IG-associated diseases, mainly neuropathies, developmental disorders, and cancer. IGs represent recent additions to the genome, created mostly by retroposition of processed mRNAs with retained functionality. Processing, nuclear export, and translation of these mRNAs should be hampered dramatically by the lack of splice factors, which normally tightly cover mature transcripts and govern their fate. However, natural IGs manage to maintain satisfactory expression levels. Different mechanisms by which IGs solve the problem of mRNA processing and nuclear export are discussed here, along with their possible impact on reporter studies.

  4. Hierarchical species distribution models

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
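
    The unifying point process view has a compact computational core: with a log-linear intensity, the inhomogeneous Poisson point process log-likelihood is a sum over observed points minus an intensity integral, commonly approximated on a grid. The sketch below is a generic illustration of that device with simulated covariates and hypothetical names, not code from the paper.

```python
import numpy as np

def ipp_loglik(beta, X_cells, obs_cells, cell_area):
    """Discretized inhomogeneous Poisson point process log-likelihood.

    Log-intensity is linear in the cell covariates X_cells; observed points
    fall in cells obs_cells; the intensity integral is approximated by a sum
    over all grid cells (standard numerical-quadrature device).
    """
    log_lam = X_cells @ beta
    return log_lam[obs_cells].sum() - cell_area * np.exp(log_lam).sum()

# Hypothetical demo: 1000 grid cells, intercept plus one covariate (e.g., elevation).
rng = np.random.default_rng(7)
X = np.column_stack([np.ones(1000), rng.standard_normal(1000)])
true_beta = np.array([-3.0, 1.0])
counts = rng.poisson(np.exp(X @ true_beta) * 0.1)        # cell_area = 0.1
obs = np.repeat(np.arange(1000), counts)                 # cell index of each point
print(ipp_loglik(true_beta, X, obs, 0.1))
```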

  5. Steelmaking process control using remote ultraviolet atomic emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Arnold, Samuel

    Steelmaking in North America is a multi-billion-dollar industry that has faced tremendous economic and environmental pressure over the past few decades. Fierce competition has driven steel manufacturers to improve process efficiency through the development of real-time sensors to reduce operating costs. In particular, much attention has been focused on end-point detection through furnace off-gas analysis. Typically, off-gas analysis is done with extractive sampling and gas analyzers such as non-dispersive infrared (NDIR) sensors. Passive emission spectroscopy offers a more attractive approach to end-point detection, as the equipment can be set up remotely. Using high-resolution UV spectroscopy and applying sophisticated emission-line detection software, a correlation was observed between metal emissions and the process end point during field trials. This correlation indicates a relationship between the metal emissions and the status of a steelmaking melt which can be used to improve overall process efficiency.

  6. A Comparison of Crater-Size Scaling and Ejection-Speed Scaling During Experimental Impacts in Sand

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Cintala, M. J.; Johnson, M. K.

    2014-01-01

    Non-dimensional scaling relationships are used to understand various cratering processes including final crater sizes and the excavation of material from a growing crater. The principal assumption behind these scaling relationships is that these processes depend on a combination of the projectile's characteristics, namely its diameter, density, and impact speed. This simplifies the impact event into a single point-source. So long as the process of interest is beyond a few projectile radii from the impact point, the point-source assumption holds. These assumptions can be tested through laboratory experiments in which the initial conditions of the impact are controlled and resulting processes measured directly. In this contribution, we continue our exploration of the congruence between crater-size scaling and ejection-speed scaling relationships. In particular, we examine a series of experimental suites in which the projectile diameter and average grain size of the target are varied.

  7. 12 CFR 1815.105 - Major decision points.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ENVIRONMENTAL QUALITY § 1815.105 Major decision points. (a) The possible environmental effects of an Application... decisionmaking process: (1) Preliminary approval stage, at which point applications are selected for funding; and (2) Final approval and funding stage. (b) Environmental review shall be integrated into the...

  8. 12 CFR 1815.105 - Major decision points.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... ENVIRONMENTAL QUALITY § 1815.105 Major decision points. (a) The possible environmental effects of an Application... decisionmaking process: (1) Preliminary approval stage, at which point applications are selected for funding; and (2) Final approval and funding stage. (b) Environmental review shall be integrated into the...

  9. 12 CFR 1815.105 - Major decision points.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ENVIRONMENTAL QUALITY § 1815.105 Major decision points. (a) The possible environmental effects of an Application... decisionmaking process: (1) Preliminary approval stage, at which point applications are selected for funding; and (2) Final approval and funding stage. (b) Environmental review shall be integrated into the...

  10. Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2017-12-01

    An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free-space optical (FSO) channel, considering the effect of pointing error between the transmitter and the receiver. The analysis is carried out for an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) in the presence of pointing error. The results are evaluated numerically in terms of the signal-to-noise-plus-multi-access-interference (MAI) ratio, BER, and power penalty due to pointing error. The OCDMA FSO system is found to be highly affected by pointing error, with significant power penalties at BERs of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB for a normalized pointing error of 1.4 with 16 users and a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.

  11. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    NASA Astrophysics Data System (ADS)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

    Sheet metal forming is one of the major manufacturing industries, building numerous parts for the aerospace, automotive, and medical industries. Due to high demand in the vehicle industry on the one hand and environmental regulations requiring lower fuel consumption on the other, researchers are devising methods to build these parts with energy-efficient sheet metal forming processes, instead of the conventional punch and die, to achieve lightweight parts. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single-point tool incrementally forces one point of the sheet metal at a time into the plastic deformation zone. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high-strength low-alloy steel formed by SPIF with spiral and concentric tool paths. SPIF numerical simulations were modeled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while cups formed by SPIF surpassed the limit for both depths with both profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.

  12. 7 CFR 4274.344 - Filing and processing applications for loans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... full years: (i) At least 1 but less than 3 years—5 points; (ii) At least 3 but less than 5 years—10 points; (iii) At least 5 but less than 10 years—20 points; or (iv) 10 or more years—30 points. (5... ultimate recipients' projects. The amount of funds from other sources will average: (A) At least 10% but...

  13. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data pose great challenges for database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytic, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach, in which all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances left to be processed decreases exponentially with the number of tree levels visited. This leads to a proof of a time complexity lower than the quadratic time needed for the brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options for building the tree. PMID:21804753
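
    For reference, the quadratic baseline these algorithms improve on is the brute-force SDH: compute every pairwise distance and bin it. The NumPy sketch below implements that baseline with illustrative parameters; the dual-tree algorithm avoids most of this work by resolving whole node pairs whose distance range falls within a single bucket.

```python
import numpy as np

def sdh_bruteforce(points, bucket_width, n_buckets):
    """Brute-force spatial distance histogram: all O(N^2) pairwise distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)          # count each unordered pair once
    hist, _ = np.histogram(d[iu], bins=n_buckets,
                           range=(0.0, bucket_width * n_buckets))
    return hist

pts = np.random.default_rng(8).uniform(0, 100, size=(2000, 3))
print(sdh_bruteforce(pts, bucket_width=5.0, n_buckets=40))
```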

  14. Semiautomated skeletonization of the pulmonary arterial tree in micro-CT images

    NASA Astrophysics Data System (ADS)

    Hanger, Christopher C.; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.

    2001-05-01

    We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel. The computer replaces this point with the newly computed point. A second back-projection is calculated around the original point, orthogonal to a vector connecting the newly calculated first point and the user-determined second point. The darkest pixel within this reconstruction is determined, and the computer replaces the second point with the XYZ coordinates of that pixel. Following a vector based on a moving average of previously determined 3-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The computer estimates the vessel diameter along the set of previously determined points using a method similar to the full-width half-maximum algorithm. On all subsequent vessels, the process works the same way, except that at each point the distances between the current point and all previously determined points along different vessels are computed. If a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.

  15. Linear response theory and transient fluctuation relations for diffusion processes: a backward point of view

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Tong, Huan; Ma, Rui; Ou-Yang, Zhong-can

    2010-12-01

    A formal apparatus is developed to unify derivations of the linear response theory and a variety of transient fluctuation relations for continuous diffusion processes from a backward point of view. The basis is a perturbed Kolmogorov backward equation and the path integral representation of its solution. We find that these exact transient relations could be interpreted as a consequence of a generalized Chapman-Kolmogorov equation, which intrinsically arises from the Markovian characteristic of diffusion processes.
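
    As a reminder of the standard object the abstract builds on (written here in its common textbook form, not copied from the paper), the Kolmogorov backward equation for a one-dimensional diffusion dX_t = b(X_t) dt + σ(X_t) dW_t reads:

```latex
% u(x,s) = E[ f(X_t) | X_s = x ] solves the backward equation in the
% backward time variable s <= t, with terminal condition u(x,t) = f(x):
\[
  \partial_s u(x,s) + b(x)\,\partial_x u(x,s)
    + \tfrac{1}{2}\,\sigma^2(x)\,\partial_x^2 u(x,s) = 0 ,
  \qquad u(x,t) = f(x).
\]
```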

  16. Use of Isobestic and Isoemission Points in Absorption and Luminescence Spectra for Study of the Transformation of Radiation Defects in Lithium Fluoride

    NASA Astrophysics Data System (ADS)

    Voitovich, A. P.; Kalinov, V. S.; Stupak, A. P.; Runets, L. P.

    2015-03-01

    Isobestic and isoemission points are recorded in the combined absorption and luminescence spectra of two types of radiation defects involved in complex processes consisting of several simultaneous parallel and sequential reactions. These points are observed if the sum of two terms, each formed by the product of the concentration of the corresponding defect and a characteristic integral coefficient associated with it, remains constant. The complicated processes involved in the transformation of radiation defects in lithium fluoride are studied using these points. It is found that the ratio of the changes in the concentrations of one of the components and the reaction product remains constant in the course of several simultaneous reactions.

  17. Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated to be an accurate, automatic, and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have different features (such as point density, distribution, and complexity). Some filtering algorithms for airborne LiDAR data have been used directly on mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm on mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, yielding total errors of 0.44%, 0.77%, and 1.20%, respectively. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corvellec, Herve, E-mail: herve.corvellec@ism.lu.se; Bramryd, Torleif

    Highlights: ► Swedish municipally owned waste management companies are active on political, material, technical, and commercial markets. ► These markets differ in kind and their demands follow different logics. ► These markets affect the public service, processing, and marketing of Swedish waste management. ► Articulating these markets is a strategic challenge for Swedish municipally owned waste management. - Abstract: This paper describes how the business model of two leading Swedish municipally owned solid waste management companies exposes them to four different but related markets: a political market in which their legitimacy as an organization is determined; a waste-as-material market that determines their access to waste as a process input; a technical market in which these companies choose what waste processing technique to use; and a commercial market in which they market their products. Each of these markets has a logic of its own. Managing these logics and articulating the interrelationships between these markets is a key strategic challenge for these companies.

  19. A technique for processing of planetary images with heterogeneous characteristics for estimating geodetic parameters of celestial bodies with the example of Ganymede

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.

    2016-09-01

    A new technique for the generation of coordinate control point networks based on photogrammetric processing of heterogeneous planetary images (obtained at different times and scales, with different illumination, or with oblique views) is developed. The technique is verified on an example processing the heterogeneous information obtained by remote sensing of Ganymede by the Voyager-1, Voyager-2, and Galileo spacecraft. Using this technique, the first 3D control point network for Ganymede is formed; the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodetic parameters of the body (axis sizes) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede at different levels of detail (Zubarev et al., 2015b).

  20. The roles of the convex hull and the number of potential intersections in performance on visually presented traveling salesperson problems.

    PubMed

    Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter

    2003-10-01

    The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.

  1. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increasing processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be created quickly using an inherent neighborhood structure that is established during the scanning process, which samples at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as an additional source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
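
    A rough sketch of the panorama-layer idea follows: the scan's native (elevation x azimuth) grid is treated as a multi-channel image and segmented, with labels mapping back to the 3D points by pixel index. The layer normalization and the choice of Felzenszwalb segmentation from scikit-image (>= 0.19) are our assumptions, not necessarily the authors' exact pipeline.

    ```python
    # Minimal sketch of panorama-based scan segmentation, assuming the scan is
    # already organized on its native (elevation x azimuth) angular grid.
    import numpy as np
    from skimage.segmentation import felzenszwalb

    def segment_scan(range_img, intensity_img, normal_img):
        """range_img, intensity_img: (H, W); normal_img: (H, W, 3)."""
        def rescale(x):  # map each layer to [0, 1]
            x = np.nan_to_num(np.asarray(x, dtype=float))
            return (x - x.min()) / (np.ptp(x) + 1e-12)

        stack = np.dstack([rescale(range_img), rescale(intensity_img),
                           (normal_img + 1.0) / 2.0])          # (H, W, 5)
        return felzenszwalb(stack, scale=100, sigma=0.8,
                            min_size=50, channel_axis=-1)

    # Each pixel (i, j) is a fixed (elevation, azimuth) pair, so the label
    # image maps back to the 3D point cloud by the same (i, j) indexing.
    ```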

  2. A study on the evaporation process with multiple point-sources

    NASA Astrophysics Data System (ADS)

    Jun, Sunghoon; Kim, Minseok; Kim, Suk Han; Lee, Moon Yong; Lee, Eung Ki

    2013-10-01

    In Organic Light Emitting Display (OLED) manufacturing, there is a need to enlarge the mother glass substrate to raise productivity and enable OLED TV. The larger the glass substrate, the more difficult it is to establish a uniform thickness profile of the organic thin-film layer in the vacuum evaporation process. In this paper, a multiple point-source evaporation process is proposed to deposit a uniform organic layer. Using this method, a uniformity of 3.75% was achieved along a 1,300 mm length of Gen. 5.5 glass substrate (1300 × 1500 mm2).
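
    For intuition, here is a minimal sketch of how thickness contributions from several point sources can be superposed, assuming an ideal point-source deposition law (thickness proportional to cos(theta)/d^2) over a flat substrate; the source spacing and height are illustrative, not the paper's Gen. 5.5 configuration.

    ```python
    # Minimal sketch: superpose ideal point sources, thickness ~ cos(theta)/d^2
    # = h / (h^2 + r^2)^(3/2), over a flat substrate and report non-uniformity.
    import numpy as np

    def thickness_profile(x, source_xs, h):
        """Relative film thickness along substrate coordinate x (mm)."""
        t = np.zeros_like(x, dtype=float)
        for xs in source_xs:
            t += h / (h**2 + (x - xs)**2) ** 1.5   # point-source law
        return t

    x = np.linspace(-650.0, 650.0, 1301)           # 1300 mm substrate length
    sources = np.linspace(-500.0, 500.0, 6)        # six point sources (assumed)
    t = thickness_profile(x, sources, h=400.0)
    print(f"non-uniformity: {100 * (t.max() - t.min()) / (t.max() + t.min()):.2f} %")
    ```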

  3. Safety Analysis of Soybean Processing for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Hentges, Dawn L.

    1999-01-01

    Soybean (cv. Hoyt) is one of the crops planned for food production within the Advanced Life Support System Integration Testbed (ALSSIT), a proposed habitat simulation for long duration lunar/Mars missions. Soybeans may be processed into a variety of food products, including soymilk, tofu, and tempeh. Due to the closed environmental system and the importance of crew health maintenance, food safety is a primary concern on long duration space missions. Identification of the food safety hazards and critical control points associated with the closed ALSSIT system is essential for the development of safe food processing techniques and equipment. A Hazard Analysis Critical Control Point (HACCP) model was developed to reflect proposed production and processing protocols for ALSSIT soybeans. Soybean processing was placed in the type III risk category. During the processing of ALSSIT-grown soybeans, critical control points were identified to control microbiological hazards, particularly mycotoxins, and chemical hazards from antinutrients. Critical limits were suggested at each CCP. Food safety recommendations regarding the hazards and risks associated with growing, harvesting, and processing soybeans; biomass management; and the use of multifunctional equipment were made in consideration of the limitations and constraints of the closed ALSSIT.

  4. Vacuum pull down method for an enhanced bonding process

    DOEpatents

    Davidson, James C.; Balch, Joseph W.

    1999-01-01

    A process for effectively bonding substrates of arbitrary size or shape. The process incorporates vacuum pull-down techniques to ensure uniform surface contact during the bonding process. The essence of the process for bonding substrates such as glass, plastic, or alloys, which have a moderate melting point with a gradual softening point curve, is the application of an active vacuum source to evacuate interstices between the substrates while at the same time providing a positive force to hold the parts to be bonded in contact. This enables increasing the temperature of the bonding process to ensure that the softening point has been reached and that small void areas are filled and come in contact with the opposing substrate. The process is most effective where at least one of the two plates or substrates contains channels or grooves that can be used to apply vacuum between the plates or substrates during the thermal bonding cycle. It is also beneficial to provide a vacuum groove or channel near the perimeter of the plates or substrates to ensure bonding of the perimeter and to reduce the unbonded regions inside the interior region of the plates or substrates.

  5. Determination of end point of primary drying in freeze-drying process control.

    PubMed

    Patel, Sajal M; Doen, Takayuki; Pikal, Michael J

    2010-03-01

    Freeze-drying is a relatively expensive process requiring long processing times, and hence one of the key objectives during freeze-drying process development is to minimize the primary drying time, the longest of the three steps in freeze-drying. However, increasing the shelf temperature into secondary drying before all of the ice is removed from the product will likely cause collapse or eutectic melt. Thus, from both a product quality and a process economics standpoint, it is critical to detect the end of primary drying. Experiments were conducted with 5% mannitol and 5% sucrose as model systems. The apparent end point of primary drying was determined by comparative pressure measurement (i.e., Pirani vs. MKS Baratron), dew point, Lyotrack (gas plasma spectroscopy), water concentration from tunable diode laser absorption spectroscopy, condenser pressure, pressure rise test (manometric temperature measurement or variations of this method), and product thermocouples. Vials were pulled out from the drying chamber using a sample thief during late primary and early secondary drying to determine percent residual moisture either gravimetrically or by Karl Fischer titration, and the cake structure was examined visually for melt-back, collapse, and retention of cake structure at the apparent end point of primary drying (i.e., onset, midpoint, and offset). By far, the Pirani is the best choice of the methods tested for evaluating the end point of primary drying. It is also a batch technique that is cheap, steam sterilizable, and easy to install without requiring any modification to the existing dryer.
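
    A minimal sketch of the comparative pressure logic: a nitrogen-calibrated Pirani gauge over-reads when the chamber gas is mostly water vapor, so the Pirani/Baratron ratio falls toward unity as ice sublimation ends. The threshold fractions used below to mark onset, midpoint, and offset are illustrative assumptions, not the paper's calibrated values.

    ```python
    # Minimal sketch of the comparative pressure end-point criterion.
    import numpy as np

    def primary_drying_endpoint(t, p_pirani, p_baratron):
        """t: times; pressures as arrays.  Returns (onset, midpoint, offset)."""
        ratio = np.asarray(p_pirani, float) / np.asarray(p_baratron, float)
        hi = np.median(ratio[: len(ratio) // 4])   # early plateau level
        lo = ratio[-10:].mean()                    # final baseline level

        def first_below(level):                    # assumes the ratio does drop
            return t[np.argmax(ratio < level)]

        onset = first_below(hi - 0.10 * (hi - lo))
        midpoint = first_below(0.5 * (hi + lo))
        offset = first_below(lo + 0.05 * (hi - lo))
        return onset, midpoint, offset
    ```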

  6. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    NASA Astrophysics Data System (ADS)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images, and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
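
    The coarse matching stage can be sketched as follows, assuming matched landmark pairs (e.g., from the 16-segment nomenclature) are already identified; scipy's RBFInterpolator stands in here for the paper's RBF morphing, and the fine projection stage is only described in comments.

    ```python
    # Minimal sketch of coarse RBF morphing of a template mesh onto a target.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def coarse_match(template_vertices, src_landmarks, dst_landmarks):
        """Warp all (N, 3) template vertices so landmarks hit their targets."""
        warp = RBFInterpolator(src_landmarks, dst_landmarks,
                               kernel="thin_plate_spline")
        return warp(template_vertices)

    # The fine matching phase would then project each morphed vertex onto the
    # target surface (e.g., nearest point on the target mesh) and smooth.
    ```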

  7. Point process analyses of variations in smoking rate by setting, mood, gender, and dependence

    PubMed Central

    Shiffman, Saul; Rathbun, Stephen L.

    2010-01-01

    The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683

  8. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes-with respect to physical randomness-based measures of statistical heterogeneity-is characterized by exponential Poissonian intensities.
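
    For concreteness, a Paretian Poisson process with mean measure c·x^(-alpha) above level x (power-law intensity c·alpha·x^(-alpha-1)) can be sampled by the standard transformation of a unit-rate Poisson process; the sketch below, with assumed parameters c and alpha, returns the points in decreasing order.

    ```python
    # Minimal sampler using X_k = (c / Gamma_k)**(1/alpha), where Gamma_k are
    # arrival times of a unit-rate Poisson process; truncated at x_min since
    # only finitely many points exceed any positive level.
    import numpy as np

    def sample_paretian_poisson(c, alpha, x_min, rng):
        points, gamma = [], 0.0
        while True:
            gamma += rng.exponential(1.0)        # next unit-rate arrival
            x = (c / gamma) ** (1.0 / alpha)     # k-th largest point
            if x < x_min:
                return np.array(points)
            points.append(x)

    pts = sample_paretian_poisson(10.0, 1.5, 0.01, np.random.default_rng(0))
    print(len(pts), pts[:3])                     # points in decreasing order
    ```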

  9. Assessment of Photogrammetry Structure-from-Motion Compared to Terrestrial LiDAR Scanning for Generating Digital Elevation Models. Application to the Austre Lovéenbreen Polar Glacier Basin, Spitsbergen 79°N

    NASA Astrophysics Data System (ADS)

    Tolle, F.; Friedt, J. M.; Bernard, É.; Prokop, A.; Griselin, M.

    2014-12-01

    A Digital Elevation Model (DEM) is a key tool for analyzing spatially dependent processes, including snow accumulation on slopes or glacier mass balance. Acquiring DEMs within short time intervals provides new opportunities to evaluate such phenomena at daily to seasonal rates. DEMs are usually generated from satellite imagery, aerial photography, airborne and ground-based LiDAR, and GPS surveys. In addition to these classical methods, we consider another alternative for periodic DEM acquisition with lower logistics requirements: digital processing of ground-based, oblique-view digital photography. Such a dataset, acquired using commercial off-the-shelf cameras, provides the source for generating elevation models using Structure from Motion (SfM) algorithms. Sets of pictures of the same structure taken from various points of view are acquired. Selected features are identified on the images and allow for the reconstruction of the three-dimensional (3D) point cloud after computing the camera positions and optical properties. This point cloud, generated in an arbitrary coordinate system, is converted to an absolute coordinate system either by adding constraints from Ground Control Points (GCPs) or by including the (GPS) positions of the cameras in the processing chain. We selected the open-source digital signal processing library provided by the French Geographic Institute (IGN), called MicMac, for its fine processing granularity and the ability to assess the quality of each processing step. Although operating in snow-covered environments appears challenging due to the lack of relevant features, we observed that enough reference points could be identified for 3D reconstruction. While the harsh climatic environment of the Arctic region considered (Ny Alesund area, 79°N) is not a problem for SfM, the low-lying spring sun and the cast shadows appear as a limitation because of the lack of color dynamics in the digital cameras we used. A detailed understanding of the processing steps is mandatory during the image acquisition phase: compliance with acquisition rules that reduce digital processing errors helps minimize the uncertainty of the point cloud's absolute position in its coordinate system. 3D models from SfM are compared with terrestrial LiDAR acquisitions for resolution assessment.

  10. Obesity Prevention at the Point of Purchase

    PubMed Central

    Cohen, Deborah A.; Lesser, Lenard I.

    2017-01-01

    The point of purchase is when people may make poor and impulsive decisions about what and how much to buy and consume. Since point of purchase strategies frequently work through non-cognitive processes, people are often unable to recognize and resist them. Because people lack insight into how marketing practices interfere with their ability to routinely eat healthy, balanced diets, public health entities should protect consumers from point of purchase strategies. We describe four point of purchase policy options including standardized portion sizes; standards for meals that are sold as a bundle, e.g. “combo meals”; placement and marketing restrictions on highly processed low-nutrient foods; and explicit warning labels. Adoption of such policies could contribute significantly to the prevention of obesity and diet-related chronic diseases. We also discuss how the policies could be implemented, along with who might favor or oppose them. Many of the policies can be implemented locally, while preserving consumer choice. PMID:26910361

  11. The Improvement of the Closed Bounded Volume (CBV) Evaluation Methods to Compute a Feasible Rough Machining Area Based on Faceted Models

    NASA Astrophysics Data System (ADS)

    Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos

    2017-06-01

    Rough machining is aimed at shaping a workpiece towards its final form. This process takes up a large proportion of the total machining time because it removes the bulk of the material. For certain models, rough machining has limitations, especially on surfaces such as those of turbine blades and impellers. CBV evaluation is one of the concepts used to detect areas admissible in the machining process. While previous research detected the CBV area using a pair of normal vectors, in this research the process is simplified by detecting the CBV area with a slicing line for each point cloud formed. The simulation comprises three steps: 1. triangulation of the CAD design model, 2. generation of CC points from the point cloud, 3. application of the slicing line method to evaluate each point cloud position (under CBV and outside CBV). The result of this evaluation method can be used as a tool for orientation set-up at each CC point position of feasible areas in rough machining.

  12. Succinonitrile Purification Facility

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Succinonitrile (SCN) Purification Facility provides succinonitrile and succinonitrile alloys to several NRA-selected investigations for flight and ground research at various levels of purity. The purification process employed includes both distillation and zone refining. Once the appropriate purification process is completed, samples are characterized to determine the liquidus and/or solidus temperature, which is then related to sample purity. The lab has various methods for measuring these temperatures with accuracies in the millikelvin to tenths-of-millikelvin range. The ultra-pure SCN produced in our facility is indistinguishable from the standard material provided by NIST to well within the stated ±1.5 mK of the NIST triple point cells. In addition to delivering material to various investigations, our current activities include process improvement, characterization of impurities, and triple point cell design and development. The purification process is being evaluated for each of the four vendors to determine the efficacy of each purification step. We are also collecting samples of the remainder from distillation and zone refining for analysis of the constituent impurities. The large triple point cells developed will contain SCN with a melting point of 58.0642 °C ± 1.5 mK for use as a calibration standard for Standard Platinum Resistance Thermometers (SPRTs).

  13. Advanced mobility handover for mobile IPv6 based wireless networks.

    PubMed

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an Advanced Mobility Handover (AMH) scheme in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with other corresponding nodes without a care-of-address during the roaming process. During the first round of the AMH process, the home agent (HA) uniquely identifies the IPv6 address of each MN by means of a developed MN-ID field, which serves as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP, and is temporarily saved in a developed table inside the AP. When the AMH scheme is employed, the handover process in the network layer is performed prior to its default time. That is, the mobility handover process in the network layer is initiated by a developed AMH trigger message sent to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. The mathematical analyses and simulation results show that the proposed scheme performs better than the existing approaches.

  14. Effect of deposition rate on melting point of copper film catalyst substrate at atomic scale

    NASA Astrophysics Data System (ADS)

    Marimpul, Rinaldo; Syuhada, Ibnu; Rosikhin, Ahmad; Winata, Toto

    2018-03-01

    The annealing of a copper film catalyst substrate was studied by molecular dynamics simulation. The copper film catalyst substrate was produced using the thermal evaporation method. The annealing process was limited to the nanosecond range in order to observe the mechanism at the atomic scale. We found that the deposition rate affected the melting point of the catalyst substrate. A change in the crystalline structure of the copper atoms was observed before the melting point was reached. The optimum annealing temperature was determined so as to obtain the highest percentage of fcc structure in the copper film catalyst substrate.

  15. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  16. Instantaneous nonlinear assessment of complex cardiovascular dynamics by Laguerre-Volterra point process models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo

    2013-01-01

    We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model first-order moment, respectively. Here, these measures are evaluated on heartbeat series coming from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the on-line repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.

  17. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1973-01-01

    With the successful flight of the ERTS-1 satellite, orbital height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected and clearly definable on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.

  18. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.
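
    As an illustration of the data-parallel split behind one of the listed tasks, regular-grid generation, here is a minimal tile-parallel rasterizer; the paper's implementation targets graphics processors, so Python multiprocessing below only mimics the decomposition, and all names and sizes are ours.

    ```python
    # Minimal tile-parallel sketch of regular-grid (mean-elevation) generation.
    import numpy as np
    from multiprocessing import Pool

    CELL = 1.0                                    # grid cell size (assumed)

    def rasterize_tile(args):
        """Mean elevation per cell for one tile of (x, y, z) points."""
        pts, (x0, y0), shape = args
        ij = ((pts[:, :2] - (x0, y0)) // CELL).astype(int)
        acc, cnt = np.zeros(shape), np.zeros(shape)
        np.add.at(acc, (ij[:, 0], ij[:, 1]), pts[:, 2])
        np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)
        with np.errstate(invalid="ignore"):
            return acc / cnt                      # NaN where a cell is empty

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        tiles = [(rng.uniform(0, 100, (50_000, 3)), (0.0, 0.0), (100, 100))
                 for _ in range(8)]
        with Pool() as pool:
            grids = pool.map(rasterize_tile, tiles)   # one grid per tile
        print(len(grids), grids[0].shape)
    ```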

  19. Conceptualising and managing trade-offs in sustainability assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison-Saunders, Angus, E-mail: A.Morrison-Saunders@murdoch.edu.au; School of Environmental Science, Murdoch University; Pope, Jenny

    One of the defining characteristics of sustainability assessment as a form of impact assessment is that it provides a forum for the explicit consideration of the trade-offs that are inherent in complex decision-making processes. Few sustainability assessments have achieved this goal though, and none has considered trade-offs in a holistic fashion throughout the process. Recent contributions such as the Gibson trade-off rules have significantly progressed thinking in this area by suggesting appropriate acceptability criteria for evaluating substantive trade-offs arising from proposed development, as well as process rules for how evaluations of acceptability should occur. However, there has been negligible uptake of these rules in practice. Overall, we argue that there is inadequate consideration of trade-offs, both process and substantive, throughout the sustainability assessment process, and insufficient consideration of how process decisions and compromises influence substantive outcomes. This paper presents a framework for understanding and managing both process and substantive trade-offs within each step of a typical sustainability assessment process. The framework draws together previously published literature and offers case studies that illustrate aspects of the practical application of the framework. The framing and design of sustainability assessment are vitally important, as process compromises or trade-offs can have substantive consequences in terms of the sustainability outcomes delivered, with the choice of alternatives considered being a particularly significant determinant of substantive outcomes. The demarcation of acceptable from unacceptable impacts is a key aspect of managing trade-offs. Offsets can be considered as a form of trade-off within a category of sustainability, utilised to enhance preferred alternatives once conditions of impact acceptability have been met. In this way they may enable net gains to be delivered; another imperative for progress to sustainability. Understanding the nature and implications of trade-offs within sustainability assessment is essential to improving practice. - Highlights: ► A framework for understanding trade-offs in sustainability assessment is presented. ► Trade-offs should be considered as early as possible in any sustainability assessment process. ► Demarcation of acceptable from unacceptable impacts is needed for effective trade-off management. ► Offsets in place, time or kind can ensure and attain a net benefit outcome overall. ► Gibson's trade-off rules provide useful acceptability criteria and process guidance.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdursky, Nadav; Gazit, Ehud; Rosenman, Gil, E-mail: gilr@eng.tau.ac.il

    Highlights: ► We observe a lag-phase crystallization process in insulin. ► The crystallization is a result of the formation of higher order oligomers. ► The crystallization also changes the secondary structure of the protein. ► The spectroscopic signature can be used for an amyloid inhibitors assay. -- Abstract: Insulin, like other amyloid proteins, can form amyloid fibrils under certain conditions. The self-assembled aggregation process of insulin can result in a variety of conformations, starting from small oligomers, going through various types of protofibrils, and finishing with bundles of fibrils. One of the most common consensuses among the various self-assembly processes suggested in the literature is the formation of an early-stage nucleus conformation. Here we present an additional insight into the self-assembly process of insulin. We show that at the early lag phase of the process (prior to fibril formation) the insulin monomers self-assemble into ordered nanostructures. The most notable feature of this early self-assembly process is the formation of nanocrystalline nucleus regions with a strongly bound electron-hole confinement, which also change the secondary structure of the protein. Each step in the self-assembly process is characterized by an optical spectroscopic signature and possesses a narrow size distribution. By following the spectroscopic signature we can measure the potency of amyloid fibril inhibitors already at the lag phase. We further demonstrate this by the use of epigallocatechin gallate, a known inhibitor of insulin fibrils. The findings can result in a spectroscopy-based application for the analysis of amyloid fibril inhibitors.

  1. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  2. 7 CFR 1980.451 - Filing and processing applications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... categories, and their point scores, are: (A) Project will contribute to the overall economic stability of the... points). (B) Project will contribute to the overall economic stability of the project area and will...'s economy (20 points). (C) Project will contribute to the overall economic stability of the project...

  3. Sharing Teaching Ideas.

    ERIC Educational Resources Information Center

    Touval, Ayana

    1992-01-01

    Introduces the concept of maximum and minimum function values as turning points on the function's graphic representation and presents a method for finding these values without using calculus. The process of utilizing transformations to find the turning point of a quadratic function is extended to find the turning points of cubic functions. (MDH)

  4. Point by Point: Adding up Motivation

    ERIC Educational Resources Information Center

    Marchionda, Denise

    2010-01-01

    Students often view their course grades as a mysterious equation of teacher-given grades, teacher-given grace, and some other ethereal components based on luck. However, giving students the power to earn points based on numerous daily/weekly assignments and attendance makes the grading process objective and personal, freeing the instructor to…

  5. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    NASA Astrophysics Data System (ADS)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, whereas helping them stay on schedule to finish their projects is a challenge.
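
    The flight-planning arithmetic mentioned above reduces to the standard ground-sample-distance relation, GSD = sensor_width × altitude / (focal_length × image_width_px); the sketch below uses illustrative camera parameters, not a specific sUAS.

    ```python
    # Minimal flight-planning arithmetic from a target ground-sample distance.
    def flight_plan(gsd_m, sensor_w_mm=13.2, focal_mm=8.8,
                    img_w_px=5472, sidelap=0.7):
        alt_m = gsd_m * focal_mm * img_w_px / sensor_w_mm
        footprint_w = gsd_m * img_w_px            # across-track footprint (m)
        line_spacing = footprint_w * (1.0 - sidelap)
        return alt_m, line_spacing

    alt, spacing = flight_plan(gsd_m=0.02)        # 2 cm/px target resolution
    print(f"fly at {alt:.0f} m; flight lines every {spacing:.0f} m")
    ```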

  6. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed contour matching is a popular kind of template matching. This paper presents a new closed contour template matching method which is suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image. The rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by distance transform. Each point on the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created by the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process, the initial RST parameters are discretized to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
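
    The verification step can be sketched as follows: build the distance image with a Euclidean distance transform, apply a candidate RST pose to the searching contour points, and read off the mean distance. Function and parameter names are ours, and points are assumed to be in (row, col) convention.

    ```python
    # Minimal sketch of distance-image construction and RST pose scoring.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def make_distance_image(contour_mask):
        """contour_mask: boolean image, True on template contour pixels."""
        return distance_transform_edt(~contour_mask)

    def score_pose(points, dist_img, scale, theta, tr, tc):
        c, s = np.cos(theta), np.sin(theta)
        R = scale * np.array([[c, -s], [s, c]])
        p = points @ R.T + np.array([tr, tc])      # RST applied to (N, 2) points
        ij = np.clip(np.rint(p).astype(int), 0, np.array(dist_img.shape) - 1)
        return dist_img[ij[:, 0], ij[:, 1]].mean() # low score = good match
    ```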

  7. 40 CFR 63.600 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... wet-process phosphoric acid process line: reactors, filters, evaporators, and hot wells; (2) Each... following emission points which are components of a superphosphoric acid process line: evaporators, hot...

  8. Realization of the Temperature Scale in the Range from 234.3 K (Hg Triple Point) to 1084.62°C (Cu Freezing Point) in Croatia

    NASA Astrophysics Data System (ADS)

    Zvizdic, Davor; Veliki, Tomislav; Grgec Bermanec, Lovorka

    2008-06-01

    This article describes the realization of the International Temperature Scale in the range from 234.3 K (mercury triple point) to 1084.62°C (copper freezing point) at the Laboratory for Process Measurement (LPM), Faculty of Mechanical Engineering and Naval Architecture (FSB), University of Zagreb. The system for the realization of the ITS-90 consists of the sealed fixed-point cells (mercury triple point, water triple point and gallium melting point) and the apparatus designed for the optimal realization of open fixed-point cells which include the gallium melting point, tin freezing point, zinc freezing point, aluminum freezing point, and copper freezing point. The maintenance of the open fixed-point cells is described, including the system for filling the cells with pure argon and for maintaining the pressure during the realization.

  9. Identifying, Assessing, and Mitigating Risk of Single-Point Inspections on the Space Shuttle Reusable Solid Rocket Motor

    NASA Technical Reports Server (NTRS)

    Greenhalgh, Phillip O.

    2004-01-01

    In the production of each Space Shuttle Reusable Solid Rocket Motor (RSRM), over 100,000 inspections are performed. ATK Thiokol Inc. reviewed these inspections to ensure a robust inspection system is maintained. The principal effort within this endeavor was the systematic identification and evaluation of inspections considered to be single-point. Single-point inspections are those accomplished on components, materials, and tooling by only one person, involving no other check. The purpose was to more accurately characterize risk and ultimately address and/or mitigate risk associated with single-point inspections. After the initial review of all inspections and identification/assessment of single-point inspections, review teams applied risk prioritization methodology similar to that used in a Process Failure Modes Effects Analysis to derive a Risk Prioritization Number for each single-point inspection. After the prioritization of risk, all single-point inspection points determined to have significant risk were provided either with risk-mitigating actions or rationale for acceptance. This effort gave confidence to the RSRM program that the correct inspections are being accomplished, that there is appropriate justification for those that remain as single-point inspections, and that risk mitigation was applied to further reduce risk of higher risk single-point inspections. This paper examines the process, results, and lessons learned in identifying, assessing, and mitigating risk associated with single-point inspections accomplished in the production of the Space Shuttle RSRM.

  10. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and the absence of ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid-body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.
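
    A simplified sketch of the underlying cost: frame k constrains the phantom point to the circle (lat_k, ax_k·cos(phi_k), ax_k·sin(phi_k)) in image coordinates, and the image-to-marker transform plus one elevation angle per frame are fitted jointly by shrinking the spread of the reconstructed points. The rotation-vector parameterization and all names are our assumptions, and this toy cost ignores gauge freedoms a careful implementation would handle.

    ```python
    # Simplified sketch of the out-of-plane calibration cost.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, poses, lat, ax):
        rvec, tcal, phis = params[:3], params[3:6], params[6:]
        Rcal = Rotation.from_rotvec(rvec).as_matrix()
        pts = []
        for (R, t), l, a, phi in zip(poses, lat, ax, phis):
            p_img = np.array([l, a * np.cos(phi), a * np.sin(phi)])
            pts.append(R @ (Rcal @ p_img + tcal) + t)  # tracker-frame point
        pts = np.array(pts)
        return (pts - pts.mean(axis=0)).ravel()        # spread about centroid

    def calibrate(poses, lat, ax):
        x0 = np.zeros(6 + len(poses))                  # crude initial guess
        return least_squares(residuals, x0, args=(poses, lat, ax)).x
    ```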

  11. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology has given rise to a growing number of applications ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007). In comparison with the other two stages, acquisition and registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud: a collection of points sampled from the surface of a 3D object. Such a point cloud can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points and thus less detail than a point cloud obtained in situ. The goal of this study was to facilitate the modeling process of a building starting from 3D laser scanner data. To this end, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box is able to rotate 360° around an axis in a corner of the wall in search of the points of other walls. In this way we can eliminate noise points, i.e., unwanted or irrelevant points. If there is an angled roof, the box can also turn around the edge between the wall and the roof. From the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building. By interior outline we mean the outline of openings such as windows or doors. This script is based on the distances between the points and on vector characteristics. Two consecutive points separated by a relatively large distance indicate the outline of an opening. Once those points are found, the interior outline can be drawn. For simple point clouds, the designed scripts are able to eliminate almost all noise points and reconstruct a CAD model.

  12. Inference from clustering with application to gene-expression microarrays.

    PubMed

    Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M

    2002-01-01

    There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
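
    The evaluation loop the toolbox describes can be sketched compactly: sample each class as its mean plus independent noise, cluster the points, and count the misassigned points after matching cluster labels to generating classes. KMeans, the Gaussian noise model, and the Hungarian label matching below are illustrative stand-ins for the toolbox's options.

    ```python
    # Minimal model-based evaluation of clustering error.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.cluster import KMeans

    def clustering_error(means, sigma, n_per_class, seed=0):
        rng = np.random.default_rng(seed)
        X = np.vstack([m + sigma * rng.standard_normal((n_per_class, len(m)))
                       for m in means])
        y = np.repeat(np.arange(len(means)), n_per_class)
        pred = KMeans(n_clusters=len(means), n_init=10,
                      random_state=seed).fit_predict(X)
        C = np.zeros((len(means), len(means)), dtype=int)  # confusion matrix
        np.add.at(C, (y, pred), 1)
        rows, cols = linear_sum_assignment(-C)             # best label match
        return 1.0 - C[rows, cols].sum() / len(y)          # error fraction

    print(clustering_error(means=[[0, 0], [3, 3]], sigma=1.0, n_per_class=100))
    ```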

  13. Magnetic topological analysis of coronal bright points

    NASA Astrophysics Data System (ADS)

    Galsgaard, K.; Madjarska, M. S.; Moreno-Insertis, F.; Huang, Z.; Wiegelmann, T.

    2017-10-01

    Context. We report on the first of a series of studies on coronal bright points which investigate the physical mechanism that generates these phenomena. Aims: The aim of this paper is to understand the magnetic-field structure that hosts the bright points. Methods: We use longitudinal magnetograms taken by the Solar Optical Telescope with the Narrowband Filter Imager. For a single case, magnetograms from the Helioseismic and Magnetic Imager were added to the analysis. The longitudinal magnetic field component is used to derive the potential magnetic fields of the large regions around the bright points. A magneto-static field extrapolation method is tested to verify the accuracy of the potential field modelling. The three-dimensional magnetic fields are investigated for the presence of magnetic null points and their influence on the local magnetic domain. Results: In nine out of ten cases the bright point resides in areas where the coronal magnetic field contains an opposite-polarity intrusion defining a magnetic null point above it. We find that X-ray bright points reside, in these nine cases, in a limited part of the projected fan-dome area, either fully inside the dome or expanding over a limited area below which typically a dominant flux concentration resides. The tenth bright point is located in a bipolar loop system without an overlying null point. Conclusions: All bright points in coronal holes and two out of three bright points in quiet-Sun regions are seen to reside in regions containing a magnetic null point. An as yet unidentified process (or processes) generates the bright points in specific regions of the fan-dome structure. The movies are available at http://www.aanda.org

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hourdequin, Marion, E-mail: Marion.Hourdequin@ColoradoCollege.edu; Department of Philosophy, Colorado College, 14 E. Cache La Poudre St., Colorado Springs, CO 80903; Landres, Peter

    Traditional mechanisms for public participation in environmental impact assessment under U.S. federal law have been criticized as ineffective and unable to resolve conflict. As these mechanisms are modified and new approaches developed, we argue that participation should be designed and evaluated not only on practical grounds of cost-effectiveness and efficiency, but also on ethical grounds based on democratic ideals. In this paper, we review and synthesize modern democratic theory to develop and justify four ethical principles for public participation: equal opportunity to participate, equal access to information, genuine deliberation, and shared commitment. We then explore several tensions that are inherent in applying these ethical principles to public participation in EIA. We next examine traditional NEPA processes and newer collaborative approaches in light of these principles. Finally, we explore the circumstances that argue for more in-depth participatory processes. While improved EIA participatory processes do not guarantee improved outcomes in environmental management, processes informed by these four ethical principles derived from democratic theory may lead to increased public engagement and satisfaction with government agency decisions. - Highlights: ► Four ethical principles based on democratic theory for public participation in EIA. ► NEPA and collaboration offer different strengths in meeting these principles. ► We explore tensions inherent in applying these principles. ► Improved participatory processes may improve public acceptance of agency decisions.

  15. Characterisation of titanium-titanium boride composites processed by powder metallurgy techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Selva Kumar, M., E-mail: sel_mcet@yahoo.co.in; Chandrasekar, P.; Chandramohan, P.

    2012-11-15

    In this work, a detailed characterisation of titanium-titanium boride composites processed by three powder metallurgy techniques, namely hot isostatic pressing (HIP), spark plasma sintering (SPS) and vacuum sintering, was conducted. Two composites with different volume percentages of titanium boride reinforcement were used for the investigation: titanium with 20% titanium boride and titanium with 40% titanium boride (by volume). Characterisation was performed using X-ray diffraction, electron probe microanalysis (energy dispersive spectroscopy and wavelength dispersive spectroscopy), image analysis and scanning electron microscopy. The characterisation results confirm the completion of the titanium boride reaction. The results reveal the presence of titanium boride reinforcement in different morphologies such as needle-shaped whiskers, short agglomerated whiskers and fine plates. The paper also discusses how mechanical properties such as microhardness, elastic modulus and Poisson's ratio are influenced by the processing techniques as well as the volume fraction of the titanium boride reinforcement. - Highlights: ► Ti-TiB composites were processed by HIP, SPS and vacuum sintering. ► The completion of the Ti-TiB₂ reaction was confirmed by XRD, SEM and EPMA studies. ► Hardness and elastic properties of Ti-TiB composites were discussed. ► Processing techniques were compared with respect to their microstructure.

  16. Online tracking of instantaneous frequency and amplitude of dynamical system response

    NASA Astrophysics Data System (ADS)

    Frank Pai, P.

    2010-05-01

    This paper presents a sliding-window tracking (SWT) method for accurate tracking of the instantaneous frequency and amplitude of an arbitrary dynamic response by processing only the three (or more) most recent data points. The Teager-Kaiser algorithm (TKA) is a well-known four-point method for online tracking of frequency and amplitude. Because finite differences are used in TKA, its accuracy is easily destroyed by measurement and/or signal-processing noise. Moreover, because TKA assumes the processed signal to be a pure harmonic, any moving average in the signal can destroy its accuracy. On the other hand, because SWT uses a constant and a pair of windowed regular harmonics to fit the data and estimate the instantaneous frequency and amplitude, the influence of any moving average is eliminated. Moreover, noise filtering is an implicit capability of SWT when more than three data points are used, and this capability increases with the number of processed data points. To compare the accuracy of SWT and TKA, the Hilbert-Huang transform is used to extract accurate time-varying frequencies and amplitudes by processing the whole data set without assuming the signal to be harmonic. Frequency and amplitude tracking of different amplitude- and frequency-modulated signals, vibrato in music, and nonlinear stationary and non-stationary dynamic signals is studied. Results show that SWT is more accurate, robust, and versatile than TKA for online tracking of frequency and amplitude.
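
    A minimal sketch of one SWT update, with a grid search over trial frequencies standing in for whatever root-finding the paper uses: fit y ≈ a + b·cos(ωt) + c·sin(ωt) to the m ≥ 3 most recent samples by linear least squares and keep the best-fitting ω; the constant a absorbs a moving average, and the amplitude estimate is sqrt(b² + c²).

    ```python
    # Minimal SWT update; trial frequencies are kept below Nyquist.
    import numpy as np

    def swt_step(t, y, freqs):
        """t, y: the m most recent samples; freqs: trial angular frequencies."""
        best = (None, None, np.inf)
        for w in freqs:
            A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            res = float(np.sum((A @ coef - y) ** 2))
            if res < best[2]:
                best = (w, np.hypot(coef[1], coef[2]), res)
        return best[:2]                    # (frequency, amplitude) estimate

    t = np.linspace(0.0, 0.2, 5)                    # 5-point window, 20 Hz rate
    y = 0.5 + 2.0 * np.sin(2 * np.pi * 7.0 * t)     # offset + 7 Hz tone
    w, amp = swt_step(t, y, np.linspace(1.0, 60.0, 1200))
    print(w / (2 * np.pi), amp)                     # ~7 Hz, ~2.0
    ```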

  17. Usability Evaluation at the Point-of-Care: A Method to Identify User Information Needs in CPOE Applications

    PubMed Central

    Washburn, Jeff; Fiol, Guilherme Del; Rocha, Roberto A.

    2006-01-01

    Point of care usability evaluation may help identify information needs that occur during the process of providing care. We describe the process of using usability-specific recording software to record Computerized Physician Order Entry (CPOE) ordering sessions on admitted adult and pediatric patients at two urban tertiary hospitals in the Intermountain Healthcare system of hospitals. PMID:17238756

  18. Microphysical Processes Affecting the Pinatubo Volcanic Plume

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia

    1996-01-01

    In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.
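
    Of the three processes named, coagulation has the simplest discrete form; below is a minimal explicit-Euler step of the Smoluchowski coagulation equation on a monomer-count grid, with a constant kernel K standing in for a realistic Brownian coagulation kernel.

    ```python
    # Minimal Smoluchowski coagulation step; bin k holds particles of k monomers.
    import numpy as np

    def coagulation_step(n, K, dt):
        """n[k-1]: number density of particles made of k monomers."""
        m = len(n)
        gain, loss = np.zeros(m), np.zeros(m)
        for i in range(1, m + 1):
            for j in range(1, m + 1):
                rate = K * n[i - 1] * n[j - 1]
                loss[i - 1] += rate                # i-mer consumed
                if i + j <= m:
                    gain[i + j - 1] += 0.5 * rate  # (i+j)-mer formed
        return n + dt * (gain - loss)

    n = np.zeros(50); n[0] = 1.0                   # start from monomers only
    for _ in range(200):
        n = coagulation_step(n, K=0.01, dt=0.1)
    print(n[:5])                                   # the distribution broadens
    ```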

  19. Predicting wildfire occurrence distribution with spatial point process models and its uncertainty assessment: a case study in the Lake Tahoe Basin, USA

    Treesearch

    Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner

    2015-01-01

    Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...

  20. Minimizing corrosion in coal liquid distillation

    DOEpatents

    Baumert, Kenneth L.; Sagues, Alberto A.; Davis, Burtron H.

    1985-01-01

    In an atmospheric distillation tower of a coal liquefaction process, tower materials corrosion is reduced or eliminated by introduction of boiling point differentiated streams to boiling point differentiated tower regions.

  1. Magnetic drops in a soft-magnetic cylinder

    NASA Astrophysics Data System (ADS)

    Hertel, Riccardo; Kirschner, Jürgen

    2004-07-01

    Magnetization reversal in a cylindrical ferromagnetic particle seems to be a simple textbook problem in magnetism. But on closer inspection, the magnetization reversal dynamics in a cylinder is far from trivial. The difficulty arises from the central axis, where the magnetization switches in a discontinuous fashion. Micromagnetic computer simulations allow for a detailed description of the evolution of the magnetic structure on the sub-nanosecond time scale. The switching process involves the injection of a magnetic point singularity (Bloch point) into the cylinder. Further point singularities may be generated and annihilated periodically during the reversal process. This results in the temporary formation of micromagnetic drops, i.e., isolated, non-reversed regions. This surprising feature of dynamic micromagnetism is due to the different mobilities of the domain wall and the Bloch point.

  2. Experimental research and numerical optimisation of multi-point sheet metal forming implementation using a solid elastic cushion system

    NASA Astrophysics Data System (ADS)

    Tolipov, A. A.; Elghawail, A.; Shushing, S.; Pham, D.; Essa, K.

    2017-09-01

    There is a growing demand for flexible manufacturing techniques that meet rapid changes in customer needs. A finite element analysis numerical optimisation technique was used to optimise the multi-point sheet forming process. Multi-point forming (MPF) is a flexible sheet metal forming technique in which the same tool can be readily changed to produce different parts. The process suffers from geometrical defects such as wrinkling and dimpling, which have been found to be the cause of the major surface quality problems. This study investigated the influence of parameters such as the elastic cushion hardness, blank holder force, coefficient of friction, cushion thickness and radius of curvature on the quality of parts formed in a flexible multi-point stamping die. For those reasons, a multi-point forming stamping process using a blank holder was carried out in order to study the effects of wrinkling, dimpling, thickness variation and forming force, with the aim of determining the optimum values of these parameters. Finite element modelling (FEM) was employed to simulate the multi-point forming of hemispherical shapes. Using the response surface method, the effects of the process parameters on wrinkling, maximum deviation from the target shape and thickness variation were investigated. The results show that an elastic cushion of suitable thickness, made of polyurethane with a hardness of Shore A90, gives the best quality. It has also been found that the application of lubrication can improve the shape accuracy of the formed workpiece. These results were compared with the numerical simulation results of the multi-point forming of hemispherical shapes using a blank holder, and it was found that a realistic cushion hardness reduces wrinkling and the maximum deviation from the target shape.

  3. Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces

    NASA Astrophysics Data System (ADS)

    Theiler, P. W.; Schindler, K.

    2012-07-01

    Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with existing algorithms for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the large number of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarsely register TLS point clouds without the need for artificial targets.
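
    A virtual tie point of the kind described is simply the common point of three non-parallel planes; a minimal sketch under the usual n·x = d plane parameterization (the example planes are illustrative):

        import numpy as np

        def plane_intersection(normals, offsets):
            """Intersect three planes n_i . x = d_i and return the common point.

            normals: (3, 3) array of plane normals, offsets: (3,) array.
            Raises numpy.linalg.LinAlgError if the planes are (near-)parallel.
            """
            return np.linalg.solve(np.asarray(normals, float),
                                   np.asarray(offsets, float))

        # Example: floor z = 0, wall x = 2 and wall y = -1 meet at (2, -1, 0).
        p = plane_intersection([[0, 0, 1], [1, 0, 0], [0, 1, 0]],
                               [0.0, 2.0, -1.0])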

  4. Theoretical model of dynamic spin polarization of nuclei coupled to paramagnetic point defects in diamond and silicon carbide

    NASA Astrophysics Data System (ADS)

    Ivády, Viktor; Szász, Krisztián; Falk, Abram L.; Klimov, Paul V.; Christle, David J.; Janzén, Erik; Abrikosov, Igor A.; Awschalom, David D.; Gali, Adam

    2015-09-01

    Dynamic nuclear spin polarization (DNP) mediated by paramagnetic point defects in semiconductors is a key resource for both initializing nuclear quantum memories and producing nuclear hyperpolarization. DNP is therefore an important process in the fields of quantum-information processing, sensitivity-enhanced nuclear magnetic resonance, and nuclear-spin-based spintronics. DNP based on optical pumping of point defects has been demonstrated by using the electron spin of the nitrogen-vacancy (NV) center in diamond and, more recently, by using divacancy and related defect spins in hexagonal silicon carbide (SiC). Here, we describe a general model for these optical DNP processes that allows the effects of many microscopic processes to be integrated. Applying this theory, we gain a deeper insight into dynamic nuclear spin polarization and the physics of diamond and SiC defects. Our results are in good agreement with experimental observations and provide a detailed and unified understanding. In particular, our findings show that the defect electron spin coherence times and excited state lifetimes are crucial factors in the entire DNP process.

  5. A new approach to criteria for health risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spickett, Jeffery, E-mail: J.Spickett@curtin.edu.au; Faculty of Health Sciences, School of Public Health, Curtin University, Perth, Western Australia; Katscherian, Dianne

    2012-01-15

    Health Impact Assessment (HIA) is a developing component of the overall impact assessment process and as such needs access to procedures that can enable more consistent approaches to the stepwise process that is now generally accepted in both EIA and HIA. The guidelines developed during this project provide a structured process, based on risk assessment procedures which use consequences and likelihood, as a way of ranking risks of adverse health outcomes from activities subjected to HIA or HIA as part of EIA. The aim is to assess the potential for both acute and chronic health outcomes. The consequences component also identifies a series of consequences for the health care system, depicted as expressions of financial expenditure and the capacity of the health system. These more specific health risk assessment characteristics should provide for a broader consideration of health consequences and a more consistent estimation of the adverse health risks of a proposed development at both the scoping and risk assessment stages of the HIA process. - Highlights: • A more objective approach to health risk assessment is provided. • An objective set of criteria for the consequences of chronic and acute impacts. • An objective set of criteria for the consequences on the health care system. • An objective set of criteria for the frequency of events that could impact on health. • The approach presented is currently being trialled in Australia.

  6. Phonological acquisition of Brazilian Portuguese by bimodal bilingual hearing children and by deaf children using cochlear implants

    PubMed Central

    Cruz, Carina Rebello; Finger, Ingrid

    2014-01-01

    This study investigates the phonological acquisition of Brazilian Portuguese (BP) by 24 bimodal bilingual hearing children with unrestricted access to Brazilian Sign Language (Libras), and by 6 deaf children using cochlear implants (CI) with restricted or unrestricted access to Libras. The children's phonological system in BP was assessed with Part A, the Naming Task, of the ABFW Child Language Test (ANDRADE et al. 2004). The results revealed that the bimodal bilingual hearing children, and the deaf CI child with unrestricted access to Libras, showed the phonological acquisition expected (normal) for their age group. Early acquisition of, and unrestricted access to, Libras may have been decisive for these children's performance on the oral test. PMID:25506105

  7. AGGREGATE SHOCKS AND HUMAN CAPITAL INVESTMENT: HIGHER EDUCATIONAL ATTAINMENT DURING THE LOST DECADE IN MEXICO

    PubMed Central

    Peña, Pablo A.

    2014-01-01

    This article documents a negative aggregate response of higher educational attainment (more than 12 years of schooling) in Mexico to the 1982-83 recession and the stagnation that followed it. The response was not homogeneous across genders, regions, and family backgrounds. Men experienced a fall in attainment, while women experienced slower growth. On average, the states with higher attainment before the shock experienced larger declines. The response across family backgrounds shows no clear pattern; however, the negative effect on attainment is observed even among siblings. The evidence suggests a demand-side story: the fall in household income appears to be the determinant of the decline/slowdown in higher educational attainment. The conclusion is that the recession and the lack of growth that followed it had a large and lasting negative effect on the formation of capabilities in Mexico. PMID:25328251

  8. Catastrophic health expenditure in Mexico and its determining factors, 2002-2014.

    PubMed

    Rodríguez-Aguilar, Román; Rivera-Peña, Gustavo

    2017-01-01

    To assess the financial protection offered by public health insurance by analyzing the percentage of households with catastrophic health expenditure (HCHE) in Mexico and its relationship with poverty status, size of locality, federal entity, insurance status and items of health spending. The Mexican National Survey of Income and Expenditures 2002-2014 was used to estimate the percentage of HCHE. Through a probit model, factors associated with the occurrence of catastrophic spending were identified. The analysis was performed using Stata-SE 12. In 2014, 2.08% of households had catastrophic health expenditure (1.82-2.34%; N = 657,474). The estimated probit model correctly classified 98.2% of HCHE (Pr (D) ≥ 0.5). Factors affecting catastrophic expenditure were affiliation, presence of chronic disease, hospitalization expenditure, rural residence and whether the household is below the food poverty line. The percentage of HCHE has decreased in recent years, improving financial protection in health. This decline seems to have stalled, preserving inequities in access to health services, especially for the rural population without affiliation to any health institution, below the food poverty line and suffering from chronic diseases. Copyright: © 2017 Secretaría de Salud

  9. Device and method for determining freezing points

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, Balakrishnan (Inventor)

    1986-01-01

    A freezing point method and device (10) are disclosed. The method and device pertain to an inflection-point technique for determining the freezing points of mixtures. In both the method and device (10), the mixture is cooled to a point below its anticipated freezing point and then warmed at a substantially linear rate. During the warming process, the rate of increase of temperature of the mixture is monitored by, for example, a thermocouple (28), with the thermocouple output signal being amplified and differentiated by a differentiator (42). The rate-of-increase data are analyzed and a peak rate of increase of temperature is identified. In the preferred device (10) a computer (22) is utilized to analyze the rate-of-increase data following the warming process. Once the maximum rate of increase of temperature is identified, the corresponding temperature of the mixture is located and earmarked as being substantially equal to the freezing point of the mixture. In the preferred device (10), the computer (22), in addition to collecting the temperature and rate-of-change data, controls a programmable power supply (14) to provide a predetermined amount of cooling and warming current to thermoelectric modules (56).
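
    A minimal sketch of the inflection-point analysis described above, numerically differentiating the warming curve and earmarking the temperature at the peak rate of increase (the light smoothing window is an assumption, standing in for the analog differentiator (42)):

        import numpy as np

        def freezing_point(time_s, temp_c, smooth=5):
            """Return the temperature at the maximum warming rate dT/dt.

            The warming-phase record is lightly smoothed before differencing
            so measurement noise does not dominate the numerical derivative.
            """
            kernel = np.ones(smooth) / smooth
            t_smooth = np.convolve(temp_c, kernel, mode="same")
            rate = np.gradient(t_smooth, time_s)   # dT/dt
            return temp_c[np.argmax(rate)]         # temperature at peak rate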

  10. Testing single point incremental forming molds for thermoforming operations

    NASA Astrophysics Data System (ADS)

    Afonso, Daniel; de Sousa, Ricardo Alves; Torcato, Ricardo

    2016-10-01

    Low-pressure polymer processing operations such as thermoforming or rotational molding use much simpler molds than high-pressure processes like injection molding. However, despite the low forces involved, mold manufacturing for these operations is still a very material-, energy- and time-consuming activity. The goal of the research is to develop and validate a method for manufacturing plastically formed sheet metal molds by the single point incremental forming (SPIF) operation for thermoforming use. Stewart-platform-based SPIF machines allow the forming of thick metal sheets, granting the required structural stiffness for the mold surface while keeping the short manufacturing lead time and low thermal inertia.

  11. Monitoring urban subsidence based on SAR interferometric point target analysis

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.

    2009-01-01

    Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It is achieved by analyzing the interferometric phases of individual point targets, which are discrete and present temporally stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets, and then addresses two key issues within the IPTA process. Firstly, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated, once a global reference point with known height correction and deformation history is chosen. Secondly, a spatial-temporal filtering scheme is proposed to further separate the atmospheric phase and nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology is conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes are analyzed, and deformation information for 3,546 point targets in the time span of 1992-2002 is generated. The IPTA-derived deformation shows very good agreement with the published result, which demonstrates that the IPTA technique can be developed into an operational tool to map ground subsidence over urban areas.

  12. Plates for vacuum thermal fusion

    DOEpatents

    Davidson, James C.; Balch, Joseph W.

    2002-01-01

    A process for effectively bonding arbitrary size or shape substrates. The process incorporates vacuum pull down techniques to ensure uniform surface contact during the bonding process. The essence of the process for bonding substrates, such as glass, plastic, or alloys, etc., which have a moderate melting point with a gradual softening point curve, involves the application of an active vacuum source to evacuate interstices between the substrates while at the same time providing a positive force to hold the parts to be bonded in contact. This enables increasing the temperature of the bonding process to ensure that the softening point has been reached and small void areas are filled and come in contact with the opposing substrate. The process is most effective where at least one of the two plates or substrates contain channels or grooves that can be used to apply vacuum between the plates or substrates during the thermal bonding cycle. Also, it is beneficial to provide a vacuum groove or channel near the perimeter of the plates or substrates to ensure bonding of the perimeter of the plates or substrates and reduce the unbonded regions inside the interior region of the plates or substrates.

  13. Quality of care for elderly patients hospitalized for pneumonia in the United States, 2006 to 2010.

    PubMed

    Lee, Jonathan S; Nsa, Wato; Hausmann, Leslie R M; Trivedi, Amal N; Bratzler, Dale W; Auden, Dana; Mor, Maria K; Baus, Kristie; Larbi, Fiona M; Fine, Michael J

    2014-11-01

    Nearly every US acute care hospital reports publicly on adherence to recommended processes of care for patients hospitalized with pneumonia. However, it remains uncertain how much performance of these process measures has improved over time or whether performance is associated with superior patient outcomes. To describe trends in processes of care, mortality, and readmission for elderly patients hospitalized for pneumonia and to assess the independent associations between processes and outcomes of care. Retrospective cohort study conducted from January 1, 2006, to December 31, 2010, at 4740 US acute care hospitals. The cohort included 1 818 979 cases of pneumonia in elderly (≥65 years), Medicare fee-for-service patients who were eligible for at least 1 of 7 pneumonia inpatient processes of care tracked by the Centers for Medicare & Medicaid Services (CMS). Annual performance rates for 7 pneumonia processes of care and an all-or-none composite of these measures; and 30-day, all-cause mortality and hospital readmission, adjusted for patient and hospital characteristics. Adjusted annual performance rates for all 7 CMS processes of care (expressed in percentage points per year) increased significantly from 2006 to 2010, ranging from 1.02 for antibiotic initiation within 6 hours to 5.30 for influenza vaccination (P < .001). All 7 measures were performed in more than 92% of eligible cases in 2010. The all-or-none composite demonstrated the largest adjusted relative increase over time (6.87 percentage points per year; P < .001) and was achieved in 87.4% of cases in 2010. Adjusted annual mortality decreased by 0.09 percentage points per year (P < .001), driven primarily by decreasing mortality in the subgroup not treated in the intensive care unit (ICU) (-0.18 percentage points per year; P < .001). Adjusted annual readmission rates decreased significantly by 0.25 percentage points per year (P < .001). All 7 processes of care were independently associated with reduced 30-day mortality, and 5 were associated with reduced 30-day readmission. Performance of processes of care for elderly patients hospitalized for pneumonia improved substantially from 2006 to 2010. Adjusted 30-day mortality declined slightly over time primarily owing to improved survival among non-ICU patients, and all individual processes of care were independently associated with reduced mortality.

  14. Comparison of different procedures to stabilize biogas formation after process failure in a thermophilic waste digestion system: Influence of aggregate formation on process stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleyboecker, A.; Liebrich, M.; Kasina, M.

    2012-06-15

    Highlights: • Mechanism of process recovery with calcium oxide. • Formation of insoluble calcium salts with long chain fatty acids and phosphate. • Adsorption of VFAs by the precipitates resulting in the formation of aggregates. • Acid uptake and phosphate release by the phosphate-accumulating organisms. • Microbial degradation of volatile fatty acids in the aggregates. - Abstract: Following a process failure in a full-scale biogas reactor, different counter measures were undertaken to stabilize the process of biogas formation, including the reduction of the organic loading rate, the addition of sodium hydroxide (NaOH), and the introduction of calcium oxide (CaO). Corresponding to the results of the process recovery in the full-scale digester, laboratory experiments showed that CaO was more capable of stabilizing the process than NaOH. While both additives were able to raise the pH to a neutral milieu (pH > 7.0), the formation of aggregates was observed particularly when CaO was used as the additive. Scanning electron microscopy investigations revealed calcium phosphate compounds in the core of the aggregates. Phosphate seemed to be released by phosphorus-accumulating organisms when volatile fatty acids accumulated. The calcium, which was charged by the CaO addition, formed insoluble salts with long chain fatty acids and caused the precipitation of calcium phosphate compounds. These aggregates were surrounded by a white layer of carbon-rich organic matter, probably consisting of volatile fatty acids. Thus, during the process recovery with CaO, the decrease in the amount of accumulated acids in the liquid phase was likely enabled by (1) the formation of insoluble calcium salts with long chain fatty acids, (2) the adsorption of volatile fatty acids by the precipitates, (3) the acid uptake by phosphorus-accumulating organisms and (4) the degradation of volatile fatty acids in the aggregates. Furthermore, this mechanism enabled a stable process performance after re-activation of biogas production. In contrast, during the counter measure with NaOH aggregate formation was only minor, resulting in a rapid process failure subsequent to the increase of the organic loading rate.

  15. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.

  16. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

    Production of a digital terrain model (DTM) is one of the most common tasks when processing photogrammetric point clouds generated from Unmanned Aerial System (UAS) imagery. The quality of a DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs have been used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures are computed, such as RMSE, median, and normalized median absolute deviation with their confidence intervals, as well as quantiles. The completeness of the DTM is an often overlooked quality parameter, but when a DTM is produced from a point cloud it should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show the great potential of DTMs produced from UAS imagery, in the sense of a detailed representation of the terrain as well as good height accuracy.
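
    A minimal sketch of the robust accuracy measures named in the abstract, computed from DTM-minus-checkpoint height errors (the 1.4826 factor scales the NMAD so it is comparable to a standard deviation under normally distributed errors):

        import numpy as np

        def dtm_accuracy(errors):
            """RMSE, median, NMAD and outer quantiles of DTM height errors."""
            e = np.asarray(errors, dtype=float)
            med = np.median(e)
            return {
                "rmse": float(np.sqrt(np.mean(e ** 2))),
                "median": float(med),
                # Normalized median absolute deviation: robust to outliers.
                "nmad": float(1.4826 * np.median(np.abs(e - med))),
                "q05_q95": (float(np.quantile(e, 0.05)),
                            float(np.quantile(e, 0.95))),
            }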

  17. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with a point cloud, we first construct horizontal grids and vertical layers to organize the point cloud data, and then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features are classified into the same class, and the point cloud corresponding to these curves is classified as well. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31%. The result can help us quickly understand the distribution of various ground objects.
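
    A minimal sketch of the pipeline as described, binning points into 3 m horizontal grids and 1 m vertical layers, forming a per-grid density curve, then PCA and K-means on the curves (the grid and layer spacings follow the abstract; everything else, including the normalization, is an illustrative assumption):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def classify_grids(xyz, cell=3.0, layer=1.0, n_components=11, k=3):
            """Cluster grid cells by their vertical point-density curves."""
            xyz = np.asarray(xyz, float)
            gx = np.floor(xyz[:, 0] / cell).astype(int)
            gy = np.floor(xyz[:, 1] / cell).astype(int)
            lz = np.floor((xyz[:, 2] - xyz[:, 2].min()) / layer).astype(int)

            curves = {}                       # one density curve per grid cell
            n_layers = lz.max() + 1
            for i in range(len(xyz)):
                c = curves.setdefault((gx[i], gy[i]), np.zeros(n_layers))
                c[lz[i]] += 1.0

            keys = list(curves)
            mat = np.stack([curves[kk] / curves[kk].sum() for kk in keys])
            nc = min(n_components, mat.shape[0], mat.shape[1])
            feats = PCA(n_components=nc).fit_transform(mat)
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
            return dict(zip(keys, labels))    # e.g., vegetation/building/road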

  18. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat's motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown at baseline and at extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore the use of spike timing methodologies and the estimation of appropriate tuning curves need to be undertaken for better EMG decoding in motor BMIs.

  19. Knowledge brokerage - potential for increased capacities and shared power in impact assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosario Partidario, Maria, E-mail: mrp@civil.ist.utl.pt; Sheate, William R., E-mail: w.sheate@imperial.ac.uk; Collingwood Environmental Planning Ltd, London, 1E, The Chandlery, 50 Westminster Bridge Road, London SE1 7QY

    2013-02-15

    Constructive and collaborative planning theory has exposed the perceived limitations of public participation in impact assessment. At strategic levels of assessment the established norm can be misleading and practice is elusive. For example, debates on SEA effectiveness recognize insufficiencies, but are often based on questionable premises. The authors of this paper argue that public participation in strategic assessment requires new forms of information and engagement, consistent with the complexity of the issues at these levels, and that strategic assessments can act as knowledge brokerage instruments with the potential to generate more participative environments and attitudes. The paper explores barriers and limitations, as well as the role of knowledge brokerage in stimulating the engagement of the public, through learning-oriented processes and responsibility sharing in more participative models of governance. The paper concludes with a discussion on the building and interchange of knowledge, towards creative solutions to identified problems, stimulating learning processes largely beyond simple information transfer mechanisms through consultative processes. The paper argues fundamentally for the need to conceive strategic assessments as learning platforms and to design knowledge brokerage opportunities explicitly as a means to enhance learning processes and power sharing in IA. - Highlights: • Debates on SEA recognize insufficiencies in public participation. • We propose new forms of engagement consistent with complex situations at strategic levels of decision-making. • Constructive and collaborative planning theories help explain how different actors acquire knowledge and the value of knowledge exchange. • Strategic assessments can act as knowledge brokerage instruments. • The paper argues for strategic assessments as learning platforms as a means to enhance learning processes and power sharing in IA.

  20. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by illuminating the target object with a specially designed pattern, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points, plus the pseudorandomness of the projected pattern, can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.

  1. Talking Points: Discussion Activities in the Primary Classroom

    ERIC Educational Resources Information Center

    Dawes, Lyn

    2011-01-01

    "Talking Points: Discussion Activities in the Primary Classroom" encourages and supports classroom discussion on a range of topics, enabling children to develop the important life-skill of effective group communication. Children who can explain their own ideas and take account of the points of view and reasons of others are in the process of…

  2. Fixed point theorems and dissipative processes

    NASA Technical Reports Server (NTRS)

    Hale, J. K.; Lopes, O.

    1972-01-01

    The deficiencies of theories that characterize the maximal compact invariant set of T as asymptotically stable, and that assert that some iterate of T has a fixed point, are discussed. It is shown that this fixed point condition is always satisfied for condensing and local dissipative T. Applications are given to a class of neutral functional differential equations.

  3. Improving the Patron Experience: Sterling Memorial Library's Single Service Point

    ERIC Educational Resources Information Center

    Sider, Laura Galas

    2016-01-01

    This article describes the planning process and implementation of a single service point at Yale University's Sterling Memorial Library. While much recent scholarship on single service points (SSPs) has focused on the virtues or hazards of eliminating reference desks in libraries nationwide, this essay explores the ways in which single service…

  4. Ultra-Wideband Time-Difference-of-Arrival Two-Point-Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun David; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2009-01-01

    A UWB TDOA Two-Point-Tracking System has been conceived and developed at JSC. This system can provide sub-inch tracking capability of two points on one target. This capability can be applied to guide a docking process in a 2D space. Lab tests demonstrate the feasibility of this technology.

  5. Growth Points in Linking Representations of Function: A Research-Based Framework

    ERIC Educational Resources Information Center

    Ronda, Erlina

    2015-01-01

    This paper describes five growth points in linking representations of function developed from a study of secondary school learners. Framed within the cognitivist perspective and process-object conception of function, the growth points were identified and described based on linear and quadratic function tasks learners can do and their strategies…

  6. Morphological Effects in Auditory Word Recognition: Evidence from Danish

    ERIC Educational Resources Information Center

    Balling, Laura Winther; Baayen, R. Harald

    2008-01-01

    In this study, we investigate the processing of morphologically complex words in Danish using auditory lexical decision. We document a second critical point in auditory comprehension in addition to the Uniqueness Point (UP), namely the point at which competing morphological continuation forms of the base cease to be compatible with the input,…

  7. A Space-Based Point Design for Global Coherent Doppler Wind Lidar Profiling Matched to the Recent NASA/NOAA Draft Science Requirements

    NASA Technical Reports Server (NTRS)

    Kavaya, Michael J.; Emmitt, G. David; Frehlich, Rod G.; Amzajerdian, Farzin; Singh, Upendra N.

    2002-01-01

    An end-to-end point design, including lidar, orbit, scanning, atmospheric, and data processing parameters, for space-based global profiling of atmospheric wind will be presented. The point design attempts to match the recent NASA/NOAA draft science requirements for wind measurement.

  8. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed-up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
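
    The paper's Cell/B.E. implementation is not reproducible here, but the non-regular partition idea itself is portable; a minimal sketch that groups landmarks with K-means, one cluster per core, and matches the clusters in parallel (the per-cluster matcher below is a placeholder assumption):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from sklearn.cluster import KMeans

        def match_cluster(args):
            """Placeholder matcher: nearest target point for each source point."""
            src, tgt = args
            d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
            return d.argmin(axis=1)

        def parallel_match(source_pts, target_pts, n_cores=4):
            # Non-regular partition: K-means groups nearby landmarks together,
            # which keeps each worker's memory access local.
            labels = KMeans(n_clusters=n_cores, n_init=10).fit_predict(source_pts)
            chunks = [(source_pts[labels == c], target_pts) for c in range(n_cores)]
            with ProcessPoolExecutor(max_workers=n_cores) as pool:
                return list(pool.map(match_cluster, chunks))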

  9. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some of his reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
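
    The fair rule is not spelled out in this record; under the natural reading, if the reference model assigns probability p to the predicted event and the forecaster bets r reputation points, a fair payoff is r(1 - p)/p on success and -r on failure, which has zero expectation when the reference model is true. A sketch under that assumption:

        def gambling_score(r, p, success):
            """Reputation change for a bet of r points, assuming the fair rule
            above: the reference model assigns probability p to the event."""
            return r * (1.0 - p) / p if success else -r

        # Expectation under the reference model:
        # p * r * (1 - p) / p + (1 - p) * (-r) = 0, so the house breaks even.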

  10. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
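
    In outline (using generic point process notation, not necessarily the article's symbols), the equivalence rests on the two objectives sharing their slope estimates:

        \ell_{\mathrm{PPM}}(\alpha, \beta)
            = \sum_{i=1}^{n} \bigl( \alpha + \beta^{\top} x(s_i) \bigr)
              - \int_{A} e^{\alpha + \beta^{\top} x(s)} \, ds ,
        \qquad
        \pi_{\mathrm{MAXENT}}(s)
            = \frac{e^{\beta^{\top} x(s)}}{\int_{A} e^{\beta^{\top} x(u)} \, du} .

    Profiling out the intercept \alpha forces the fitted intensity to integrate to the observed count n, and the remaining optimization over \beta matches the MAXENT objective, so the slopes coincide and only the scale-dependent intercept differs.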

  11. The near real time image navigation of pictures returned by Voyager 2 at Neptune

    NASA Technical Reports Server (NTRS)

    Underwood, Ian M.; Bachman, Nathaniel J.; Taber, William L.; Wang, Tseng-Chan; Acton, Charles H.

    1990-01-01

    The development of a process for performing image navigation in near real time is described. The process was used to accurately determine the camera pointing for pictures returned by the Voyager 2 spacecraft at Neptune Encounter. Image navigation improves knowledge of the pointing of an imaging instrument at a particular epoch by correlating the spacecraft-relative locations of target bodies in inertial space with the locations of their images in a picture taken at that epoch. More than 8,500 pictures returned by Voyager 2 at Neptune were processed in near real time. The results were used in several applications, including improving pointing knowledge for nonimaging instruments ('C-smithing'), making 'Neptune, the Movie', and providing immediate access to geometrical quantities similar to those traditionally supplied in the Supplementary Experiment Data Record.

  12. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
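
    The patent text names no specific estimator; one common way to realize "find the subset producing a consistent projection of the 3D model onto the image" is RANSAC-based perspective-n-point estimation, sketched here with OpenCV (calibrated intrinsics and precomputed candidate 2D-3D matches are assumed):

        import numpy as np
        import cv2

        def estimate_pose(model_pts_3d, image_pts_2d, K):
            """RANSAC PnP: camera pose from candidate 2D-3D correspondences.

            model_pts_3d: (N, 3), image_pts_2d: (N, 2), K: 3x3 intrinsics;
            needs N >= 4. Returns rotation/translation and the inlier subset.
            """
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                model_pts_3d.astype(np.float32),
                image_pts_2d.astype(np.float32),
                K.astype(np.float32), None, reprojectionError=3.0)
            if not ok:
                raise RuntimeError("no consistent projection found")
            return rvec, tvec, inliers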

  13. Process Mechanics Analysis in Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Ambrogio, G.; Filice, L.; Fratini, L.; Micari, F.

    2004-06-01

    The demand for highly differentiated products and the need for process flexibility have led researchers to focus attention on innovative sheet forming processes. Industrial application of conventional processes is, in fact, economically convenient only for large-scale production; furthermore, conventional processes do not fully satisfy the mentioned demand for flexibility. In this context, single point incremental forming (SPIF) is an innovative and flexible answer to market requests. The process is characterized by a peculiar process mechanics, the sheet being plastically deformed only through a localised stretching mechanism. Some recent experimental studies have shown that SPIF permits a relevant increase of formability limits, just as a consequence of this peculiar deformation mechanics. The research addressed here is focused on the theoretical investigation of the process mechanics; the aim was to achieve a deeper understanding of the basic phenomena involved in SPIF which justify the above-mentioned formability enhancement.

  14. Update on CERN Search based on SharePoint 2013

    NASA Astrophysics Data System (ADS)

    Alvarez, E.; Fernandez, S.; Lossent, A.; Posada, I.; Silva, B.; Wagner, A.

    2017-10-01

    CERN’s enterprise Search solution “CERN Search” provides a central search solution for users and CERN service providers. A total of about 20 million public and protected documents from a wide range of document collections is indexed, including Indico, TWiki, Drupal, SharePoint, JACOW, E-group archives, EDMS, and CERN Web pages. In spring 2015, CERN Search was migrated to a new infrastructure based on SharePoint 2013. In the context of this upgrade, the document pre-processing and indexing process was redesigned and generalised. The new data feeding framework allows to profit from new functionality and it facilitates the long term maintenance of the system.

  15. A Novel Real-Time Reference Key Frame Scan Matching Method.

    PubMed

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-05-07

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach, using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typically the case in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating the potential use of the new algorithm in real-time systems.
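
    The RKF construction itself is in the paper; its point-to-point fallback is standard ICP, one iteration of which (nearest-neighbour association followed by the SVD rigid-transform estimate) can be sketched as:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(src, ref):
            """One point-to-point ICP iteration for 2D scans.

            Associates each source point with its nearest reference point,
            then estimates the rigid transform via the SVD (Kabsch) method.
            """
            nn = cKDTree(ref).query(src)[1]
            tgt = ref[nn]
            mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
            H = (src - mu_s).T @ (tgt - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:           # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return R, t                        # apply as src @ R.T + t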

  16. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  17. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  18. The Evaluation of HOMER as a Marine Corps Expeditionary Energy Predeployment Tool

    DTIC Science & Technology

    2010-09-01

    experiment was used to ensure the HOMER models were accurate. Following the calibration, the concept of expeditionary energy density as it pertains to power ... A process was used to analyze HOMER's modeling capability: • Conduct photovoltaic (PV) experiment, • Develop a calibration process to match the HOMER ...

  19. The Evaluation of HOMER as a Marine Corps Expeditionary Energy Pre-deployment Tool

    DTIC Science & Technology

    2010-11-21

    used to ensure the HOMER models were accurate. Following the calibration, the concept of expeditionary energy density as it pertains to power ... A process was used to analyze HOMER's modeling capability: • Conduct photovoltaic (PV) experiment, • Develop a calibration process to match the HOMER ...

  20. Coal Liquefaction desulfurization process

    DOEpatents

    Givens, Edwin N.

    1983-01-01

    In a solvent refined coal liquefaction process, more effective desulfurization of the high boiling point components is effected by first stripping the solvent-coal reacted slurry of lower boiling point components, particularly including hydrogen sulfide and low molecular weight sulfur compounds, and then reacting the slurry with a solid sulfur getter material, such as iron. The sulfur getter compound, with reacted sulfur included, is then removed with other solids in the slurry.

  1. `Counterfactual' interpretation of the quantum measurement process

    NASA Astrophysics Data System (ADS)

    Nisticò, Giuseppe

    1997-08-01

    The question of determining the state of the system during a measurement experiment is discussed within quantum theory, as part of the more general measurement problem. I propose a counterfactual interpretation of the measurement process which answers the question from a conceptual point of view. This interpretation turns out to be consistent with the predictions of quantum theory, but it presents difficulties from an operational point of view.

  2. Subjective Well-Being: The Constructionist Point of View. A Longitudinal Study to Verify the Predictive Power of Top-Down Effects and Bottom-Up Processes

    ERIC Educational Resources Information Center

    Leonardi, Fabio; Spazzafumo, Liana; Marcellini, Fiorella

    2005-01-01

    Based on the constructionist point of view applied to Subjective Well-Being (SWB), five hypotheses were advanced about the predictive power of top-down effects and bottom-up processes over a five-year period. The sample consisted of 297 respondents, who represent the Italian sample of a European longitudinal survey; the first phase was…

  3. Treatment of electronic waste to recover metal values using thermal plasma coupled with acid leaching - A response surface modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rath, Swagat S., E-mail: swagat.rath@gmail.com; Nayak, Pradeep; Mukherjee, P.S.

    2012-03-15

    The global crisis of hazardous electronic waste (E-waste) is on the rise due to the increasing usage and disposal of electronic devices. An environmentally benign process was developed to treat E-waste, consisting of thermal plasma treatment followed by recovery of metal values through mineral acid leaching. In the thermal process, the E-waste was melted to recover the metal values as a metallic mixture. The metallic mixture was subjected to acid leaching in the presence of a depolarizer. The leached liquor mainly contained copper, as the other elements like Al and Fe were mostly in alloy form according to the XRD and phase diagram studies. A response surface model was used to optimize the conditions for leaching. More than 90% leaching efficiency at room temperature was observed for Cu, Ni and Co with HCl as the solvent, whereas Fe and Al showed less than 40% efficiency.

  4. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C² continuous surface, depending on the method of subdivision chosen), creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.

  5. A novel automatic segmentation workflow of axial breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra

    2018-04-01

    In this paper, we propose a novel, fully automatic breast tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step consists of detecting the breast boundaries; it is based on image content analysis and the Moore-Neighbour tracing algorithm. As a preprocessing step, Otsu thresholding and a neighbours algorithm are applied; the external area of the breast is then removed to obtain an approximate breast region. The second step is the delineation of the chest wall, which is taken to be the lowest-cost path linking three key points. These points are located automatically on the breast: the left and right boundary points and the middle upper point, placed at the sternum region using a statistical method. The minimum-cost path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process in the face of different breast densities, complex forms and challenging cases. In fact, the mean overlap between manual and automatic segmentation with our method is 96.5%. A comparative study shows that our proposed process is competitive with, and faster than, existing methods. The segmentation of 120 slices with our method is achieved in 20.57 ± 5.2 s.
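
    To illustrate the minimum-cost path step only (this is not the authors' code), the sketch below runs Dijkstra's algorithm over a small cost image with 8-connected pixels; in the paper's workflow, the cost image and endpoints would come from the preceding key-point detection.

        import heapq
        import numpy as np

        def dijkstra_path(cost, start, goal):
            """Cheapest 8-connected pixel path on a 2D cost image (toy sketch)."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            prev = {}
            dist[start] = cost[start]
            pq = [(cost[start], start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist[r, c]:
                    continue            # stale queue entry
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = r + dr, c + dc
                        if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                            nd = d + cost[nr, nc]
                            if nd < dist[nr, nc]:
                                dist[nr, nc] = nd
                                prev[(nr, nc)] = (r, c)
                                heapq.heappush(pq, (nd, (nr, nc)))
            # Walk predecessors back from the goal to reconstruct the path.
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]

        rng = np.random.default_rng(0)
        img = rng.random((40, 60))            # stand-in for an edge-cost map
        p = dijkstra_path(img, (20, 0), (20, 59))
        print("path length:", len(p))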

  6. Advanced Mobility Handover for Mobile IPv6 Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an Advanced Mobility Handover (AMH) scheme in this paper for seamless mobility in MIPv6-based wireless networks. In the proposed scheme, the mobile node (MN) utilizes a unique home IPv6 address, developed to maintain communication with corresponding nodes without a care-of-address during the roaming process. During the first round of the AMH process, the IPv6 address of each MN is uniquely identified by the home agent (HA) using a developed MN-ID field that serves as a global permanent identifier. Moreover, a temporary MN-ID is generated by the access point (AP) each time an MN associates with a particular AP, and is temporarily stored in a table developed inside the AP. When the AMH scheme is employed, the network-layer handover is performed prior to its default time: the handover is initiated by a developed AMH trigger message sent to the next access point. Thus, a mobile node keeps communicating with the current access point while the network-layer handover is executed by the next access point. Mathematical analyses and simulation results show that the proposed scheme performs better than existing approaches. PMID:25614890

  7. Item response theory scoring and the detection of curvilinear relationships.

    PubMed

    Carter, Nathan T; Dalal, Dev K; Guan, Li; LoPilato, Alexander C; Withrow, Scott A

    2017-03-01

    Psychologists are increasingly positing theories of behavior that suggest psychological constructs are curvilinearly related to outcomes. However, results from empirical tests for such curvilinear relations have been mixed. We propose that correctly identifying the response process underlying responses to measures is important for the accuracy of these tests. Indeed, past research has indicated that item responses to many self-report measures follow an ideal point response process, wherein respondents agree only with items that reflect their own standing on the measured variable, as opposed to a dominance process, wherein stronger agreement, regardless of item content, is always indicative of higher standing on the construct. We test whether item response theory (IRT) scoring appropriate to the underlying response process of self-report measures results in more accurate tests for curvilinearity. In 2 simulation studies, we show that, regardless of the underlying response process used to generate the data, using the traditional sum score generally results in high Type 1 error rates or low power for detecting curvilinearity, depending on the distribution of item locations. With few exceptions, appropriate power and Type 1 error rates are achieved when dominance-based and ideal point-based IRT scoring are correctly used to score dominance and ideal point response data, respectively. We conclude that (a) researchers should be theory-guided when hypothesizing and testing for curvilinear relations; (b) correctly identifying whether responses follow an ideal point versus a dominance process, particularly when items are not extreme, is critical; and (c) IRT model-based scoring is crucial for accurate tests of curvilinearity. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Comparative study of building footprint estimation methods from LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Rozas, E.; Rivera, F. F.; Cabaleiro, J. C.; Pena, T. F.; Vilariño, D. L.

    2017-10-01

    Building area calculation from LiDAR points is still a difficult task with no definitive solution. The diverse characteristics of buildings, such as shape and size, make the process too complex to automate fully; however, several algorithms and techniques have been used to obtain an approximate hull. 3D building reconstruction and urban planning are examples of important applications that benefit from accurate building footprint estimation. In this paper, we carry out a study of the accuracy of building footprint estimation from LiDAR points. The analysis focuses on the processing steps following object recognition and classification, assuming that labeling of building points has been performed previously. We then perform an in-depth analysis of the influence of point density on the accuracy of the building area estimation. In addition, a set of buildings of different sizes and shapes was manually classified so that they can be used as a benchmark.
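
    A crude baseline that footprint-estimation methods improve upon is the convex hull of the labeled building points; the hedged SciPy sketch below computes its area (in 2D, ConvexHull.volume is the enclosed area), assuming classification has already been done. A convex hull overestimates concave buildings, which is precisely why more refined hull algorithms are compared in such studies.

        import numpy as np
        from scipy.spatial import ConvexHull

        # Hypothetical (x, y) coordinates of LiDAR returns labeled as one building.
        rng = np.random.default_rng(1)
        building_xy = rng.uniform([0, 0], [18, 9], size=(500, 2))  # ~18 m x 9 m

        hull = ConvexHull(building_xy)
        # In 2D, ConvexHull.volume is the polygon area and .area its perimeter.
        print(f"convex-hull footprint: {hull.volume:.1f} m^2,"
              f" perimeter: {hull.area:.1f} m")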

  9. Smooth random change point models.

    PubMed

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced, and examples are presented that show how change point models can be estimated using functions for mixed-effects models in R. Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
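
    As a toy fixed-effects illustration of the broken-stick model (the article fits random-effects versions with R and WinBUGS), the sketch below profiles the residual sum of squares over candidate change points; the data and parameter values are simulated.

        import numpy as np

        def broken_stick_fit(t, y, taus):
            """Profile a two-phase linear (broken-stick) model over candidate
            change points tau: y = b0 + b1*t + b2*max(t - tau, 0) + noise."""
            best = None
            for tau in taus:
                X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                rss = np.sum((y - X @ beta) ** 2)
                if best is None or rss < best[0]:
                    best = (rss, tau, beta)
            return best

        rng = np.random.default_rng(2)
        t = np.linspace(-10.0, 0.0, 80)            # years before death
        y = (25.0 - 0.2 * (t + 10)                 # gentle pre-change slope
             - 1.2 * np.maximum(t + 4, 0)          # steeper decline after tau = -4
             + rng.normal(0.0, 0.4, t.size))
        rss, tau_hat, beta_hat = broken_stick_fit(t, y, taus=np.linspace(-8, -1, 71))
        print(f"estimated change point: {tau_hat:.2f} years before death")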

  10. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. Herein, partially miscible mixtures are considered within the context of liquid-liquid extraction processes. This paper describes the development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm its predictive efficacy, the model was verified by comparing the predicted flash points with experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in flash point prediction. Overall, the predictions of the proposed model describe the experimental data well. Based on this evidence, it appears reasonable to suggest potential application of our model in the assessment of fire and explosion hazards and in the development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.

  11. Obesity prevention at the point of purchase.

    PubMed

    Cohen, D A; Lesser, L I

    2016-05-01

    The point of purchase is when people may make poor and impulsive decisions about what and how much to buy and consume. Because point of purchase strategies frequently work through non-cognitive processes, people are often unable to recognize and resist them. Because people lack insight into how marketing practices interfere with their ability to routinely eat healthy, balanced diets, public health entities should protect consumers from potentially harmful point of purchase strategies. We describe four point of purchase policy options including standardized portion sizes; standards for meals that are sold as a bundle, e.g. 'combo meals'; placement and marketing restrictions on highly processed low-nutrient foods; and explicit warning labels. Adoption of such policies could contribute significantly to the prevention of obesity and diet-related chronic diseases. We also discuss how the policies could be implemented, along with who might favour or oppose them. Many of the policies can be implemented locally, while preserving consumer choice. © 2016 World Obesity.

  12. "From that moment on my life changed": turning points in the healing process for men recovering from child sexual abuse.

    PubMed

    Easton, Scott D; Leone-Sheehan, Danielle M; Sophis, Ellen J; Willis, Danny G

    2015-01-01

    Recent research indicates that child sexual abuse often undermines the health of boys and men across the lifespan. However, some male survivors experience a turning point marking a positive change in their health trajectories and healing process. Although frequently discussed in reference to physical health problems or addictions, very little is known about turning points with respect to child sexual abuse for men. The purpose of this secondary qualitative analysis was to describe the different types of turning points experienced by male survivors who completed the 2010 Health and Well-Being Survey (N = 250). Using conventional content analysis, researchers identified seven types of turning points that were classified into three broad categories: influential relationships (professional and group support, personal relationships), insights and new meanings (cognitive realizations, necessity to change, spiritual transformation), and action-oriented communication (disclosure of CSA, pursuit of justice). Implications for clinical practice and future research are discussed.

  13. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. The whole path is then checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method, which has been successfully applied in two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the onboard constrained attitude reorientation planning problem.

  14. The influence of dew point during annealing on the power loss of electrical steel sheets

    NASA Astrophysics Data System (ADS)

    Broddefalk, Arvid; Jenkins, Keith; Silk, Nick; Lindenmo, Magnus

    Decarburization is a necessary part of the processing of electrical steels if their carbon content is above a certain level. The process is usually carried out in a wet hydrogen-nitrogen atmosphere. However, a high dew point has a negative influence on the power loss, owing to oxidation of the steel, which hinders domain wall motion near the surface. In this study, an increase in power loss was observed only at fairly high dew points (>20 °C), and it was only at these high dew points that a subsurface oxide layer was observed. The surfaces of samples with and without this layer were etched in steps. The magnetic properties of the etched samples corresponded well with the behavior expected from GDOES profiles of the samples.

  15. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and the fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
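
    To give the flavour of such a fixed-point analysis (the paper's filter equations are not reproduced here), the sketch below implements a Q15 multiply-accumulate, a 16-bit fraction format typical of low-power microcontrollers, and compares it against floating point.

        Q = 15                      # Q15: 1 sign bit, 15 fractional bits
        SCALE = 1 << Q

        def to_q15(x: float) -> int:
            """Quantize a float in [-1, 1) to Q15 with saturation."""
            return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

        def q15_mul(a: int, b: int) -> int:
            """Fixed-point multiply with rounding: (a*b + half) >> 15."""
            return (a * b + (1 << (Q - 1))) >> Q

        # Compare one multiply-accumulate step in float vs. Q15.
        w, x, acc = 0.7071, -0.25, 0.1
        ref = acc + w * x
        qacc = to_q15(acc) + q15_mul(to_q15(w), to_q15(x))
        print("float:", ref, " q15:", qacc / SCALE, " error:", ref - qacc / SCALE)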

  16. Methods and apparatuses for detection of radiation with semiconductor image sensors

    DOEpatents

    Cogliati, Joshua Joseph

    2018-04-10

    A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
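
    A toy rendition of the two-threshold test described above (not the patented implementation): bright pixels exceed a first threshold, and a candidate track is any pair of bright pixels whose connecting line stays above a second threshold. The image and thresholds below are synthetic.

        import numpy as np

        def detect_tracks(img, t1, t2):
            """Find bright pixels (> t1), then keep pairs whose connecting
            line of pixels stays above a second threshold t2 (toy sketch)."""
            bright = np.argwhere(img > t1)
            tracks = []
            for a in range(len(bright)):
                for b in range(a + 1, len(bright)):
                    p, q = bright[a], bright[b]
                    steps = int(max(abs(q - p))) + 1
                    rr = np.linspace(p[0], q[0], steps).round().astype(int)
                    cc = np.linspace(p[1], q[1], steps).round().astype(int)
                    if np.all(img[rr, cc] > t2):
                        tracks.append((tuple(p), tuple(q)))
            return tracks

        rng = np.random.default_rng(8)
        img = rng.normal(0.0, 1.0, (64, 64))     # stand-in for a composite image
        rr = np.arange(20, 40); cc = rr + 5      # synthetic particle track
        img[rr, cc] += 8.0
        tracks = detect_tracks(img, t1=6.0, t2=4.0)
        print(len(tracks), "candidate line segments found")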

  17. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in technology and computer science, conventional surveying methods are being supplanted by laser scanning and digital photogrammetry. These two techniques generate 3-D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We here address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compute differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
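
    For the point-based comparisons, a hedged SciPy sketch: directed_hausdorff gives the directed Hausdorff distance between two clouds, and a cKDTree query gives per-point cloud-to-cloud distances. Both clouds below are random stand-ins for the TLS and photogrammetric models.

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.spatial.distance import directed_hausdorff

        rng = np.random.default_rng(3)
        tls = rng.normal(size=(20000, 3))                  # stand-in TLS reference
        pm = tls[::4] + rng.normal(0, 0.01, (5000, 3))     # sparser "PM" cloud

        # Symmetric Hausdorff distance: max of the two directed distances.
        h = max(directed_hausdorff(pm, tls)[0], directed_hausdorff(tls, pm)[0])

        # Cloud-to-cloud: distance from each PM point to its nearest TLS point.
        d, _ = cKDTree(tls).query(pm)
        print(f"Hausdorff: {h:.4f}, mean C2C: {d.mean():.4f},"
              f" 95th pct: {np.quantile(d, 0.95):.4f}")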

  18. The Goddard Space Flight Center Program to develop parallel image processing systems

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1972-01-01

    Parallel image processing, defined as image processing in which all points of an image are operated upon simultaneously, is discussed. Coherent optical, noncoherent optical, and electronic methods are considered as parallel image processing techniques.

  19. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  20. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  1. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  2. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  3. 40 CFR 409.10 - Applicability; description of the beet sugar processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sugar processing subcategory. 409.10 Section 409.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing Subcategory § 409.10 Applicability; description of the beet sugar processing subcategory. The...

  4. 40 CFR 408.170 - Applicability; description of the Alaskan mechanized salmon processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Alaskan mechanized salmon processing subcategory. 408.170 Section 408.170 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Mechanized Salmon Processing Subcategory § 408.170 Applicability; description of the Alaskan mechanized salmon processing subcategory. The provisions of this subpart are...

  5. 40 CFR 408.170 - Applicability; description of the Alaskan mechanized salmon processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Alaskan mechanized salmon processing subcategory. 408.170 Section 408.170 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Mechanized Salmon Processing Subcategory § 408.170 Applicability; description of the Alaskan mechanized salmon processing subcategory. The provisions of this subpart are...

  6. Public Participation Process for Registration Actions

    EPA Pesticide Factsheets

    Describes the process for registration actions which provides the opportunity for the public to comment on major registration decisions at a point in the registration process when comprehensive information and analysis are available.

  7. A neural network strategy for end-point optimization of batch processes.

    PubMed

    Krothapally, M; Palanki, S

    1999-01-01

    The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch-to-batch variation in process conditions, and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes utilizing neural networks. The strategy involves training two neural networks: one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin approach and the nonlinear programming approach. The efficacy of the scheme is illustrated via simulation of a fed-batch fermentation.
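
    A heavily simplified sketch of the first network's job, mapping initial batch conditions to a switching time: the fermentation relationship, data and network size below are all invented, with scikit-learn's MLPRegressor standing in for the authors' architecture.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        # Invented training set: (initial substrate, initial biomass) -> optimal
        # switching time, generated here from an arbitrary smooth map plus noise.
        X = rng.uniform([5.0, 0.1], [25.0, 1.0], size=(400, 2))
        t_switch = 2.0 + 0.15 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.05, 400)

        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                           random_state=0)
        net.fit(X[:300], t_switch[:300])
        print("held-out R^2:", round(net.score(X[300:], t_switch[300:]), 3))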

  8. Thermodynamics of a pure substance at the triple point

    NASA Astrophysics Data System (ADS)

    Velasco, S.; Fernández-Pineda, C.

    2007-12-01

    A thermodynamic study of a pure substance at the triple point is presented. In particular, we show that the mass fractions of the phases coexisting at the triple point obey lever rules in the specific entropy-specific volume diagram, and that the relative changes in the mass fractions present in each phase along reversible isochoric and adiabatic processes of a pure substance at the triple point are governed by the relative sizes of the segments of the triple-point line in the pressure-specific volume diagram and in the temperature-specific entropy diagram. Applications to the ordinary triple point of water and to the triple point of the Al2SiO5 polymorphs are presented.
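
    In hedged, reconstructed notation (symbols assumed: v and s the overall specific volume and entropy, subscripts s, l, g the coexisting solid, liquid and gas phases), the three-phase lever rules amount to a linear system for the mass fractions:

        \begin{pmatrix} 1 & 1 & 1 \\ v_s & v_l & v_g \\ s_s & s_l & s_g \end{pmatrix}
        \begin{pmatrix} x_s \\ x_l \\ x_g \end{pmatrix}
        =
        \begin{pmatrix} 1 \\ v \\ s \end{pmatrix},
        \qquad x_s,\, x_l,\, x_g \ge 0 .

    The first row is conservation of mass; the other two are the lever rules in the specific volume and specific entropy coordinates that the abstract mentions.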

  9. European Scientific Notes. Volume 36, Number 2,

    DTIC Science & Technology

    1982-02-28

    Colleagues at the University College of Swansea have studied metal spray processing, in which a stream of metal impinges on a disk rotating at 3,000 to 5,000 rpm, and the properties this processing can impart to the solidified product; so far, aluminum alloy and steel products have been made, at the cost of increases in process complexity and product cost. A separate item notes that the pilot can often "fly the point" as the approach continues, the attractions being simplicity, economy, stand-alone operability, and portability.

  10. Direct production of fractionated and upgraded hydrocarbon fuels from biomass

    DOEpatents

    Felix, Larry G.; Linck, Martin B.; Marker, Terry L.; Roberts, Michael J.

    2014-08-26

    Multistage processing of biomass to produce at least two separate fungible fuel streams, one dominated by gasoline boiling-point range liquids and the other by diesel boiling-point range liquids. The processing involves hydrotreating the biomass to produce a hydrotreatment product including a deoxygenated hydrocarbon product of gasoline and diesel boiling materials, followed by separating each of the gasoline and diesel boiling materials from the hydrotreatment product and each other.

  11. Highly Selective Membranes For The Separation Of Organic Vapors Using Super-Glassy Polymers

    DOEpatents

    Pinnau, Ingo; Lokhandwala, Kaaeid; Nguyen, Phuong; Segelke, Scott

    1997-11-18

    A process for separating hydrocarbon gases of low boiling point, particularly methane, ethane and ethylene, from nitrogen. The process is performed using a membrane made from a super-glassy material. The gases to be separated are mixed with a condensable gas, such as a C3+ hydrocarbon. In the presence of the condensable gas, improved selectivity for the low-boiling-point hydrocarbon gas over nitrogen is achieved.

  12. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have been progressing towards efficient use of memory and processing. Both factors can be exploited by feasible image-storage techniques that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized for image representation and retrieval. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
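
    A minimal usage sketch of SIFT keypoint extraction and ratio-test matching with OpenCV (assuming a build that exposes cv2.SIFT_create, as recent opencv-python releases do); the file names are placeholders.

        import cv2

        # Placeholder file names; any pair of overlapping grayscale images works.
        img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Ratio-test matching (Lowe's criterion) on the 128-D descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]
        print(len(kp1), "and", len(kp2), "keypoints;", len(good), "good matches")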

  13. Differences in finger localisation performance of patients with finger agnosia.

    PubMed

    Anema, Helen A; Kessels, Roy P C; de Haan, Edward H F; Kappelle, L Jaap; Leijten, Frans S; van Zandvoort, Martine J E; Dijkerman, H Chris

    2008-09-17

    Several neuropsychological studies have suggested parallel processing of somatosensory input when localising a tactile stimulus on one's own hand by pointing towards it (body schema) and when localising the touched location by pointing to it on a map of a hand (body image). Usually these reports describe patients with impaired detection but intact sensorimotor localisation. This study examined three patients with a lesion of the angular gyrus who had intact somatosensory processing but selectively disturbed finger identification (finger agnosia). These patients performed normally when pointing towards the touched finger on their own hand but failed to indicate this finger on a drawing of a hand or to name it. Similar defects in the perception of other body parts were not observed. The findings provide converging evidence for the dissociation between body image and body schema and, more importantly, reveal for the first time that this distinction is also present in higher-order cognitive processes selectively for the fingers.

  14. The Application of Linear and Nonlinear Water Tanks Case Study in Teaching of Process Control

    NASA Astrophysics Data System (ADS)

    Li, Xiangshun; Li, Zhiang

    2018-02-01

    In traditional process control teaching, the passing on of knowledge is emphasized while the development of students' creative and practical abilities is ignored; traditional teaching methods are therefore not very helpful for training good engineers. Case teaching is a very useful way to improve students' innovative and practical abilities. In traditional case teaching, however, knowledge points are taught separately, based on different examples or on no examples at all, so it is very hard to build up the whole knowledge structure: even when all the knowledge has been learned, how to use it to solve engineering problems remains challenging for students. In this paper, linear and nonlinear tanks are taken as illustrative examples involving several knowledge points of process control. The application of each knowledge point is discussed in detail and simulated. We believe this case-based approach will be helpful for students.
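
    A sketch of the two illustrative plants under assumed parameters: a linear tank with outflow proportional to level, and a nonlinear tank with Torricelli outflow proportional to the square root of level, both integrated with SciPy for a step in inflow.

        import numpy as np
        from scipy.integrate import solve_ivp

        A, k = 1.0, 0.5          # tank area [m^2] and outlet coefficient (assumed)
        q_in = 0.6               # step inflow [m^3/s]

        def linear_tank(t, h):      # A dh/dt = q_in - k h
            return [(q_in - k * h[0]) / A]

        def nonlinear_tank(t, h):   # A dh/dt = q_in - k sqrt(h)
            return [(q_in - k * np.sqrt(max(h[0], 0.0))) / A]

        t_span, h0 = (0.0, 30.0), [0.2]
        lin = solve_ivp(linear_tank, t_span, h0)
        non = solve_ivp(nonlinear_tank, t_span, h0)
        print("steady levels ->  linear: %.3f m   nonlinear: %.3f m"
              % (lin.y[0, -1], non.y[0, -1]))
        # Analytic steady states: q_in/k = 1.2 m (linear),
        # (q_in/k)^2 = 1.44 m (nonlinear), showing the gain depends on the
        # operating point for the nonlinear tank.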

  15. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, in particular relaxing the boundedness condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those involving the observations, and assigns the former off-line.

  16. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters, and is updated over a sequence of trust regions. The model avoids the slow convergence of linear models of ? and has features of quadratic models, which need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  17. Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation

    NASA Astrophysics Data System (ADS)

    Stephenson, Jerry L.; Kapraun, Chris

    Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.

  18. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection in cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for assessing the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
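
    For intuition only, the sketch below applies a likelihood-ratio scan to independent zero-mean Gaussian data; fractional Brownian motion increments are dependent, which the paper's method handles and this toy version deliberately ignores.

        import numpy as np

        def variance_lr_scan(x, min_seg=20):
            """Scan statistic: 2*(split log-likelihood - no-split log-likelihood)
            for a variance change in independent zero-mean Gaussian data."""
            n = len(x)
            full = n * np.log(np.mean(x ** 2))
            best_k, best_stat = None, -np.inf
            for k in range(min_seg, n - min_seg):
                s1 = np.mean(x[:k] ** 2)
                s2 = np.mean(x[k:] ** 2)
                stat = full - k * np.log(s1) - (n - k) * np.log(s2)
                if stat > best_stat:
                    best_k, best_stat = k, stat
            return best_k, best_stat

        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])
        k_hat, stat = variance_lr_scan(x)
        print("estimated change point:", k_hat, " LR statistic:", round(stat, 1))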

  19. Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.

    2017-09-01

    The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting, a tedious and labour-intensive task. The goal of this study is to explore the suitability of LiDAR observations for automating the counting process through the individual detection of wheat ears in the agricultural field. This is a challenging task owing to the random cropping pattern and the noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud, followed by classification of the segments into ears and non-ears. In this study, two segmentation techniques, (a) voxel-based segmentation and (b) mean shift segmentation, were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both methods had an average detection rate of 85%, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more), with a mean percentage accuracy of 94%, and takes less than 20 seconds to process 50,000 points at an average point density of 16 points/cm². Meanwhile, the mean shift approach showed comparatively better counting accuracy, 95%, for the early flowering stage (crops aged below 225 days), and takes approximately 4 minutes to process 50,000 points.

  20. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

    Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in the absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814

  1. 40 CFR 408.200 - Applicability; description of the Alaskan bottom fish processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Alaskan bottom fish processing subcategory. 408.200 Section 408.200 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Bottom Fish Processing Subcategory § 408.200 Applicability; description of the Alaskan bottom fish processing subcategory. The provisions of this subpart are applicable...

  2. 40 CFR 408.200 - Applicability; description of the Alaskan bottom fish processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Alaskan bottom fish processing subcategory. 408.200 Section 408.200 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Bottom Fish Processing Subcategory § 408.200 Applicability; description of the Alaskan bottom fish processing subcategory. The provisions of this subpart are applicable...

  3. 40 CFR 408.200 - Applicability; description of the Alaskan bottom fish processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Alaskan bottom fish processing subcategory. 408.200 Section 408.200 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Bottom Fish Processing Subcategory § 408.200 Applicability; description of the Alaskan bottom fish processing subcategory. The provisions of this subpart are applicable...

  4. 40 CFR 408.200 - Applicability; description of the Alaskan bottom fish processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Alaskan bottom fish processing subcategory. 408.200 Section 408.200 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Bottom Fish Processing Subcategory § 408.200 Applicability; description of the Alaskan bottom fish processing subcategory. The provisions of this subpart are applicable...

  5. 40 CFR 408.100 - Applicability; description of the remote Alaskan shrimp processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... remote Alaskan shrimp processing subcategory. 408.100 Section 408.100 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Remote Alaskan Shrimp Processing Subcategory § 408.100 Applicability; description of the remote Alaskan shrimp processing subcategory. The provisions of this subpart are applicable...

  6. 40 CFR 408.100 - Applicability; description of the remote Alaskan shrimp processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... remote Alaskan shrimp processing subcategory. 408.100 Section 408.100 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Remote Alaskan Shrimp Processing Subcategory § 408.100 Applicability; description of the remote Alaskan shrimp processing subcategory. The provisions of this subpart are applicable...

  7. 40 CFR 408.200 - Applicability; description of the Alaskan bottom fish processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Alaskan bottom fish processing subcategory. 408.200 Section 408.200 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Bottom Fish Processing Subcategory § 408.200 Applicability; description of the Alaskan bottom fish processing subcategory. The provisions of this subpart are applicable...

  8. Delivering cognitive processing therapy in a community health setting: The influence of Latino culture and community violence on posttraumatic cognitions.

    PubMed

    Marques, Luana; Eustis, Elizabeth H; Dixon, Louise; Valentine, Sarah E; Borba, Christina P C; Simon, Naomi; Kaysen, Debra; Wiltsey-Stirman, Shannon

    2016-01-01

    Despite the applicability of cognitive processing therapy (CPT) for posttraumatic stress disorder (PTSD) to addressing the sequelae of a range of traumatic events, few studies have evaluated whether the treatment itself is applicable across diverse populations. The present study examined differences and similarities among non-Latino, Latino Spanish-speaking, and Latino English-speaking clients in the rigid beliefs, or "stuck points", associated with PTSD symptoms in a sample of community mental health clients. We utilized the procedures of content analysis to analyze stuck point logs and impact statements of 29 participants enrolled in a larger implementation trial for CPT. Findings indicated that the content of stuck points was similar across Latino and non-Latino clients, although fewer total stuck points were identified for Latino clients compared to non-Latino clients. Given that identification of stuck points is central to implementing CPT, difficulty identifying stuck points could pose significant challenges for implementing CPT among Latino clients and warrants further examination. Thematic analysis of impact statements revealed the importance of family, religion, and the urban context (e.g., poverty, violence exposure) in understanding how clients organize beliefs and emotions associated with trauma. Clinical recommendations for implementing CPT in community settings and for the identification of stuck points are provided. (c) 2016 APA, all rights reserved.

  9. High-Dimensional Bayesian Geostatistics

    PubMed Central

    Banerjee, Sudipto

    2017-01-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points, which renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~ n floating point operations (flops), where n is the number of spatial locations (per iteration). We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920

  10. Fickian dispersion is anomalous

    DOE PAGES

    Cushman, John H.; O’Malley, Dan

    2015-06-22

    The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous; what is classically called anomalous is really the norm. In a Lagrangian setting, a process whose mean square displacement is proportional to time is generally labeled as Fickian dispersion. With a number of counterexamples we show why this definition is fraught with difficulty. In a related discussion, we show that an infinite second moment does not necessarily imply that the process is superdispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function of a dispersive process, the distribution of first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.

  11. Thermal processing of a poorly water-soluble drug substance exhibiting a high melting point: the utility of KinetiSol® Dispersing.

    PubMed

    Hughey, Justin R; Keen, Justin M; Brough, Chris; Saeger, Sophie; McGinity, James W

    2011-10-31

    Poorly water-soluble drug substances that exhibit high melting points are often difficult to process successfully by fusion-based techniques. The purpose of this study was to identify a suitable polymer system for meloxicam (MLX), a high-melting-point BCS class II compound, and to investigate thermal processing techniques for the preparation of chemically stable single-phase solid dispersions. Thermal and solution-based screening techniques were utilized to screen hydrophilic polymers suitable for immediate-release formulations. Results of the screening studies demonstrated that Soluplus® (SOL) provided the highest degree of miscibility and solubility enhancement. A hot-melt extrusion feasibility study demonstrated that high temperatures and extended residence times were required to render compositions amorphous, causing significant degradation of MLX. A design of experiments (DOE) was conducted on the KinetiSol® Dispersing (KSD) process to evaluate the effect of processing conditions on the chemical stability and amorphous character of MLX. The study demonstrated that ejection temperature significantly impacted MLX stability. All samples prepared by KSD were substantially amorphous. Dissolution analysis of the KSD-processed solid dispersions showed increased dissolution rates and extent of supersaturation over the marketed generic MLX tablets. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points, which renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~ n floating point operations (flops), where n is the number of spatial locations (per iteration). We compare these methods and provide some insight into their methodological underpinnings.

  13. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented, and several new approaches are discussed. Peculiar properties of hyperspectral image processing, such as multifold signal-to-noise reduction, atmospheric distortions, access to the spectral characteristics of every image point, and the high dimensionality of the data, were studied. Different measures of similarity between individual hyperspectral image points, and the effect of additive uncorrelated noise on these measures, were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered; it was shown that contours are detected much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions was proposed, which makes it possible to solve the stated problem by analysis of a distorted image, in contrast to analytical multiparametric models. Several algorithms for integrating spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
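
    A hedged sketch of one common similarity measure, the spectral angle (the authors' noise-robust measure is not reproduced here): brightness scaling leaves the angle unchanged, while additive uncorrelated noise inflates it, illustrating the sensitivity the abstract describes. All spectra below are synthetic.

        import numpy as np

        def spectral_angle(a, b):
            """Angle between two spectra, insensitive to overall brightness."""
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        rng = np.random.default_rng(6)
        bands = 200
        s = np.abs(np.sin(np.linspace(0, 3, bands))) + 0.5   # reference spectrum
        same = 1.3 * s                                        # same material, brighter
        noisy = s + rng.normal(0, 0.05, bands)                # additive uncorrelated noise

        print("angle(s, 1.3*s) =", round(spectral_angle(s, same), 4))    # ~0
        print("angle(s, noisy) =", round(spectral_angle(s, noisy), 4))   # inflated by noise
        print("euclid(s, 1.3*s) =", round(np.linalg.norm(s - same), 3))  # large despite same material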

  14. Universal statistics of terminal dynamics before collapse

    NASA Astrophysics Data System (ADS)

    Lenner, Nicolas; Eule, Stephan; Wolf, Fred

    Recent biological developments have drastically increased both the precision and the amount of generated data, allowing a switch from characterizing only the mean value of the process under consideration to an analysis of the whole ensemble, exploiting the stochastic nature of biology. We focus on the general class of non-equilibrium processes with distinguished terminal points, as found in cell fate decisions, checkpoints or cognitive neuroscience. Aligning the data to a terminal point (e.g. represented as an absorbing boundary) allows us to devise a general methodology to characterize and reverse-engineer the terminating history. Using a small-noise approximation, we derive the mean, variance and covariance of the aligned data for general finite-time singularities.

  15. 40 CFR 424.30 - Applicability; description of the slag processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... processing subcategory. 424.30 Section 424.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Slag Processing Subcategory § 424.30 Applicability; description of the slag processing subcategory. The provisions of this...

  16. About plasma points' generation in Z-pinch

    NASA Astrophysics Data System (ADS)

    Afonin, V. I.; Potapov, A. V.; Lazarchuk, V. P.; Murugov, V. M.; Senik, A. V.

    1997-05-01

    The results of streak-tube studies (in the visible and x-ray ranges) of the dynamics of a fast Z-pinch formed by the explosion of a metal wire in the diode of a high-current generator are presented. The current amplitude in the load reached ~180 kA with a rise time of ~50 ns. Analysis of the results points to the capability of controlling the generation of hot plasma points in the Z-pinch.

  17. The implement of Talmud property allocation algorithm based on graphic point-segment way

    NASA Astrophysics Data System (ADS)

    Cen, Haifeng

    2017-04-01

    Under the guidance of the theory of the Talmud allocation scheme, this paper analyzes the algorithm's implementation process from the perspective of a graphic point-segment representation and designs a point-segment Talmud property allocation algorithm. The core of the allocation algorithm is implemented in Java, and a visual interface is built using Android programming.
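
    The record does not include the Java source; as an illustrative stand-in under that caveat, here is a Python sketch of the classical Aumann-Maschler Talmud rule that such an allocator computes, checked against the Talmud's well-known 100/200/300 example.

        def talmud_allocation(estate, claims, tol=1e-9):
            """Aumann-Maschler Talmud rule for dividing an estate among claimants.

            If estate <= sum(claims)/2: claimant i gets min(c_i/2, lam), with lam
            chosen so awards sum to the estate (constrained equal awards on
            half-claims). Otherwise claimant i gets c_i - min(c_i/2, lam)
            (constrained equal losses on half-claims)."""
            half_total = sum(claims) / 2.0

            def awards(lam, losses):
                if losses:
                    return [c - min(c / 2.0, lam) for c in claims]
                return [min(c / 2.0, lam) for c in claims]

            losses = estate > half_total
            lo, hi = 0.0, max(claims) / 2.0 + 1.0
            while hi - lo > tol:                 # bisection on lam
                mid = (lo + hi) / 2.0
                total = sum(awards(mid, losses))
                # Awards grow with lam; losses-mode awards shrink with lam.
                if (total < estate) != losses:
                    lo = mid
                else:
                    hi = mid
            return awards((lo + hi) / 2.0, losses)

        # The Talmud's example: claims 100, 200, 300 against estates 100, 200, 300.
        for e in (100, 200, 300):
            print(e, [round(a, 2) for a in talmud_allocation(e, [100, 200, 300])])
        # Expected: [33.33, 33.33, 33.33], [50, 75, 75], [50, 100, 150]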

  18. 40 CFR 414.90 - Applicability; description of the subcategory of direct discharge point sources that use end-of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...

  19. Seeing in three dimensions: correlation and triangulation of Mars Exploration Rover imagery

    NASA Technical Reports Server (NTRS)

    Deen, Robert; Lorre, Jean

    2005-01-01

    This paper describes in detail the middle parts of the ground-based terrain derivation process: correlation, which finds matching points in the stereo pair, and triangulation, which converts those points to XYZ coordinates.
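
    A hedged sketch of the triangulation step for an idealized rectified stereo pair: the focal length, baseline, principal point and matched pixel below are invented, and the actual MER pipeline uses a more elaborate camera model.

        import numpy as np

        def triangulate(xl, yl, xr, f=1000.0, B=0.30, cx=512.0, cy=512.0):
            """XYZ (camera frame, metres) from one matched point in a rectified
            stereo pair: disparity d = xl - xr, depth Z = f*B/d."""
            d = xl - xr
            if d <= 0:
                raise ValueError("non-positive disparity: point at/behind infinity")
            Z = f * B / d
            X = (xl - cx) * Z / f
            Y = (yl - cy) * Z / f
            return np.array([X, Y, Z])

        # The correlation step would supply the match (xl, yl) <-> (xr, ~yl);
        # the pixel coordinates here are invented.
        print(triangulate(620.0, 500.0, 601.0))   # ~15.8 m in front of the camera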

  20. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high-quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in the production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, so it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points then makes the production of mapped data products significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient, and illustrate the resultant effects on the spatial continuity and uncertainty of maps of lunar brightness temperatures. We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
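
    The minimum-storage accumulation underlying such binning can be sketched as follows, hedged in that the bin assignment here is a toy nearest-centre lookup rather than Diviner's geodesic icosahedral subdivision with modeled fields of view, and all data are synthetic: only per-bin sums and counts are stored, never the point list.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)

        # Toy grid: random unit vectors standing in for geodesic-grid cell centres.
        centres = rng.normal(size=(5000, 3))
        centres /= np.linalg.norm(centres, axis=1, keepdims=True)
        tree = cKDTree(centres)

        # Stream of observations: directions on the sphere + brightness temperatures.
        obs_dirs = rng.normal(size=(200000, 3))
        obs_dirs /= np.linalg.norm(obs_dirs, axis=1, keepdims=True)
        tb = 200.0 + 50.0 * obs_dirs[:, 2] + rng.normal(0, 2, len(obs_dirs))

        # Assign each observation to its nearest cell, then accumulate in place:
        # storage is O(#cells), independent of the number of observations.
        _, idx = tree.query(obs_dirs)
        sums = np.zeros(len(centres))
        counts = np.zeros(len(centres))
        np.add.at(sums, idx, tb)
        np.add.at(counts, idx, 1)

        mean_tb = np.divide(sums, counts, out=np.full_like(sums, np.nan),
                            where=counts > 0)   # leave empty cells as NaN
        print("cells with data:", int((counts > 0).sum()), "of", len(centres))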

  1. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem, and a controller structure is defined on this basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion on which a tradeoff should be made, and make input changes to improve all other criteria. If no tradeoff has to be made to move to a new operating point, then the current point was not optimal in any sense. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
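
    The algorithm's branch between cases (1) and (2) is a Pareto-dominance test. A minimal sketch, assuming all quality criteria are expressed so that larger is better and candidate operating points are drawn from process data:

      def dominates(a, b):
          # a Pareto-dominates b if it is no worse on every criterion
          # and strictly better on at least one (maximization assumed)
          return all(x >= y for x, y in zip(a, b)) and \
                 any(x > y for x, y in zip(a, b))

      def improving_moves(current, candidates):
          # candidate operating points that improve all criteria at once;
          # an empty result means any move requires a tradeoff (case 2)
          return [c for c in candidates if dominates(c, current)]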

  2. Food safety and nutritional quality for the prevention of non communicable diseases: the Nutrient, hazard Analysis and Critical Control Point process (NACCP).

    PubMed

    Di Renzo, Laura; Colica, Carmen; Carraro, Alberto; Cenci Goga, Beniamino; Marsella, Luigi Tonino; Botta, Roberto; Colombo, Maria Laura; Gratteri, Santo; Chang, Ting Fa Margherita; Droli, Maurizio; Sarlo, Francesca; De Lorenzo, Antonino

    2015-04-23

    The important role of food and nutrition in public health is being increasingly recognized as crucial for its potential impact on health-related quality of life and the economy, both at the societal and individual levels. The prevalence of non-communicable diseases calls for a reformulation of our view of food. The Hazard Analysis and Critical Control Point (HACCP) system, first implemented in the EU with the Directive 43/93/CEE, later replaced by Regulation CE 178/2002 and Regulation CE 852/2004, is the internationally agreed approach for food safety control. Our aim is to develop a new procedure for the assessment of the Nutrient, hazard Analysis and Critical Control Point (NACCP) process, for total quality management (TMQ), and to optimize nutritional levels. NACCP was based on four general principles: i) guarantee of health maintenance; ii) evaluation and assurance of the nutritional quality of food and TMQ; iii) correct information to the consumers; iv) an ethical profit. There are three stages for the application of the NACCP process: 1) application of NACCP for quality principles; 2) application of NACCP for health principles; 3) implementation of the NACCP process. The actions are: 1) identification of nutritional markers, which must remain intact throughout the food supply chain; 2) identification of critical control points which must be monitored in order to minimize the likelihood of a reduction in quality; 3) establishment of critical limits to maintain adequate nutrient levels; 4) establishment and implementation of effective monitoring procedures for critical control points; 5) establishment of corrective actions; 6) identification of metabolic biomarkers; 7) evaluation of the effects of food intake through the application of specific clinical trials; 8) establishment of procedures for consumer information; 9) implementation of the Health Claim Regulation EU 1924/2006; 10) starting a training program. We calculate the risk assessment as follows: Risk (R) = probability (P) × damage (D). The NACCP process considers the entire food supply chain "from farm to consumer"; at each point of the chain it is necessary to implement tight monitoring in order to guarantee optimal nutritional quality.
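
    The quoted formula supports a simple ranking of control points by risk. A hypothetical sketch (names and values are illustrative, not from the paper):

      def rank_by_risk(control_points):
          # control_points: {name: (probability, damage)}; R = P * D
          return sorted(control_points,
                        key=lambda k: control_points[k][0] * control_points[k][1],
                        reverse=True)

      # e.g. rank_by_risk({"cold chain": (0.2, 8), "labelling": (0.5, 2)})
      # -> ["cold chain", "labelling"]   (R = 1.6 vs. R = 1.0)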

  3. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors such as their multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited because only image information can be exploited. Hence, we propose a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated on multiple case studies, the proposed method demonstrates its validity and its high potential for precision improvement.

  4. Ka-band monopulse antenna-pointing systems analysis and simulation

    NASA Technical Reports Server (NTRS)

    Lo, V. Y.

    1996-01-01

    NASA's Deep Space Network (DSN) has been using both 70-m and 34-m reflector antennas to communicate with spacecraft at S-band (2.3 GHz) and X-band (8.45 GHz). To improve the quality of telecommunication and to meet future mission requirements, JPL has been developing 34-m Ka-band (32-GHz) beam waveguide antennas. Presently, antenna pointing operates in either the open-loop mode with blind pointing using navigation predicts or the closed-loop mode with conical scan (conscan). Pointing accuracy under normal conscan operating conditions is in the neighborhood of 5 mdeg. This is acceptable at S- and X-bands, but not sufficient at Ka-band. Due to the narrow beamwidth at Ka-band, it is important to improve pointing accuracy significantly (to approximately 2 mdeg). Monopulse antenna tracking is one scheme being developed to meet the stringent pointing-accuracy requirement at Ka-band. Other advantages of monopulse tracking include low sensitivity to signal amplitude fluctuations as well as single-pulse processing for acquisition and tracking. This article presents system modeling, signal processing, simulation, and implementation of a Ka-band monopulse tracking feed for antennas in NASA/DSN ground stations.
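
    The error signal at the heart of monopulse tracking is the normalized difference-over-sum channel ratio, whose real part is proportional to the angular offset in the small-offset regime. A schematic sketch (sum_ch and delta_ch are assumed complex baseband sample arrays; the calibration slope k_m is hypothetical):

      import numpy as np

      def monopulse_offset(sum_ch, delta_ch, k_m=1.0):
          # average over samples for noise robustness, then take the
          # real part of delta/sum as the angle-error estimate
          ratio = np.mean(delta_ch * np.conj(sum_ch)) / np.mean(np.abs(sum_ch) ** 2)
          return k_m * np.real(ratio)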

  5. Ephemeral active regions and coronal bright points: A solar maximum Mission 2 guest investigator study

    NASA Technical Reports Server (NTRS)

    Harvey, K. L.; Tang, F. Y. C.; Gaizauskas, V.; Poland, A. I.

    1986-01-01

    A dominant association of coronal bright points (as seen in He λ10830) with the approach and subsequent disappearance of opposite-polarity magnetic network was confirmed. While coronal bright points do occur with ephemeral regions, this association is a factor of 2 to 4 weaker than with sites of disappearing magnetic flux. The intensity variations seen in He I λ10830 are intermittent and often rapid, varying over the 3 minute time resolution of the data; their bright point counterparts at C IV λ1548 and at 20 cm show similar, though not always coincident, time variations. Ejecta are associated with about 1/3 of the dark points and are evident in the C IV and H alpha data. These results support the idea that the anti-correlation of X-ray bright points with the solar cycle can be explained by the correlation of these coronal emission structures with sites of cancelling flux, indicating that, in some cases, the process of magnetic flux removal results in the release of energy. The rapid and variable intensity changes suggest that this process works intermittently.

  6. Feature-based attention to unconscious shapes and colors.

    PubMed

    Schmidt, Filipp; Schmidt, Thomas

    2010-08-01

    Two experiments employed feature-based attention to modulate the impact of completely masked primes on subsequent pointing responses. Participants processed a color cue to select a pair of possible pointing targets out of multiple targets on the basis of their color, and then pointed to the one of those two targets with a prespecified shape. All target pairs were preceded by prime pairs triggering either the correct or the opposite response. The time interval between cue and primes was varied to modulate the time course of feature-based attentional selection. In a second experiment, the roles of color and shape were switched. Pointing trajectories showed large priming effects that were amplified by feature-based attention, indicating that attention modulated the earliest phases of motor output. Priming effects as well as their attentional modulation occurred even though participants remained unable to identify the primes, indicating distinct processes underlying visual awareness, attention, and response control.

  7. [Risk management--a new aspect of quality assessment in intensive care medicine: first results of an analysis of the DIVI's interdisciplinary quality assessment research group].

    PubMed

    Stiletto, R; Röthke, M; Schäfer, E; Lefering, R; Waydhas, Ch

    2006-10-01

    Patient safety has become one of the major aspects of clinical management in recent years. Research has focused primarily on malpractice. In contrast to process analysis in non-medical fields, the analysis of errors during the inpatient treatment period has been neglected. Patient risk management can be defined as a structured procedure in a clinical unit with the aim of reducing harmful events. A risk point model was created based on a Delphi process and founded on the DIVI data register. The risk point model was evaluated in clinically working ICU departments participating in the register database. The results of the risk point evaluation will be integrated in the next database update. This might be a step towards improving the reliability of the register for measuring quality assessment in the ICU.

  8. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal using the built-in camera of smartphones or a photoplethysmograph. An optimization of the signal preprocessing stage was also performed. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for smartphones and the photoplethysmograph, and examine whether the error of smartphones can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
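
    A minimal sketch of the first-derivative-peak fiducial detection, using cutoff frequencies inside the ranges reported above; the filter order and the peak threshold are illustrative assumptions, not values from the paper:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def pulse_arrivals(ppg, fs, hp=0.5, lp=2.7):
          # band-pass the PPG inside the optimal cutoff ranges found above
          b, a = butter(2, [hp / (fs / 2), lp / (fs / 2)], btype="band")
          d1 = np.gradient(filtfilt(b, a, ppg))
          thr = 0.5 * np.percentile(d1, 99)        # illustrative threshold
          # local maxima of the first derivative above the threshold
          peaks = np.where((d1[1:-1] > d1[:-2]) &
                           (d1[1:-1] >= d1[2:]) &
                           (d1[1:-1] > thr))[0] + 1
          return peaks  # sample indices of pulse-arrival fiducial points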

  9. Using multifield measurements to eliminate alignment degeneracies in the JWST testbed telescope

    NASA Astrophysics Data System (ADS)

    Sabatke, Erin; Acton, Scott; Schwenker, John; Towell, Tim; Carey, Larkin; Shields, Duncan; Contos, Adam; Leviton, Doug

    2007-09-01

    The primary mirror of the James Webb Space Telescope (JWST) consists of 18 segments and is 6.6 meters in diameter. A sequence of commissioning steps is carried out at a single field point to align the segments. At that single field point, though, the segmented primary mirror can compensate for aberrations caused by misalignments of the remaining mirrors. The misalignments can be detected in the wavefronts of off-axis field points. The Multifield (MF) step in the commissioning process surveys five field points and uses a simple matrix multiplication to calculate corrected positions for the secondary and primary mirrors. A demonstration of the Multifield process was carried out on the JWST Testbed Telescope (TBT). The results show that the Multifield algorithm is capable of reducing the field dependency of the TBT to about 20 nm RMS, relative to the TBT design nominal field dependency.
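
    The Multifield correction reduces to a small linear-algebra problem. A hypothetical sketch: S (a sensitivity matrix relating mirror alignment states to multifield wavefront coefficients) and w (the stacked wavefront errors measured at the five field points) are assumed inputs, not the flight algorithm's actual data structures:

      import numpy as np

      def multifield_correction(S, w):
          # solve for the alignment move x that minimizes ||S x + w||,
          # i.e. the correction that best cancels the measured
          # field-dependent wavefront errors
          x, *_ = np.linalg.lstsq(S, -w, rcond=None)
          return x  # corrected positions for secondary/primary mirrors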

  10. Examining the Process of Responding to Circumplex Scales of Interpersonal Values Items: Should Ideal Point Scoring Methods Be Considered?

    PubMed

    Ling, Ying; Zhang, Minqiang; Locke, Kenneth D; Li, Guangming; Li, Zonglong

    2016-01-01

    The Circumplex Scales of Interpersonal Values (CSIV) is a 64-item self-report measure of goals from each octant of the interpersonal circumplex. We used item response theory methods to compare whether dominance models or ideal point models best described how people respond to CSIV items. Specifically, we fit a polytomous dominance model called the generalized partial credit model and an ideal point model of similar complexity called the generalized graded unfolding model to the responses of 1,893 college students. The results of both graphical comparisons of item characteristic curves and statistical comparisons of model fit suggested that an ideal point model best describes the process of responding to CSIV items. The different models produced different rank orderings of high-scoring respondents, but overall the models did not differ in their prediction of criterion variables (agentic and communal interpersonal traits and implicit motives).

  11. Friction surfaced Stellite6 coatings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, K. Prasad; Damodaram, R.; Rafi, H. Khalid, E-mail: khalidrafi@gmail.com

    2012-08-15

    Solid state Stellite6 coatings were deposited on a steel substrate by friction surfacing and compared with a Stellite6 cast rod and with coatings deposited by gas tungsten arc and plasma transferred arc welding processes. Friction surfaced coatings exhibited finer and more uniformly distributed carbides and were characterized by the absence of solidification structure and by compositional homogeneity compared to the cast rod, gas tungsten arc and plasma transferred coatings. The friction surfaced coating showed relatively higher hardness. X-ray diffraction of samples showed only face-centered cubic Co peaks, while the cold worked coating also showed hexagonally close-packed Co. - Highlights: Stellite6 used as coating material for friction surfacing. Friction surfaced (FS) coatings compared with casting, GTA and PTA processes. Finer and more uniformly distributed carbides in friction surfaced coatings. Absence of melting results in compositional homogeneity in FS Stellite6 coatings.

  12. 40 CFR 408.250 - Applicability; description of the Pacific Coast hand-shucked oyster processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Pacific Coast hand-shucked oyster processing subcategory. 408.250 Section 408.250 Protection of... SEAFOOD PROCESSING POINT SOURCE CATEGORY Pacific Coast Hand-Shucked Oyster Processing Subcategory § 408.250 Applicability; description of the Pacific Coast hand-shucked oyster processing subcategory. The...

  13. 40 CFR 408.270 - Applicability; description of the steamed and canned oyster processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... steamed and canned oyster processing subcategory. 408.270 Section 408.270 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Steamed and Canned Oyster Processing Subcategory § 408.270 Applicability; description of the steamed and canned oyster processing subcategory. The provisions of this subpart are...

  14. 40 CFR 408.210 - Applicability; description of the non-Alaskan conventional bottom fish processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-Alaskan conventional bottom fish processing subcategory. 408.210 Section 408.210 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Conventional Bottom Fish Processing Subcategory § 408.210 Applicability; description of the non-Alaskan conventional bottom fish processing subcategory. The provisions of...

  15. 40 CFR 408.210 - Applicability; description of the non-Alaskan conventional bottom fish processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-Alaskan conventional bottom fish processing subcategory. 408.210 Section 408.210 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Conventional Bottom Fish Processing Subcategory § 408.210 Applicability; description of the non-Alaskan conventional bottom fish processing subcategory. The provisions of...

  16. 40 CFR 408.220 - Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-Alaskan mechanized bottom fish processing subcategory. 408.220 Section 408.220 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Mechanized Bottom Fish Processing Subcategory § 408.220 Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory. The provisions of...

  17. 40 CFR 408.210 - Applicability; description of the non-Alaskan conventional bottom fish processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-Alaskan conventional bottom fish processing subcategory. 408.210 Section 408.210 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Conventional Bottom Fish Processing Subcategory § 408.210 Applicability; description of the non-Alaskan conventional bottom fish processing subcategory. The provisions of...

  18. 40 CFR 408.210 - Applicability; description of the non-Alaskan conventional bottom fish processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-Alaskan conventional bottom fish processing subcategory. 408.210 Section 408.210 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Conventional Bottom Fish Processing Subcategory § 408.210 Applicability; description of the non-Alaskan conventional bottom fish processing subcategory. The provisions of...

  19. 40 CFR 408.220 - Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-Alaskan mechanized bottom fish processing subcategory. 408.220 Section 408.220 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Mechanized Bottom Fish Processing Subcategory § 408.220 Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory. The provisions of...

  20. 40 CFR 408.220 - Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-Alaskan mechanized bottom fish processing subcategory. 408.220 Section 408.220 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Mechanized Bottom Fish Processing Subcategory § 408.220 Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory. The provisions of...

  1. 40 CFR 408.220 - Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-Alaskan mechanized bottom fish processing subcategory. 408.220 Section 408.220 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Mechanized Bottom Fish Processing Subcategory § 408.220 Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory. The provisions of...

  2. 40 CFR 408.90 - Applicability; description of the non-remote Alaskan shrimp processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-remote Alaskan shrimp processing subcategory. 408.90 Section 408.90 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Remote Alaskan Shrimp Processing Subcategory § 408.90 Applicability; description of the non-remote Alaskan shrimp processing subcategory. The provisions of this subpart are...

  3. 40 CFR 408.130 - Applicability; description of the breaded shrimp processing in the contiguous States subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... breaded shrimp processing in the contiguous States subcategory. 408.130 Section 408.130 Protection of... SEAFOOD PROCESSING POINT SOURCE CATEGORY Breaded Shrimp Processing in the Contiguous States Subcategory § 408.130 Applicability; description of the breaded shrimp processing in the contiguous States...

  4. 40 CFR 408.130 - Applicability; description of the breaded shrimp processing in the contiguous States subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... breaded shrimp processing in the contiguous States subcategory. 408.130 Section 408.130 Protection of... SEAFOOD PROCESSING POINT SOURCE CATEGORY Breaded Shrimp Processing in the Contiguous States Subcategory § 408.130 Applicability; description of the breaded shrimp processing in the contiguous States...

  5. 40 CFR 408.110 - Applicability; description of the Northern shrimp processing in the contiguous States subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Northern shrimp processing in the contiguous States subcategory. 408.110 Section 408.110 Protection of... SEAFOOD PROCESSING POINT SOURCE CATEGORY Northern Shrimp Processing in the Contiguous States Subcategory § 408.110 Applicability; description of the Northern shrimp processing in the contiguous States...

  6. 40 CFR 408.110 - Applicability; description of the Northern shrimp processing in the contiguous States subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Northern shrimp processing in the contiguous States subcategory. 408.110 Section 408.110 Protection of... SEAFOOD PROCESSING POINT SOURCE CATEGORY Northern Shrimp Processing in the Contiguous States Subcategory § 408.110 Applicability; description of the Northern shrimp processing in the contiguous States...

  7. 40 CFR 408.90 - Applicability; description of the non-remote Alaskan shrimp processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-remote Alaskan shrimp processing subcategory. 408.90 Section 408.90 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Remote Alaskan Shrimp Processing Subcategory § 408.90 Applicability; description of the non-remote Alaskan shrimp processing subcategory. The provisions of this subpart are...

  8. 40 CFR 408.220 - Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-Alaskan mechanized bottom fish processing subcategory. 408.220 Section 408.220 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Mechanized Bottom Fish Processing Subcategory § 408.220 Applicability; description of the non-Alaskan mechanized bottom fish processing subcategory. The provisions of...

  9. 40 CFR 408.210 - Applicability; description of the non-Alaskan conventional bottom fish processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-Alaskan conventional bottom fish processing subcategory. 408.210 Section 408.210 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Alaskan Conventional Bottom Fish Processing Subcategory § 408.210 Applicability; description of the non-Alaskan conventional bottom fish processing subcategory. The provisions of...

  10. 40 CFR 408.50 - Applicability; description of the remote Alaskan crab meat processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... remote Alaskan crab meat processing subcategory. 408.50 Section 408.50 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Remote Alaskan Crab Meat Processing Subcategory § 408.50 Applicability; description of the remote Alaskan crab meat processing subcategory. The provisions of this subpart are...

  11. 40 CFR 408.50 - Applicability; description of the remote Alaskan crab meat processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... remote Alaskan crab meat processing subcategory. 408.50 Section 408.50 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Remote Alaskan Crab Meat Processing Subcategory § 408.50 Applicability; description of the remote Alaskan crab meat processing subcategory. The provisions of this subpart are...

  12. 40 CFR 408.190 - Applicability; description of the West Coast mechanized salmon processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Coast mechanized salmon processing subcategory. 408.190 Section 408.190 Protection of Environment... PROCESSING POINT SOURCE CATEGORY West Coast Mechanized Salmon Processing Subcategory § 408.190 Applicability; description of the West Coast mechanized salmon processing subcategory. The provisions of this subpart are...

  13. 40 CFR 408.160 - Applicability; description of the Alaskan hand-butchered salmon processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Alaskan hand-butchered salmon processing subcategory. 408.160 Section 408.160 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Hand-Butchered Salmon Processing Subcategory § 408.160 Applicability; description of the Alaskan hand-butchered salmon processing subcategory. The provisions of this subpart are...

  14. 40 CFR 408.180 - Applicability; description of the West Coast hand-butchered salmon processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Coast hand-butchered salmon processing subcategory. 408.180 Section 408.180 Protection of Environment... PROCESSING POINT SOURCE CATEGORY West Coast Hand-Butchered Salmon Processing Subcategory § 408.180 Applicability; description of the West Coast hand-butchered salmon processing subcategory. The provisions of...

  15. 40 CFR 408.180 - Applicability; description of the West Coast hand-butchered salmon processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Coast hand-butchered salmon processing subcategory. 408.180 Section 408.180 Protection of Environment... PROCESSING POINT SOURCE CATEGORY West Coast Hand-Butchered Salmon Processing Subcategory § 408.180 Applicability; description of the West Coast hand-butchered salmon processing subcategory. The provisions of...

  16. 40 CFR 408.190 - Applicability; description of the West Coast mechanized salmon processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Coast mechanized salmon processing subcategory. 408.190 Section 408.190 Protection of Environment... PROCESSING POINT SOURCE CATEGORY West Coast Mechanized Salmon Processing Subcategory § 408.190 Applicability; description of the West Coast mechanized salmon processing subcategory. The provisions of this subpart are...

  17. 40 CFR 408.160 - Applicability; description of the Alaskan hand-butchered salmon processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Alaskan hand-butchered salmon processing subcategory. 408.160 Section 408.160 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Alaskan Hand-Butchered Salmon Processing Subcategory § 408.160 Applicability; description of the Alaskan hand-butchered salmon processing subcategory. The provisions of this subpart are...

  18. Implementation of Steiner point of fuzzy set.

    PubMed

    Liang, Jiuzhen; Wang, Dejiang

    2014-01-01

    This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set approaching the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Experiments on image processing are given, in which the two methods are applied to implement the Steiner point of a fuzzy image; both strategies show their own advantages in computing the Steiner point of a fuzzy set.

  19. A Novel Real-Time Reference Key Frame Scan Matching Method

    PubMed Central

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-01-01

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It depends on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, which indicates the potential use of the new algorithm in real-time systems. PMID:28481285

  20. Modeling of the reburning process using sewage sludge-derived syngas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werle, Sebastian, E-mail: sebastian.werle@polsl.pl

    2012-04-15

    Highlights: Gasification provides an attractive method for sewage sludge treatment. Gasification generates a fuel gas (syngas) which can be used as a reburning fuel. The reburning potential of sewage sludge gasification gases was defined. A numerical simulation of the co-combustion of syngas in a coal-fired boiler has been done. Calculations show that the analysed syngases can provide higher than 80% reduction of NOx. - Abstract: Gasification of sewage sludge can provide a clean and effective reburning fuel for combustion applications. The motivation of this work was to define the reburning potential of sewage sludge gasification gas (syngas). A numerical simulation of the co-combustion process of syngas in a hard coal-fired boiler was performed. All calculations were done using the Chemkin programme with a plug-flow reactor model, using the GRI-Mech 2.11 mechanism. The highest conversions of nitric oxide (NO) were obtained at temperatures of approximately 1000-1200 K. The combustion of hard coal with sewage sludge-derived syngas reduces NO emissions. The highest reduction efficiency (>90%) was achieved when the molar flow ratio of the syngas was 15%. Calculations show that the analysed syngas can provide better results than advanced reburning (connected with ammonia injection), which is a more complicated process.

  1. A COMPARISON OF TRANSIENT INFINITE ELEMENTS AND TRANSIENT KIRCHHOFF INTEGRAL METHODS FOR FAR FIELD ACOUSTIC ANALYSIS

    DOE PAGES

    WALSH, TIMOTHY F.; JONES, ANDREA; BHARDWAJ, MANOJ; ...

    2013-04-01

    Finite element analysis of transient acoustic phenomena on unbounded exterior domains is very common in engineering analysis. In these problems there is a common need to compute the acoustic pressure at points outside of the acoustic mesh, since meshing to points of interest is impractical in many scenarios. In aeroacoustic calculations, for example, the acoustic pressure may be required at tens or hundreds of meters from the structure. In these cases, a method is needed for post-processing the acoustic results to compute the response at far-field points. In this paper, we compare two methods for computing far-field acoustic pressures, one derived directly from the infinite element solution, and the other from the transient version of the Kirchhoff integral. Here, we show that the infinite element approach alleviates the large storage requirements that are typical of Kirchhoff integral and related procedures, and also does not suffer from loss of accuracy that is an inherent part of computing numerical derivatives in the Kirchhoff integral. In order to further speed up and streamline the process of computing the acoustic response at points outside of the mesh, we also address the nonlinear iterative procedure needed for locating parametric coordinates within the host infinite element of far-field points, the parallelization of the overall process, linear solver requirements, and system stability considerations.

  2. Image monitoring of pharmaceutical blending processes and the determination of an end point by using a portable near-infrared imaging device based on a polychromator-type near-infrared spectrometer with a high-speed and high-resolution photo diode array detector.

    PubMed

    Murayama, Kodai; Ishikawa, Daitaro; Genkawa, Takuma; Sugino, Hiroyuki; Komiyama, Makoto; Ozaki, Yukihiro

    2015-03-03

    In the present study we have developed a new version (ND-NIRs) of a polychromator-type near-infrared (NIR) spectrometer with a high-resolution photodiode array detector, which we had built previously (D-NIRs). The new version has four 5 W halogen lamps, compared with three lamps in the older version, and a condenser lens with a shorter focal length. The increased number of lamps and the shorter focal length of the condenser lens achieve a high signal-to-noise ratio and high-speed NIR imaging measurement. Using the ND-NIRs we carried out in-line monitoring of pharmaceutical blending and determined an end point of the blending process. Moreover, to determine a more accurate end point, a NIR image of the blending sample was acquired by means of a portable NIR imaging device based on the ND-NIRs. The imaging result demonstrated that a mixing time of 8 min is sufficient for homogeneous mixing. The present study has thus demonstrated that the ND-NIRs and the imaging system based on it hold considerable promise for process analysis.

  3. [Research on whole blending end-point evaluation method of Angong Niuhuang Wan based on QbD concept].

    PubMed

    Liu, Xiao-Na; Zheng, Qiu-Sheng; Che, Xiao-Qing; Wu, Zhi-Sheng; Qiao, Yan-Jiang

    2017-03-01

    The blending end-point determination of Angong Niuhuang Wan (AGNH) is a key technological problem. The control strategy based on the quality by design (QbD) concept proposes a whole blending end-point determination method and provides a methodology for blending Chinese materia medica containing mineral substances. Based on the QbD concept, laser-induced breakdown spectroscopy (LIBS) was used to assess the cinnabar, realgar and pearl powder blending of AGNH in a pilot-scale experiment, and especially the whole blending end point, in this study. The blending variability of the three mineral medicines, cinnabar, realgar and pearl powder, was measured by the moving window relative standard deviation (MWRSD) based on LIBS. The time profiles of realgar and pearl powder did not produce completely consistent results, but all of them reached even blending at the last blending stage, so the whole blending end point was determined. LIBS is a promising process analytical technology (PAT) for process control. Unlike other elemental determination technologies such as ICP-OES, LIBS does not need an elaborate digestion procedure, making it a promising and rapid technique for understanding the blending process of Chinese materia medica (CMM) containing cinnabar, realgar and other mineral traditional Chinese medicines. This study proposes a novel method for the research of large varieties of traditional Chinese medicines. Copyright© by the Chinese Pharmaceutical Association.
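
    A minimal sketch of the MWRSD statistic (illustrative; the window length and the end-point threshold are assumptions, and intensity would be an elemental LIBS line intensity tracked over consecutive blending time points):

      import numpy as np

      def mwrsd(intensity, window=5):
          # moving window relative standard deviation (%) of a LIBS line
          # intensity across consecutive blending time points
          out = []
          for i in range(len(intensity) - window + 1):
              w = np.asarray(intensity[i:i + window], dtype=float)
              out.append(100.0 * w.std() / w.mean())
          return np.array(out)

      # the blending end point would be declared once mwrsd(...) stays
      # below a preset threshold for all three mineral medicines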

  4. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K-means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups, such that data with the same characteristics are grouped into the same cluster and data with different characteristics are placed in other groups. The purpose of this clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, a main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may place two candidate centroids too close to each other. Therefore, for the determination of the starting points in K-means, the entropy method is used; this is a method for deriving weights and making a decision from a set of alternatives. Entropy is able to investigate the degree of discrimination among a multitude of data sets; under the entropy criterion, the criteria with the highest variation receive the highest weight. The entropy method can thus help the K-means process determine the starting points that are usually chosen at random, so that clustering with K-means converges faster: the iteration process is quicker than in standard K-means. As a worked example, using the postoperative patient dataset from the UCI Machine Learning Repository with only 12 records, the entropy-seeded method reaches the desired end result in only 2 iterations.
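
    The entropy weight computation at the core of the seeding step can be sketched as follows, under the standard entropy-weight formulation (the paper's exact seeding procedure may differ). X is an alternatives-by-criteria matrix of nonnegative values with at least one positive entry per criterion:

      import numpy as np

      def entropy_weights(X):
          # column-normalize to proportions p_ij
          P = X / X.sum(axis=0, keepdims=True)
          # zero entries contribute nothing to the entropy sum
          logP = np.where(P > 0, np.log(P), 0.0)
          e = -(P * logP).sum(axis=0) / np.log(X.shape[0])
          d = 1.0 - e                  # degree of diversification
          return d / d.sum()           # criteria with high variation get high weight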

  5. The effect of social marketing communication on safe driving.

    PubMed

    Yang, Dong-Jenn; Lin, Wan-Chen; Lo, Jyue-Yu

    2011-12-01

    Processing of cognition, affect, and intention was investigated in viewers of advertisements to prevent speeding while driving. Results indicated that anchoring-point messages had greater effects on viewers' cognition, attitude, and behavioral intention than did messages without anchoring points. Further, the changes in message anchoring points altered participants' perceptions of acceptable and unacceptable judgments: a higher anchoring point in the form of speeding mortality was more persuasive in promoting the idea of reducing driving speed. Implications for creation of effective safe driving communications are discussed.

  6. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.
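
    The point process formulation of resource selection can be sketched with a minimal log-linear Poisson process likelihood; this is a generic illustration, not the authors' space-time model. Here used_X holds habitat covariates at the telemetry locations, and quad_X, quad_w are quadrature points and weights approximating the integral of the intensity over the study area:

      import numpy as np
      from scipy.optimize import minimize

      def fit_ipp(used_X, quad_X, quad_w):
          # maximize log L(beta) = sum_i x_i'beta - sum_j w_j exp(x_j'beta),
          # the inhomogeneous Poisson process likelihood with
          # log-intensity lambda(s) = x(s)'beta
          def negloglik(beta):
              return -(used_X @ beta).sum() + (quad_w * np.exp(quad_X @ beta)).sum()
          beta0 = np.zeros(used_X.shape[1])
          return minimize(negloglik, beta0, method="BFGS").x  # selection coefficients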

  7. Self-Exciting Point Process Models of Civilian Deaths in Iraq

    DTIC Science & Technology

    2010-01-01

    Following the self-exciting point process models of Mohler, Short, Brantingham, Schoenberg, and Tita (2010), who analyze burglary and robbery data in Los Angeles, we propose that violence in Iraq arises from a combination of exogenous and endogenous effects, with spatial heterogeneity in the background rate. Related work has also been done in Short et al. (2009). Reference: Mohler, G. O., Short, M. B., Brantingham, P. J., Schoenberg, F. P., & Tita, G. E. (2010). Self-exciting point process modeling of crime.
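
    A minimal simulation of the kind of self-exciting (Hawkes) process underlying such models, via Ogata's thinning algorithm; the parameters are illustrative, with branching ratio alpha/beta < 1 for stationarity:

      import numpy as np

      def simulate_hawkes(mu, alpha, beta, T, seed=0):
          # Ogata thinning for lambda(t) = mu + alpha * sum_i exp(-beta (t - t_i));
          # between events the intensity only decays, so its value at the
          # current time is a valid upper bound for the next candidate
          rng = np.random.default_rng(seed)
          events = []
          t = 0.0
          def intensity(s):
              return mu + alpha * sum(np.exp(-beta * (s - ti)) for ti in events)
          while True:
              lam_bar = intensity(t)
              t += rng.exponential(1.0 / lam_bar)
              if t > T:
                  return np.asarray(events)
              if rng.uniform() < intensity(t) / lam_bar:
                  events.append(t)  # accepted event excites future intensity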

  8. Small-Scale Surf Zone Geometric Roughness

    DTIC Science & Technology

    2017-12-01

    and an image of the tie points can be seen (Figure 6: screenshot of the alignment process, showing the workspace on the left side). The remaining points are then processed into a dense cloud, producing the 3D surface (Figure 7: screenshot of the dense cloud process). Measurements of small-scale (O(mm)) geometric roughness (kf) associated with breaking wave foam were obtained within the surf zone.

  9. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially in the field of small surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources to capture actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes distances between corresponding points of two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. An agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors at two times (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in their shadowed areas and thus their gaps, so that merging them would provide a more consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset relative to the TLS point cloud. In order to derive the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration, and therefore data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data noise than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. Concluding, given the accuracy demands of high resolution surface reconstruction and the fact that ground control surveys can reach their limits both in time expenditure and terrain accessibility, the ICP algorithm represents a great tool to refine a rough initial alignment, and its different registration modules allow the method to be tailored to the quality of the input data.
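
    For reference, a minimal point-to-point ICP with the closed-form (Kabsch/SVD) rigid alignment per iteration; a bare-bones sketch without the weighting and rejection sub-steps discussed above:

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_point_to_point(src, dst, iters=50, tol=1e-6):
          # align the Nx3 source cloud to the Mx3 target cloud
          tree = cKDTree(dst)
          R, t = np.eye(3), np.zeros(3)
          cur, prev_err = src.copy(), np.inf
          for _ in range(iters):
              d, idx = tree.query(cur)          # nearest-neighbor matching
              matched = dst[idx]
              # closed-form rigid alignment (Kabsch)
              mu_s, mu_d = cur.mean(0), matched.mean(0)
              H = (cur - mu_s).T @ (matched - mu_d)
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
              R_step = Vt.T @ D @ U.T           # reflection-safe rotation
              t_step = mu_d - R_step @ mu_s
              cur = cur @ R_step.T + t_step
              R, t = R_step @ R, R_step @ t + t_step
              err = d.mean()
              if abs(prev_err - err) < tol:     # converged on mean distance
                  break
              prev_err = err
          return R, t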

  10. Application of Bayesian techniques to model the burden of human salmonellosis attributable to U.S. food commodities at the point of processing: adaptation of a Danish model.

    PubMed

    Guo, Chuanfa; Hoekstra, Robert M; Schroeder, Carl M; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J; Cole, Dana

    2011-04-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. © Mary Ann Liebert, Inc.

  11. Acute Alcohol Consumption Impairs Controlled but Not Automatic Processes in a Psychophysical Pointing Paradigm

    PubMed Central

    Johnston, Kevin; Timney, Brian; Goodale, Melvyn A.

    2013-01-01

    Numerous studies have investigated the effects of alcohol consumption on controlled and automatic cognitive processes. Such studies have shown that alcohol impairs performance on tasks requiring conscious, intentional control, while leaving automatic performance relatively intact. Here, we sought to extend these findings to aspects of visuomotor control by investigating the effects of alcohol in a visuomotor pointing paradigm that allowed us to separate the influence of controlled and automatic processes. Six male participants were assigned to an experimental “correction” condition in which they were instructed to point at a visual target as quickly and accurately as possible. On a small percentage of trials, the target “jumped” to a new location. On these trials, the participants’ task was to amend their movement such that they pointed to the new target location. A second group of 6 participants were assigned to a “countermanding” condition, in which they were instructed to terminate their movements upon detection of target “jumps”. In both the correction and countermanding conditions, participants served as their own controls, taking part in alcohol and no-alcohol conditions on separate days. Alcohol had no effect on participants’ ability to correct movements “in flight”, but impaired the ability to withhold such automatic corrections. Our data support the notion that alcohol selectively impairs controlled processes in the visuomotor domain. PMID:23861934

  12. Parameter optimization and stretch enhancement of AISI 316 sheet using rapid prototyping technique

    NASA Astrophysics Data System (ADS)

    Moayedfar, M.; Rani, A. M.; Hanaei, H.; Ahmad, A.; Tale, A.

    2017-10-01

    Incremental sheet forming is a flexible manufacturing process which uses point-to-point indenter forces to shape a sheet metal workpiece into manufactured parts in batch production series. However, a problem sometimes arising in this process is the material's low plastic point in the stress-strain diagram, which limits the amount of stretching achievable before the ultimate tensile strain is reached. Hence, a set of experiments was designed to find the optimum forming parameters for optimum sheet thickness distribution, while both sides of the sheet were considered for surface quality improvement. A five-axis high-speed CNC milling machine was employed to deliver the required motion based on the programmed path, while the clamping system holding the sheet metal was a blank mould. Finally, an electron microscope and a roughness tester were used to evaluate the surface structure of the final parts, to reveal any defects caused during the forming process, and to measure the roughness of the final part surface. The best interaction between parameters was obtained with optimum values yielding a maximum sheet thickness distribution of 4.211e-01 logarithmic elongation at a forming depth of 24 mm with respect to the design. This study demonstrates that this rapid forming method offers an alternative route to a surface quality improvement of 65%, with a low probability of cracks and of crystal structure changes.

  13. Application of Bayesian Techniques to Model the Burden of Human Salmonellosis Attributable to U.S. Food Commodities at the Point of Processing: Adaptation of a Danish Model

    PubMed Central

    Guo, Chuanfa; Hoekstra, Robert M.; Schroeder, Carl M.; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G.; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J.

    2011-01-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. PMID:21235394

  14. Systems view of adipogenesis via novel omics-driven and tissue-specific activity scoring of network functional modules

    NASA Astrophysics Data System (ADS)

    Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado

    2016-07-01

    The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools are available that can generate lists of pathways ranked by statistical significance (i.e. by p-value), while ideally it would be desirable to functionally score the pathways relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SBGS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform time-point analysis by comparing each time point against the control (0 h) and time-lapse analysis by comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes between each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented into a web-based, freely available resource associated with novel, easy to read visualization of omics data sets and network modules.

  15. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    PubMed

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape feature classification from 3D lidar data is very relevant because it is largely used as a fundamental step towards higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing the neighborhood of points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation, where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors, as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood computation.
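
    A minimal sketch of the voxel-based neighborhood and the PCA shape features (an illustration of the framework's idea, not the authors' implementation; voxel_size is an assumed parameter, and each voxel is assumed to contain at least a few non-coincident points):

      import numpy as np

      def voxel_labels(points, voxel_size):
          # map each 3D point to the index of its non-overlapping voxel
          keys = np.floor(points / voxel_size).astype(np.int64)
          _, inverse = np.unique(keys, axis=0, return_inverse=True)
          return inverse  # shared index = shared support region

      def shape_features(pts):
          # ordered covariance eigenvalues l1 >= l2 >= l3 characterize
          # tubular, planar and scatter support regions
          l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
          return np.array([(l1 - l2) / l1,   # linearity (tubular)
                           (l2 - l3) / l1,   # planarity
                           l3 / l1])         # scattering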

  16. Information technology: changing nursing processes at the point-of-care.

    PubMed

    Courtney, Karen L; Demiris, George; Alexander, Greg L

    2005-01-01

    Changing societal demographics, increasing complexity of healthcare knowledge, and growing nursing shortages have led healthcare strategists to call for a redesign of the healthcare system. Embedded within most redesign recommendations is the increased use of technology to make nursing practice more efficient. However, information technology (IT) has the potential to go beyond simple efficiency increases. If IT is perceived truly as a part of the redesign of healthcare delivery, rather than simply the automation of existing processes, then it can change nursing processes within institutions and, furthermore, change the point-of-care between nurses and patients. Nurses' adoption of technology in the workplace results from interactions among technical skills, social acceptance, and workplace culture. Nurses' information needs not only influence their adoption of particular technologies but also shape their design. The objective of this article is to illustrate how IT can change not only nursing practice and processes but also the point-of-care. A case study of the use of IT by nurses in telehomecare is presented and administrative implications are discussed.

  17. Study on the high-frequency laser measurement of slot surface difference

    NASA Astrophysics Data System (ADS)

    Bing, Jia; Lv, Qiongying; Cao, Guohua

    2017-10-01

    To measure slot surface difference in large-scale mechanical assembly processes, a double-galvanometer pulsed laser scanning system was designed based on high-frequency laser scanning technology and the laser detection imaging principle. The scanning system architecture consists of three parts: laser ranging, mechanical scanning, and data acquisition and processing. The laser ranging part uses a high-frequency laser rangefinder to measure distance information over the target's shape, producing a large volume of point cloud data. The mechanical scanning part comprises a high-speed rotary table, a high-speed transit and the related structural design, so that the whole system performs three-dimensional laser scanning of the target along the designed scanning path. The data processing part, built around an FPGA hardware core with LabVIEW software, processes the point cloud data collected by the laser rangefinder at high speed and fits the point cloud data to establish a three-dimensional model of the target, thereby realizing laser scanning imaging.

  18. Transport phenomena in helical edge state interferometers: A Green's function approach

    NASA Astrophysics Data System (ADS)

    Rizzo, Bruno; Arrachea, Liliana; Moskalets, Michael

    2013-10-01

    We analyze the current and the shot noise of an electron interferometer made of the helical edge states of a two-dimensional topological insulator within the framework of the nonequilibrium Green's function formalism. We study, in detail, setups with a single and with two quantum point contacts inducing scattering between the different edge states. We consider processes preserving the spin as well as the effect of spin-flip scattering. In the case of a single quantum point contact, a simple test based on the shot-noise measurement is proposed to quantify the strength of the spin-flip scattering. In the case of two quantum point contacts with the additional ingredient of gate voltages applied within a finite-size region at the top and bottom edges of the sample, we identify two types of interference processes in the behavior of the currents and the noise. One such process is analogous to that taking place in a Fabry-Pérot interferometer, while the second one corresponds to a configuration similar to a Mach-Zehnder interferometer. In the helical interferometer, these two processes compete.

  19. Inhomogeneous Point-Processes to Instantaneously Assess Affective Haptic Perception through Heartbeat Dynamics Information

    NASA Astrophysics Data System (ADS)

    Valenza, G.; Greco, A.; Citi, L.; Bianchi, M.; Barbieri, R.; Scilingo, E. P.

    2016-06-01

    This study proposes the application of a comprehensive signal processing framework, based on inhomogeneous point-process models of heartbeat dynamics, to instantaneously assess affective haptic perception using electrocardiogram-derived information exclusively. The framework relies on inverse-Gaussian point-processes with Laguerre expansion of the nonlinear Wiener-Volterra kernels, accounting for the long-term information given by the past heartbeat events. Up to cubic-order nonlinearities allow for an instantaneous estimation of the dynamic spectrum and bispectrum of the considered cardiovascular dynamics, as well as for instantaneous measures of complexity, through Lyapunov exponents and entropy. Short-term caress-like stimuli were administered for 4.3-25 seconds on the forearms of 32 healthy volunteers (16 females) through a wearable haptic device, by selectively superimposing two levels of force, 2 N and 6 N, and two levels of velocity, 9.4 mm/s and 65 mm/s. Results demonstrated that our instantaneous linear and nonlinear features were able to finely characterize the affective haptic perception, with a recognition accuracy of 69.79% along the force dimension, and 81.25% along the velocity dimension.
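
    As a hedged illustration of the framework's core building block (not the authors' code), the sketch below evaluates the log-likelihood of RR intervals under an inverse-Gaussian model whose mean is a linear, AR-like function of past intervals, i.e. a first-order truncation of the Wiener-Volterra expansion; the coefficient and shape-parameter values are illustrative.

```python
# Inverse-Gaussian inter-beat likelihood with a history-dependent mean.
import numpy as np

def ig_loglik(rr, theta, scale):
    """theta[0] is an intercept; theta[1:] weight the most recent intervals."""
    p = len(theta) - 1
    ll = 0.0
    for k in range(p, len(rr)):
        mu = theta[0] + np.dot(theta[1:], rr[k - p:k][::-1])  # history term
        w = rr[k]
        ll += 0.5 * np.log(scale / (2 * np.pi * w ** 3)) \
              - scale * (w - mu) ** 2 / (2 * mu ** 2 * w)
    return ll

rr = 0.8 + 0.05 * np.random.randn(200)       # synthetic RR intervals (s)
print(ig_loglik(rr, theta=np.array([0.4, 0.5]), scale=1500.0))
```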

  20. Stairs or escalator? Using theories of persuasion and motivation to facilitate healthy decision making.

    PubMed

    Suri, Gaurav; Sheppes, Gal; Leslie, Sara; Gross, James J

    2014-12-01

    To encourage an increase in daily activity, researchers have tried a variety of health-related communications, but with mixed results. In the present research, using the stair/escalator choice context, we examined predictions derived from the Heuristic Systematic Model (HSM), Self-Determination Theory (SDT), and related theories. Specifically, we tested whether (as predicted by HSM) signs that encourage heuristic processing ("Take the Stairs") would have the greatest impact when placed at the stair/escalator point of choice (when processing time is limited), whereas signs that encourage systematic processing ("Will You Take the Stairs?") would have the greatest impact when placed at some distance from the point of choice (when processing time is less limited). We also tested whether (as predicted by SDT) messages promoting autonomy would be more likely to result in sustained motivated behavior (i.e., stair taking at subsequent uncued choice points) than messages that use commands. A series of studies involving more than 9,000 pedestrians provided support for these predictions. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  1. Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data

    PubMed Central

    Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc

    2017-01-01

    Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189
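
    The final classification stage can be sketched with scikit-learn, assuming precomputed per-object feature vectors; the synthetic features and labels below stand in for the 123 surveyed poles and their eigenvalue and shape-index predictors.

```python
# Hedged sketch of the LDA and SVM comparison on synthetic pole features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(123, 5))                # 123 poles x 5 features (synthetic)
y = rng.integers(0, 3, size=123)             # 3 pole categories (synthetic)

for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"cross-validated accuracy: {acc:.2f}")
```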

  2. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    NASA Astrophysics Data System (ADS)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensional and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity-grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representations of the dimensions. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k-nearest-neighbour algorithm for a hemispherical and a triangular wave point cloud.
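
    A minimal sketch of the 2-D Morton code computation described above; the bit-interleaving convention (y bits in the odd positions) is the one that reproduces the example in the abstract.

```python
def morton2d(ix, iy, bits):
    """Interleave `bits` bits of the cell indices ix and iy."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)        # x bits -> even positions
        code |= ((iy >> b) & 1) << (2 * b + 1)    # y bits -> odd positions
    return code

# The abstract's example: (x, y) = (1, 3) = (01, 11) in binary -> 1011 = 11.
assert morton2d(1, 3, bits=2) == 0b1011 == 11
```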

  3. Collision Visualization of a Laser-Scanned Point Cloud of Streets and a Festival Float Model Used for the Revival of a Traditional Procession Route

    NASA Astrophysics Data System (ADS)

    Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.

    2017-09-01

    Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. It has thus become possible to simulate a traditional cultural event in a virtual space constructed from measured point clouds. In this paper, we take as an example the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float would collide with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that helps to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as actual collision areas, and (3) the ability to highlight the visualized target areas by increasing the point densities there.

  4. An ArcGIS approach to include tectonic structures in point data regionalization.

    PubMed

    Darsow, Andreas; Schafmeister, Maria-Theresia; Hofmann, Thilo

    2009-01-01

    Point data derived from drilling logs must often be regionalized. However, aquifers may show discontinuous surface structures, such as the offset of an aquitard caused by tectonic faults. One main challenge has been to incorporate these structures into the regionalization process of point data. We combined ordinary kriging and inverse distance weighted (IDW) interpolation to account for neotectonic structures in the regionalization process. The study area chosen to test this approach is the largest porous aquifer in Austria. It consists of three basins formed by neotectonic events and delimited by steep faults with a vertical offset of the aquitard up to 70 m within very short distances. First, ordinary kriging was used to incorporate the characteristic spatial variability of the aquitard location by means of a variogram. The tectonic faults could be included into the regionalization process by using breaklines with buffer zones. All data points inside the buffer were deleted. Last, IDW was performed, resulting in an aquitard map representing the discontinuous surface structures. This approach enables one to account for such surfaces using the standard software package ArcGIS; therefore, it could be adopted in many practical applications.
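
    The combined workflow can be sketched under stated assumptions (a synthetic aquitard surface and an illustrative vertical fault at x = 500 m with a 50 m buffer): points inside the buffer around the breakline are dropped, and the remainder are interpolated with IDW; the variogram and kriging step is elided here.

```python
# Hedged sketch: breakline buffer exclusion followed by IDW interpolation.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power          # avoid division by zero
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

pts = np.random.rand(300, 2) * 1000.0                # drilling-log locations (m)
z = 50.0 + 0.02 * pts[:, 0]                          # synthetic aquitard depth

keep = np.abs(pts[:, 0] - 500.0) > 50.0              # drop points in the buffer
gx, gy = np.meshgrid(np.linspace(0, 1000, 20), np.linspace(0, 1000, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
zi = idw(pts[keep], z[keep], grid)                   # regionalized surface
```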

  5. A Unified Point Process Probabilistic Framework to Assess Heartbeat Dynamics and Autonomic Cardiovascular Control

    PubMed Central

    Chen, Zhe; Purdon, Patrick L.; Brown, Emery N.; Barbieri, Riccardo

    2012-01-01

    In recent years, time-varying inhomogeneous point process models have been introduced for assessment of instantaneous heartbeat dynamics as well as specific cardiovascular control mechanisms and hemodynamics. Assessment of the model’s statistics is established through the Wiener-Volterra theory and a multivariate autoregressive (AR) structure. A variety of instantaneous cardiovascular metrics, such as heart rate (HR), heart rate variability (HRV), respiratory sinus arrhythmia (RSA), and baroreceptor-cardiac reflex (baroreflex) sensitivity (BRS), are derived within a parametric framework and instantaneously updated with adaptive and local maximum likelihood estimation algorithms. Inclusion of second-order non-linearities, with subsequent bispectral quantification in the frequency domain, further allows for definition of instantaneous metrics of non-linearity. We here present a comprehensive review of the devised methods as applied to experimental recordings from healthy subjects during propofol anesthesia. Collective results reveal interesting dynamic trends across the different pharmacological interventions operated within each anesthesia session, confirming the ability of the algorithm to track important changes in cardiorespiratory elicited interactions, and pointing at our mathematical approach as a promising monitoring tool for an accurate, non-invasive assessment in clinical practice. We also discuss the limitations and other alternative modeling strategies of our point process approach. PMID:22375120

  6. Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans

    NASA Astrophysics Data System (ADS)

    Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj

    2016-06-01

    This work deals with the development of algorithms for the physical replication of patient-specific human bones and the construction of corresponding implant/insert RP models, using a reverse engineering approach on non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e., voxel- and triangular-facet-based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other side, recent advances in Computer Aided Design (CAD) technology provide additional functions for the design, prototyping and manufacturing of objects with freeform surfaces based on boundary representation techniques. This work presents a process for the physical replication of 3D rapid prototyping (RP) models of human bone using several CAD modelling techniques applied to 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data are used to construct a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept-blend techniques. Alternatively, a triangular mesh can be generated directly from the 3D point cloud data, without developing any surface model, in commercial CAD software. The STL file generated from the 3D point cloud data is the basic input for the RP process; the Delaunay tetrahedralization approach is used to process the 3D point cloud data into the STL file. CT scan data of a metacarpus (human bone) are used as the case study for the generation of the 3D RP model. A 3D physical model of the bone was produced on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability, and the results are assessed for clinical reliability in the replication of human bone in the medical field.
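
    One route described above, taking a triangular mesh directly from volumetric data, can be sketched with marching cubes; this stands in for the Delaunay tetrahedralization actually used, and the random volume stands in for a CT series read with, e.g., pydicom.

```python
# Hedged sketch: isosurface extraction from a (synthetic) CT volume.
import numpy as np
from skimage import measure

volume = np.random.rand(64, 64, 64)                  # stand-in for CT data
verts, faces, normals, values = measure.marching_cubes(volume, level=0.7)
print(len(verts), "vertices and", len(faces), "facets ready for STL export")
```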

  7. Analysis of residual stress state in sheet metal parts processed by single point incremental forming

    NASA Astrophysics Data System (ADS)

    Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.

    2018-05-01

    The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper, the significance of kinematic hardening effects for the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of 18% between the residual stress amplitudes in the clamped and unclamped conditions reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.

  8. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that changes in a book of prices occur at the jump points of a Poisson process with a random intensity, i.e., the moments of change follow a random process of the Cox type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
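
    A minimal simulation sketch of the doubly stochastic (Cox) model described above, with an illustrative Gamma-distributed intensity; the law of total variance makes the variance inflation relative to a plain Poisson process visible.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_paths = 100.0, 5000
lam = rng.gamma(shape=2.0, scale=1.5, size=n_paths)  # random intensity draws
counts = rng.poisson(lam * T)                        # N(T) | lam is Poisson

# law of total variance: Var N(T) = E[lam] * T + Var[lam] * T^2
print(counts.mean(), "vs", lam.mean() * T)
print(counts.var(), "vs", lam.mean() * T + lam.var() * T ** 2)
```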

  9. 7 CFR 4284.912 - Evaluation process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of all applications in rank order, together with funding level recommendations. (c) The Administrator... the total points of the original score. (d) After giving effect to the Administrator's point awards... considered for funding in subsequent competitions in the same fiscal year. ...

  10. 7 CFR 4284.912 - Evaluation process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of all applications in rank order, together with funding level recommendations. (c) The Administrator... the total points of the original score. (d) After giving effect to the Administrator's point awards... considered for funding in subsequent competitions in the same fiscal year. ...

  11. Estimation of Traffic Variables Using Point Processing Techniques

    DOT National Transportation Integrated Search

    1978-05-01

    An alternative approach to estimating aggregate traffic variables on freeways--spatial mean velocity and density--is presented. Vehicle arrival times at a given location on a roadway, typically a presence detector, are regarded as a point or counting...

  12. Pulse Detonation Physiochemical and Exhaust Relaxation Processes

    DTIC Science & Technology

    2006-05-01

    based on total time to detonation and detonation percentage. Nomenclature: A = Arrhenius constant; Ea = activation energy; Ecrit = critical... The precision uncertainties vary for each data point. Therefore, the total experimental uncertainty will vary by data point. A comprehensive bias

  13. Attitude Determination and Control Systems

    NASA Technical Reports Server (NTRS)

    Starin, Scott R.; Eterno, John

    2010-01-01

    The importance to our daily lives of accurately pointed spacecraft is pervasive, yet somehow escapes the notice of most people. In this section, we will summarize the processes and technologies used in designing and operating spacecraft pointing (i.e., attitude) systems.

  14. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heartbeat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and the true partition, respectively. Extensions to general history-dependent point processes are discussed.
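
    The generalized likelihood ratio idea can be sketched for the simplest template family, a piecewise-constant rate on fixed bins (an assumed simplification of the paper's templates): the MLE rate in each bin is count/length, and the statistic compares that fit against a homogeneous fit.

```python
import numpy as np

def glr_piecewise_vs_flat(event_times, T, edges):
    """2 * (piecewise-constant MLE loglik - homogeneous MLE loglik)."""
    n = len(event_times)
    flat = n * np.log(n / T) - n                     # homogeneous Poisson fit
    counts, _ = np.histogram(event_times, bins=edges)
    lengths = np.diff(edges)
    term = np.where(counts > 0,
                    counts * np.log(np.maximum(counts, 1) / lengths), 0.0)
    return 2.0 * ((term.sum() - n) - flat)

rng = np.random.default_rng(2)
events = rng.uniform(0, 10.0, size=200)              # homogeneous (null) data
print(glr_piecewise_vs_flat(events, T=10.0, edges=np.linspace(0, 10, 6)))
```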

  15. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring

    NASA Astrophysics Data System (ADS)

    Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank

    2018-04-01

    Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.
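
    The ICProx algorithm itself is not reproduced here; as a hedged illustration of the registration family it belongs to, the sketch below iterates classical point-to-point ICP, matching nearest neighbours with a k-d tree and solving the rigid transform by SVD (Kabsch).

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: returns the rigidly transformed source cloud."""
    matches = target[cKDTree(target).query(source)[1]]   # nearest neighbours
    mu_s, mu_t = source.mean(0), matches.mean(0)
    H = (source - mu_s).T @ (matches - mu_t)
    U, _, Vt = np.linalg.svd(H)
    if np.linalg.det(Vt.T @ U.T) < 0:                    # avoid reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    return source @ R.T + (mu_t - R @ mu_s)

theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = np.random.rand(300, 3)
moved, tgt = src, src @ R_true.T + 0.05                  # rotated, shifted copy
for _ in range(20):
    moved = icp_step(moved, tgt)
print(np.abs(moved - tgt).max())                         # shrinks toward zero
```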

  16. A color gamut description algorithm for liquid crystal displays in CIELAB space.

    PubMed

    Sun, Bangyong; Liu, Han; Li, Wenli; Zhou, Shisheng

    2014-01-01

    Because the accuracy of the gamut boundary description is significant for the gamut mapping process, a gamut boundary calculation method for LCD monitors is proposed in this paper. In most previous gamut boundary calculation algorithms, the gamut boundary is calculated directly in CIELAB space, and some inside-gamut points are mistaken for boundary points. In the proposed algorithm, by contrast, the points on the surface of the RGB cube are selected as the boundary points and then converted and described in CIELAB color space. Thus, in our algorithm, the true gamut boundary points are found and a more accurate gamut boundary is described. In the experiments, a Toshiba LCD monitor's 3D CIELAB gamut, which has a regular-shaped outer surface, is first described, and then two 2D gamut boundaries (the CIE-a*b* boundary and the CIE-C*L* boundary) that are often used in the gamut mapping process are calculated. When our algorithm is compared with several well-known gamut calculation algorithms, the gamut volumes are very close, which indicates that our algorithm's accuracy is precise and acceptable.
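
    The key idea, sampling only the surface of the RGB cube and mapping it to CIELAB, can be sketched as follows; skimage's sRGB-to-Lab conversion stands in for the monitor's own characterization model.

```python
import numpy as np
from skimage.color import rgb2lab

n = 32
axis = np.linspace(0.0, 1.0, n)
g, b = np.meshgrid(axis, axis)
faces = []
for fixed in (0.0, 1.0):                         # the six faces of the cube
    const = np.full_like(g, fixed)
    faces += [np.dstack([const, g, b]),          # R fixed
              np.dstack([g, const, b]),          # G fixed
              np.dstack([g, b, const])]          # B fixed
surface_rgb = np.concatenate([f.reshape(-1, 3) for f in faces])
boundary_lab = rgb2lab(surface_rgb.reshape(-1, 1, 3)).reshape(-1, 3)
print(boundary_lab.shape)                        # candidate boundary points
```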

  17. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points - A Review.

    PubMed

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a relatively new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are then reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie-point matching, and measures of interferogram quality.

  18. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    PubMed Central

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a relatively new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are then reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie-point matching, and measures of interferogram quality. PMID:22399966

  19. A Color Gamut Description Algorithm for Liquid Crystal Displays in CIELAB Space

    PubMed Central

    Sun, Bangyong; Liu, Han; Li, Wenli; Zhou, Shisheng

    2014-01-01

    Because the accuracy of the gamut boundary description is significant for the gamut mapping process, a gamut boundary calculation method for LCD monitors is proposed in this paper. In most previous gamut boundary calculation algorithms, the gamut boundary is calculated directly in CIELAB space, and some inside-gamut points are mistaken for boundary points. In the proposed algorithm, by contrast, the points on the surface of the RGB cube are selected as the boundary points and then converted and described in CIELAB color space. Thus, in our algorithm, the true gamut boundary points are found and a more accurate gamut boundary is described. In the experiments, a Toshiba LCD monitor's 3D CIELAB gamut, which has a regular-shaped outer surface, is first described, and then two 2D gamut boundaries (the CIE-a*b* boundary and the CIE-C*L* boundary) that are often used in the gamut mapping process are calculated. When our algorithm is compared with several well-known gamut calculation algorithms, the gamut volumes are very close, which indicates that our algorithm's accuracy is precise and acceptable. PMID:24892068

  20. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  1. The possibilities of improvement in the sensitivity of cancer fluorescence diagnostics by computer image processing

    NASA Astrophysics Data System (ADS)

    Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander

    2008-02-01

    Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and tissue progressing to cancer makes it possible to see early cancers and precancerous lesions often missed by white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions, and during endoscopy, cystoscopy and bronchoscopy, using the Xillix ONCOLIFE. Methods: The image function f(x,y): R^2 -> R^3 was transformed from the original RGB color space to a space in which a vector of 46 values refers to every point with given xy-coordinates, f(x,y): R^2 -> R^46. By means of the Fisher discriminant, the vector of attributes of each analyzed image point was reduced according to two defined classes: pathologic areas (foreground) and healthy areas (background). The four highest Fisher coefficients, giving the greatest separation between points of pathologic (foreground) and healthy (background) areas, were chosen. In this way a new function f(x,y): R^2 -> R^4 was created, in which the point x,y corresponds to the vector (Y, H, a*, c II). In the second step, a classifier was constructed using Gaussian mixtures and expectation-maximisation. This classifier estimates the probability that a selected pixel of the analyzed image is a pathologically changed point (foreground) or a healthy one (background). The obtained map of the probability distribution was presented by means of pseudocolors. Results: Image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
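
    The feature-selection step can be sketched as a per-feature Fisher discriminant ratio between foreground and background pixels; the 46 synthetic features below stand in for the paper's colour-space attributes.

```python
import numpy as np

def fisher_scores(X_fg, X_bg):
    """Per-feature Fisher ratio: squared mean gap over pooled variance."""
    num = (X_fg.mean(axis=0) - X_bg.mean(axis=0)) ** 2
    den = X_fg.var(axis=0) + X_bg.var(axis=0)
    return num / den

rng = np.random.default_rng(5)
fg = rng.normal(1.0, 1.0, size=(500, 46))        # pathologic pixels, 46 features
bg = rng.normal(0.0, 1.0, size=(800, 46))        # healthy pixels
best4 = np.argsort(fisher_scores(fg, bg))[::-1][:4]
print("indices of the 4 most separating features:", best4)
```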

  2. Direct Georeferencing on Small Unmanned Aerial Platforms for Improved Reliability and Accuracy of Mapping Without the Need for Ground Control Points

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2015-08-01

    This paper presents results from a Direct Mapping Solution (DMS) comprised of an Applanix APX-15 UAV GNSS-Inertial system integrated with a Sony a7R camera to produce highly accurate ortho-rectified imagery without Ground Control Points on a Microdrones md4-1000 platform. A 55 millimeter Nikkor f/1.8 lens was mounted on the Sony a7R and the camera was then focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 UAV GNSS-Inertial system using a custom mount specifically designed for UAV applications. In July 2015, Applanix and Avyon carried out a test flight of this system. The goal of the test flight was to assess the performance of DMS APX-15 UAV direct georeferencing system on the md4-1000. The area mapped during the test was a 250 x 300 meter block in a rural setting in Ontario, Canada. Several ground control points are distributed within the test area. The test included 8 North-South lines and 1 cross strip flown at 80 meters AGL, resulting in a ~1 centimeter Ground Sample Distance (GSD). Map products were generated from the test flight using Direct Georeferencing, and then compared for accuracy against the known positions of ground control points in the test area. The GNSS-Inertial data collected by the APX-15 UAV was post-processed in Single Base mode, using a base station located in the project area via POSPac UAV. The base-station's position was precisely determined by processing a 12-hour session using the CSRS-PPP Post Processing service. The ground control points were surveyed in using differential GNSS post-processing techniques with respect to the base-station.

  3. Curvature-correction-based time-domain CMOS smart temperature sensor with an inaccuracy of -0.8 °C to 1.2 °C after one-point calibration from -40 °C to 120 °C

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi

    2014-06-01

    This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature is large for the thermal transfer curve, which substantially affects the accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, resulting in difficulty for the sensors to achieve an acceptable accuracy for one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillatory width, thereby resolving the drawback caused by a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the effect cancellation of process variations enabled the sensor, which featured a fixed resolution of 0.049 °C/LSB, to achieve an optimal inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.

  4. Electrochemical formation of field emitters

    DOEpatents

    Bernhardt, Anthony F.

    1999-01-01

    Electrochemical formation of field emitters, particularly useful in the fabrication of flat panel displays. The fabrication involves field emitting points in a gated field emitter structure. Metal field emitters are formed by electroplating and the shape of the formed emitter is controlled by the potential imposed on the gate as well as on a separate counter electrode. This allows sharp emitters to be formed in a more inexpensive and manufacturable process than vacuum deposition processes used at present. The fabrication process involves etching of the gate metal and the dielectric layer down to the resistor layer, and then electroplating the etched area and forming an electroplated emitter point in the etched area.

  5. Design and fabrication of a diffractive beam splitter for dual-wavelength and concurrent irradiation of process points.

    PubMed

    Amako, Jun; Shinozaki, Yu

    2016-07-11

    We report on a dual-wavelength diffractive beam splitter designed for use in parallel laser processing. This novel optical element generates two beam arrays of different wavelengths and allows their overlap at the process points on a workpiece. To design the deep surface-relief profile of a splitter using a simulated annealing algorithm, we introduce a heuristic but practical scheme to determine the maximum depth and the number of quantization levels. The designed corrugations were fabricated in a photoresist by maskless grayscale exposure using a high-resolution spatial light modulator. We characterized the photoresist splitter, thereby validating the proposed beam-splitting concept.
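
    A generic, hedged sketch of such an optimization loop (not the authors' design code): simulated annealing over a quantized relief (phase) profile of a 1-D grating, scored by how well the far field, i.e. the FFT of the complex transmittance, matches a target set of diffraction orders; sizes, level count and cooling schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, levels = 64, 8                                 # pixels, quantization levels
target = np.zeros(N)
target[[1, 3, N - 3, N - 1]] = 0.25               # desired orders, unit total

def cost(profile):
    field = np.fft.fft(np.exp(2j * np.pi * profile / levels)) / N
    return np.sum((np.abs(field) ** 2 - target) ** 2)

prof = rng.integers(0, levels, size=N)
cur, temp = cost(prof), 1.0
for _ in range(20000):
    cand = prof.copy()
    cand[rng.integers(N)] = rng.integers(levels)  # perturb one pixel's depth
    c = cost(cand)
    if c < cur or rng.random() < np.exp((cur - c) / temp):
        prof, cur = cand, c                       # accept (sometimes uphill)
    temp *= 0.9995                                # geometric cooling
print("final cost:", cur)
```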

  6. Use of strategic environmental assessment in the site selection process for a radioactive waste disposal facility in Slovenia.

    PubMed

    Dermol, Urška; Kontić, Branko

    2011-01-01

    The benefits of strategic environmental considerations in the process of siting a repository for low- and intermediate-level radioactive waste (LILW) are presented. The benefits have been explored by analyzing differences between the two site selection processes. One is a so-called official site selection process, which is implemented by the Agency for radwaste management (ARAO); the other is an optimization process suggested by experts working in the area of environmental impact assessment (EIA) and land-use (spatial) planning. The criteria on which the comparison of the results of the two site selection processes has been based are spatial organization, environmental impact, safety in terms of potential exposure of the population to radioactivity released from the repository, and feasibility of the repository from the technical, financial/economic and social point of view (the latter relates to consent by the local community for siting the repository). The site selection processes have been compared with the support of the decision expert system named DEX. The results of the comparison indicate that the sites selected by ARAO meet fewer suitability criteria than those identified by applying strategic environmental considerations in the framework of the optimization process. This result stands when taking into account spatial, environmental, safety and technical feasibility points of view. Acceptability of a site by a local community could not have been tested, since the formal site selection process has not yet been concluded; this remains as an uncertain and open point of the comparison. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. 40 CFR 408.260 - Applicability; description of the Atlantic and Gulf Coast hand-shucked oyster processing...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Atlantic and Gulf Coast hand-shucked oyster processing subcategory. 408.260 Section 408.260 Protection of... SEAFOOD PROCESSING POINT SOURCE CATEGORY Atlantic and Gulf Coast Hand-Shucked Oyster Processing Subcategory § 408.260 Applicability; description of the Atlantic and Gulf Coast hand-shucked oyster processing...

  8. 40 CFR 408.120 - Applicability; description of the Southern non-breaded shrimp processing in the contiguous States...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Southern non-breaded shrimp processing in the contiguous States subcategory. 408.120 Section 408.120... CANNED AND PRESERVED SEAFOOD PROCESSING POINT SOURCE CATEGORY Southern Non-Breaded Shrimp Processing in... shrimp processing in the contiguous States subcategory. The provisions of this subpart are applicable to...

  9. 40 CFR 408.120 - Applicability; description of the Southern non-breaded shrimp processing in the contiguous States...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Southern non-breaded shrimp processing in the contiguous States subcategory. 408.120 Section 408.120... CANNED AND PRESERVED SEAFOOD PROCESSING POINT SOURCE CATEGORY Southern Non-Breaded Shrimp Processing in... shrimp processing in the contiguous States subcategory. The provisions of this subpart are applicable to...

  10. 40 CFR 408.40 - Applicability; description of the non-remote Alaskan crab meat processing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-remote Alaskan crab meat processing subcategory. 408.40 Section 408.40 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Remote Alaskan Crab Meat Processing Subcategory § 408.40 Applicability; description of the non-remote Alaskan crab meat processing subcategory. The provisions of this subpart are...

  11. 40 CFR 408.40 - Applicability; description of the non-remote Alaskan crab meat processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-remote Alaskan crab meat processing subcategory. 408.40 Section 408.40 Protection of Environment... PROCESSING POINT SOURCE CATEGORY Non-Remote Alaskan Crab Meat Processing Subcategory § 408.40 Applicability; description of the non-remote Alaskan crab meat processing subcategory. The provisions of this subpart are...

  12. 24 CFR 1003.301 - Selection process.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Selection process. 1003.301 Section... Application and Selection Process § 1003.301 Selection process. (a) Threshold requirement. An applicant that... establish weights for the selection criteria, will specify the maximum points available, and will describe...

  13. 40 CFR 409.40 - Applicability; description of the Louisiana raw cane sugar processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Louisiana raw cane sugar processing subcategory. 409.40 Section 409.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Louisiana Raw Cane Sugar Processing Subcategory § 409.40 Applicability; description of the...

  14. 40 CFR 409.40 - Applicability; description of the Louisiana raw cane sugar processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Louisiana raw cane sugar processing subcategory. 409.40 Section 409.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Louisiana Raw Cane Sugar Processing Subcategory § 409.40 Applicability; description of the...

  15. 40 CFR 409.70 - Applicability; description of the Hawaiian raw cane sugar processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Hawaiian raw cane sugar processing subcategory. 409.70 Section 409.70 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Hawaiian Raw Cane Sugar Processing Subcategory § 409.70 Applicability; description of the Hawaiian...

  16. 40 CFR 409.40 - Applicability; description of the Louisiana raw cane sugar processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Louisiana raw cane sugar processing subcategory. 409.40 Section 409.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Louisiana Raw Cane Sugar Processing Subcategory § 409.40 Applicability; description of the...

  17. 40 CFR 409.70 - Applicability; description of the Hawaiian raw cane sugar processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Hawaiian raw cane sugar processing subcategory. 409.70 Section 409.70 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Hawaiian Raw Cane Sugar Processing Subcategory § 409.70 Applicability; description of the Hawaiian...

  18. 40 CFR 409.70 - Applicability; description of the Hawaiian raw cane sugar processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Hawaiian raw cane sugar processing subcategory. 409.70 Section 409.70 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Hawaiian Raw Cane Sugar Processing Subcategory § 409.70 Applicability; description of the Hawaiian...

  19. 40 CFR 410.30 - Applicability; description of the low water use processing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... water use processing subcategory. 410.30 Section 410.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TEXTILE MILLS POINT SOURCE CATEGORY Low Water Use Processing Subcategory § 410.30 Applicability; description of the low water use processing...

  20. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some terrestrial laser scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral thresholds are identified as points of that specific feature class, as sketched below. This terrain extraction process was implemented in Matlab. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor leads to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
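
    The spectral filtering step can be sketched as a threshold box in RGB; the bounds below are illustrative, not the study's calibrated values.

```python
import numpy as np

def spectral_filter(xyz, rgb, lo=(90, 60, 40), hi=(200, 160, 120)):
    """Return the points whose colour lies inside the [lo, hi] RGB box."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=1)
    return xyz[mask], mask

pts = np.random.rand(10000, 3) * 50.0                 # synthetic TLS coordinates
cols = np.random.randint(0, 256, size=(10000, 3))     # synthetic point colours
terrain, mask = spectral_filter(pts, cols)
print(f"{mask.mean():.1%} of points classified as terrain")
```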

  1. 75 FR 65373 - Drakes Bay Oyster Company Special Use Permit/Environmental Impact Statement, Point Reyes National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ..., Point Reyes National Seashore, California. Pursuant to section 124 of Public Law 111-88, the Secretary... continuing the commercial operation within the national seashore. The results of the NEPA process will be...

  2. CBEFF Common Biometric Exchange File Format

    DTIC Science & Technology

    2001-01-03

    and systems. Points of contact for CBEFF and liaisons to other organizations can be found in Appendix F. 2. Purpose: The purpose of CBEFF is... Biometric type codes include 0x40 Signature Dynamics, 0x80 Keystroke Dynamics, 0x100 Lip Movement, 0x200 Thermal Face Image, 0x400 Thermal Hand Image, 0x800 Gait, 0x1000 Body... This process is negligible from the Biometric Objects point of view, unless the process creating the livescan sample to compare against the

  3. Method for reducing energy losses in laser crystals

    DOEpatents

    Atherton, L.J.; DeYoreo, J.J.; Roberts, D.H.

    1992-03-24

    A process for reducing energy losses in crystals is disclosed which comprises: a. heating a crystal to a temperature sufficiently high as to cause dissolution of microscopic inclusions into the crystal, thereby converting said inclusions into point-defects, and b. maintaining said crystal at a given temperature for a period of time sufficient to cause said point-defects to diffuse out of said crystal. Also disclosed are crystals treated by the process, and lasers utilizing the crystals as a source of light. 12 figs.

  4. Method for reducing energy losses in laser crystals

    DOEpatents

    Atherton, L. Jeffrey; DeYoreo, James J.; Roberts, David H.

    1992-01-01

    A process for reducing energy losses in crystals is disclosed which comprises: a. heating a crystal to a temperature sufficiently high as to cause dissolution of microscopic inclusions into the crystal, thereby converting said inclusions into point-defects, and b. maintaining said crystal at a given temperature for a period of time sufficient to cause said point-defects to diffuse out of said crystal. Also disclosed are crystals treated by the process, and lasers utilizing the crystals as a source of light.

  5. KENNEDY SPACE CENTER, FLA. - In the Space Station Processing Facility, Executive Director of NASDA Koji Yamamoto points to other Space Station elements. Behind him is the Japanese Experiment Module (JEM)/pressurized module. Mr. Yamamoto is at KSC for a welcome ceremony involving the arrival of JEM.

    NASA Image and Video Library

    2003-06-12

    KENNEDY SPACE CENTER, FLA. - In the Space Station Processing Facility, Executive Director of NASDA Koji Yamamoto points to other Space Station elements. Behind him is the Japanese Experiment Module (JEM)/pressurized module. Mr. Yamamoto is at KSC for a welcome ceremony involving the arrival of JEM.

  6. The signer and the sign: cortical correlates of person identity and language processing from point-light displays.

    PubMed

    Campbell, Ruth; Capek, Cheryl M; Gazarian, Karine; MacSweeney, Mairéad; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J

    2011-09-01

    In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, although such body parts were invisible in the display. Right frontal regions were also recruited - a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of identify-signer and identify-sign conditions showed these tasks relied to a different extent on the posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer>sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001) it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer, & Giraud, 2005). Inferior temporal regions associated with the visual recognition of a person appear to be required during SL processing, for both carrier and content information. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  7. Trade-off analysis of modes of data handling for earth resources (ERS), volume 1

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Data handling requirements are reviewed for earth observation missions along with likely technology advances. Parametric techniques for synthesizing potential systems are developed. Major tasks include: (1) review of the sensors under development and extensions of or improvements in these sensors; (2) development of mission models for missions spanning land, ocean, and atmosphere observations; (3) summary of data handling requirements including the frequency of coverage, timeliness of dissemination, and geographic relationships between points of collection and points of dissemination; (4) review of data routing to establish ways of getting data from the collection point to the user; (5) on-board data processing; (6) communications link; and (7) ground data processing. A detailed synthesis of three specific missions is included.

  8. The implementation of a Hazard Analysis and Critical Control Point management system in a peanut butter ice cream plant.

    PubMed

    Hung, Yu-Ting; Liu, Chi-Te; Peng, I-Chen; Hsu, Chin; Yu, Roch-Chui; Cheng, Kuan-Chen

    2015-09-01

    To ensure the safety of the peanut butter ice cream manufacture, a Hazard Analysis and Critical Control Point (HACCP) plan has been designed and applied to the production process. Potential biological, chemical, and physical hazards in each manufacturing procedure were identified. Critical control points for the peanut butter ice cream were then determined as the pasteurization and freezing process. The establishment of a monitoring system, corrective actions, verification procedures, and documentation and record keeping were followed to complete the HACCP program. The results of this study indicate that implementing the HACCP system in food industries can effectively enhance food safety and quality while improving the production management. Copyright © 2015. Published by Elsevier B.V.

  9. Tracking prominent points in image sequences

    NASA Astrophysics Data System (ADS)

    Hahn, Michael

    1994-03-01

    Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.

  10. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
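
    The polyphase replacement for window-then-FFT processing can be sketched in a few lines: the prototype window is split into M-sample phases whose partial sums feed the FFT; the sizes, window choice and tap count here are illustrative, not the 32-megachannel design.

```python
import numpy as np

def polyphase_fft(x, M, taps_per_phase=4):
    """Channelize x into M bins with a weighted-overlap-add polyphase FFT."""
    L = M * taps_per_phase
    h = np.hanning(L)                        # prototype low-pass window
    n_frames = (len(x) - L) // M + 1
    out = np.empty((n_frames, M), dtype=complex)
    for f in range(n_frames):
        seg = x[f * M:f * M + L] * h         # weight L samples
        out[f] = np.fft.fft(seg.reshape(taps_per_phase, M).sum(axis=0))
    return out

x = np.cos(2 * np.pi * 0.1 * np.arange(4096)) + 0.01 * np.random.randn(4096)
spec = polyphase_fft(x, M=32)
print("strongest channel:", np.abs(spec).mean(axis=0).argmax())  # ~0.1 * 32
```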

  11. Recognition method of construction conflict based on driver's eye movement.

    PubMed

    Xu, Yi; Li, Shiwu; Gao, Song; Tan, Derong; Guo, Dong; Wang, Yuqiong

    2018-04-01

    Drivers' eye movement data in simulated construction conflicts at different speeds were collected and analyzed to find the relationship between the drivers' eye movements and the construction conflict. On the basis of this relationship, the peak point of the wavelet-processed pupil diameter, the first point on the left side of the peak point, and the first blink point after the peak point are selected as key points for locating construction conflict periods. On the basis of these key points and the GSA, a construction conflict recognition method, called the CCFRM, is proposed, and its recognition speed and location accuracy are verified. The good performance of the CCFRM confirms the feasibility of the proposed key points for construction conflict recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
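
    A rough sketch of how the three key points might be located on a pupil-diameter trace (our reading of the abstract, not the authors' code; it assumes PyWavelets and SciPy, and takes the "first point on the left side of the peak" to be the nearest preceding trough):

        import numpy as np
        import pywt
        from scipy.signal import find_peaks

        def conflict_key_points(pupil_diam, blink_mask, wavelet="db4", level=4):
            # wavelet-process the pupil diameter: keep only the coarse approximation
            coeffs = pywt.wavedec(pupil_diam, wavelet, level=level)
            coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
            smooth = pywt.waverec(coeffs, wavelet)[:len(pupil_diam)]

            peaks, _ = find_peaks(smooth)
            peak = peaks[np.argmax(smooth[peaks])]          # main pupil response
            troughs, _ = find_peaks(-smooth)
            left = troughs[troughs < peak][-1] if np.any(troughs < peak) else 0
            blinks = np.flatnonzero(blink_mask[peak:])      # blinks after the peak
            first_blink = peak + blinks[0] if blinks.size else None
            return left, peak, first_blink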

  12. Effect of the secondary process on mass point vibration velocity propagation in magneto-acoustic tomography and magneto-acousto-electrical tomography.

    PubMed

    Sun, Zhishen; Liu, Guoqiang; Guo, Liang; Xia, Hui; Wang, Xinli

    2016-04-29

    As two new forms of biological electrical impedance tomography (EIT), magneto-acoustic tomography (MAT) and magneto-acousto-electrical tomography (MAET) combine EIT with sonography to achieve both the high contrast of EIT and the high spatial resolution of sonography. Because both MAT and MAET involve a uniform magnetic field, vibration, and electrical current density, each contains a secondary process: MAET within MAT, and MAT within MAET. This work analyzes the effect of the secondary process on mass point vibration velocity (MPVV) propagation in MAT and MAET. By analyzing the total force on the sample, the wave equations of the MPVV in MAT and MAET, with the secondary processes taken into account, were derived. The expression for the attenuation constant in the wave number was derived for the cases in which the mass point vibration velocity propagates as a cylindrical wave and as a plane wave. Attenuation of the MPVV in several samples was quantified: after propagating for 1 mm in copper or aluminum foil, or for 5 cm in gel phantom or biological soft tissue, the attenuation was less than 1%. The attenuation of the MPVV in MAT and MAET due to the secondary processes is thus relatively minor, and the effects of the secondary processes on MPVV propagation in MAT and MAET can be ignored.
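
    The abstract does not reproduce the derived attenuation constant; as a generic reminder of the form such results take (our illustration, not the paper's expression), a complex wave number k = β - iα gives a plane wave v(x,t) = v₀ e^{-αx} e^{i(βx - ωt)}, whose amplitude ratio after a distance d is e^{-αd} (for a cylindrical wave, the amplitude falls as e^{-αr}/√r). An attenuation below 1% over a distance d thus corresponds to αd ≤ -ln(0.99) ≈ 0.01.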

  13. 40 CFR 409.80 - Applicability; description of the Puerto Rican raw cane sugar processing subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Puerto Rican raw cane sugar processing subcategory. 409.80 Section 409.80 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Puerto Rican Raw Cane Sugar Processing Subcategory § 409.80 Applicability; description of the...

  14. 40 CFR 409.80 - Applicability; description of the Puerto Rican raw cane sugar processing subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Puerto Rican raw cane sugar processing subcategory. 409.80 Section 409.80 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Puerto Rican Raw Cane Sugar Processing Subcategory § 409.80 Applicability; description of the...

  15. 40 CFR 409.80 - Applicability; description of the Puerto Rican raw cane sugar processing subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Puerto Rican raw cane sugar processing subcategory. 409.80 Section 409.80 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Puerto Rican Raw Cane Sugar Processing Subcategory § 409.80 Applicability; description of the...

  16. Robust and efficient overset grid assembly for partitioned unstructured meshes

    NASA Astrophysics Data System (ADS)

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
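
    A minimal 2-D sketch of the two ingredients of a donor search, uniform-bin candidate lookup plus a cell containment test (triangular cells and numpy assumed; production OGA codes use far more elaborate structures and must handle the partition boundaries the paper is concerned with):

        import numpy as np

        def build_bins(cells, verts, nbins=32):
            """Hash each triangle into the uniform bins its bounding box overlaps."""
            lo, hi = verts.min(axis=0), verts.max(axis=0)
            scale = nbins / (hi - lo)
            bins = {}
            for ci, cell in enumerate(cells):
                bmin = np.floor((verts[cell].min(axis=0) - lo) * scale).astype(int)
                bmax = np.floor((verts[cell].max(axis=0) - lo) * scale).astype(int)
                for i in range(bmin[0], bmax[0] + 1):
                    for j in range(bmin[1], bmax[1] + 1):
                        bins.setdefault((i, j), []).append(ci)
            return bins, lo, scale

        def donor_search(p, cells, verts, bins, lo, scale):
            """Return the cell containing point p, or None (p is a hole point)."""
            for ci in bins.get(tuple(np.floor((p - lo) * scale).astype(int)), []):
                a, b, c = verts[cells[ci]]
                u, v = np.linalg.solve(np.column_stack((b - a, c - a)), p - a)
                if u >= 0 and v >= 0 and u + v <= 1:   # barycentric containment test
                    return ci
            return None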

  17. Robust Prediction for Stationary Processes. 2D Enriched Version.

    DTIC Science & Technology

    1987-11-24

    the absence of data outliers. Important performance characteristics studied include the breakdown point and the influence function. Included are numerical results for some autoregressive nominal processes.

  18. 40 CFR 429.130 - Applicability; description of the finishing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., fabrication, and by-product utilization timber processing operations not otherwise covered by specific... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Finishing...

  19. Competing spreading processes on multiplex networks: awareness and epidemics.

    PubMed

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2014-07-01

    Epidemic-like spreading processes on top of multilayered interconnected complex networks reveal a rich phase diagram of intertwined competition effects. A recent study by the authors [C. Granell et al., Phys. Rev. Lett. 111, 128701 (2013)] presented an analysis of the interrelation between two processes accounting for the spreading of an epidemic, and the spreading of information awareness to prevent infection, on top of multiplex networks. The results, in the case in which awareness implies total immunization to the disease, revealed the existence of a metacritical point at which the critical onset of the epidemic starts, depending on the completion of the awareness process. Here we present a full analysis of these critical properties in the more general scenario where the awareness spreading does not imply total immunization, and where infection does not imply immediate awareness of it. We find the critical relation between the two competing processes for a wide spectrum of parameters representing the interaction between them. We also analyze the consequences of a massive broadcast of awareness (mass media) on the final outcome of the epidemic incidence. Importantly, the mass media make the metacritical point disappear. The results reveal that the main finding, i.e., the existence of a metacritical point, is rooted in the competition principle and holds for a large set of scenarios.
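
    A compact Monte-Carlo sketch of the two competing processes (the paper itself uses a microscopic Markov-chain approach; parameter names follow the usual UAU-SIS convention and the two layer adjacency matrices are assumed given): awareness is transmitted with probability lam per aware neighbor and forgotten with probability delta; infection is transmitted with probability beta, reduced by the factor gamma for aware nodes, and recovers with probability mu. Setting gamma = 0 recovers the total-immunization case.

        import numpy as np

        def uau_sis_step(A_info, A_epi, aware, infected, lam, delta, gamma, beta, mu, rng):
            """One synchronous update of coupled awareness (UAU) + epidemic (SIS) dynamics."""
            n = aware.size
            # awareness: transmitted by aware neighbors on the information layer;
            # infected nodes become aware; the aware forget with probability delta
            p_aware = 1 - (1 - lam) ** (A_info @ aware)
            new_aware = (rng.random(n) < p_aware) | infected | (aware & (rng.random(n) >= delta))
            # epidemic: aware nodes are infected with reduced probability gamma * beta
            eff_beta = np.where(new_aware, gamma * beta, beta)
            p_inf = 1 - (1 - eff_beta) ** (A_epi @ infected)
            new_inf = (rng.random(n) < p_inf) | (infected & (rng.random(n) >= mu))
            return new_aware, new_inf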

  20. [Applications and prospects of on-line near infrared spectroscopy technology in manufacturing of Chinese materia medica].

    PubMed

    Li, Yang; Wu, Zhi-Sheng; Pan, Xiao-Ning; Shi, Xin-Yuan; Guo, Ming-Ye; Xu, Bing; Qiao, Yan-Jiang

    2014-10-01

    The quality of Chinese materia medica (CMM) is affected by every process in CMM manufacturing. Given the multi-unit complexity of CMM production, on-line near-infrared (NIR) spectroscopy is used as an evaluation technology owing to its rapid, non-destructive, and pollution-free advantages. Drawing on institutional research, the application of on-line NIR to process analysis and control of CMM is described systematically, and the construction of an on-line NIR platform is used as an example to clarify the feasibility of on-line NIR technology in the CMM manufacturing process. From the point of view of application by pharmaceutical companies, current on-line NIR research on CMM and its production is then comprehensively summarized. The types of CMM products are classified into two formulations (liquid and solid dosage forms): the different production processes (extraction, concentration, alcohol precipitation, etc.) serve as the distinguishing points for liquid formulations, and the different dosage types (tablets, capsules, plasters, etc.) serve as the distinguishing points for solid dosage formulations. A summary of the literature of the past 10 years demonstrates the reliability of on-line NIR across the whole CMM production process, which can support the modernization of CMM production.

  1. A Comparative Study of Point Cloud Data Collection and Processing

    NASA Astrophysics Data System (ADS)

    Pippin, J. E.; Matheney, M.; Gentle, J. N., Jr.; Pierce, S. A.; Fuentes-Pineda, G.

    2016-12-01

    Over the past decade, there has been dramatic growth in the acquisition of publicly funded high-resolution topographic data for scientific, environmental, engineering, and planning purposes. These data sets are valuable for applications of interest across a large and varied user community. However, because of the large volumes of data produced by high-resolution mapping technologies and the expense of aerial data collection, it is often difficult to collect and distribute these datasets. Furthermore, the data can be technically challenging to process, requiring software and computing resources not readily available to many users. This study presents a comparison of advanced computing hardware and software used to collect and process point cloud datasets, such as LIDAR scans. Activities included the implementation and testing of open-source libraries and applications for point cloud data processing, such as MeshLab, Blender, PDAL, and PCL. Additionally, a suite of commercial-scale applications, Skanect and CloudCompare, was applied to raw datasets. Handheld hardware solutions, a Structure Scanner and an Xbox 360 Kinect V1, were tested for their ability to scan at three field locations. The resulting projects successfully scanned and processed subsurface karst features ranging from small stalactites to large rooms, as well as a surface waterfall feature. Outcomes support the feasibility of rapid sensing in 3D at field scales.
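
    As an example of the kind of open-source processing compared, a minimal PDAL pipeline (file names and filter parameters are illustrative) that reads a LAS scan, flags statistical outliers, drops them, and writes the cleaned cloud:

        import pdal

        pipeline_json = """
        [
            "raw_scan.las",
            { "type": "filters.outlier", "method": "statistical",
              "mean_k": 8, "multiplier": 2.5 },
            { "type": "filters.range", "limits": "Classification![7:7]" },
            "cleaned_scan.las"
        ]
        """
        pipeline = pdal.Pipeline(pipeline_json)
        n_points = pipeline.execute()   # run the pipeline; returns the point count
        print(f"wrote {n_points} cleaned points")

    Here filters.outlier marks outliers as ASPRS class 7, and the filters.range stage excludes that class before writing.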

  2. Arithmetic strategy development and its domain-specific and domain-general cognitive correlates: a longitudinal study in children with persistent mathematical learning difficulties.

    PubMed

    Vanbinst, Kiran; Ghesquière, Pol; De Smedt, Bert

    2014-11-01

    Deficits in arithmetic fact retrieval constitute the hallmark of children with mathematical learning difficulties (MLD). It remains, however, unclear which cognitive deficits underpin these difficulties in arithmetic fact retrieval. Many prior studies defined MLD by considering low achievement criteria and not by additionally taking the persistence of the MLD into account. Therefore, the present longitudinal study contrasted children with persistent MLD (MLD-p; mean age: 9 years 2 months) and typically developing (TD) children (mean age: 9 years 6 months) at three time points, to explore whether differences in arithmetic strategy development were associated with differences in numerical magnitude processing, working memory and phonological processing. Our longitudinal data revealed that children with MLD-p had persistent arithmetic fact retrieval deficits at each time point. Children with MLD-p showed persistent impairments in symbolic, but not in nonsymbolic, magnitude processing at each time point. The two groups differed in phonological processing, but not in working memory. Our data indicate that both domain-specific and domain-general cognitive abilities contribute to individual differences in children's arithmetic strategy development, and that the symbolic processing of numerical magnitudes might be a particular risk factor for children with MLD-p. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. 40 CFR Appendix A to Part 419 - Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Hydrotreating. Asphalt Processes: 18. Asphalt Production; 32. 200 °F Softening Point Unfluxed Asphalt; 43. Asphalt Oxidizing; 89. Asphalt Emulsifying. Lube Processes: 21. Hydrofining, Hydrofinishing, Lube Hydrofining; 22. White...

  4. Technology of welding aluminum alloys-II

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Step-by-step procedures were developed for high-integrity manual and machine welding of aluminum alloys. Detailed instructions are given for each step, with tables and graphs to specify materials and dimensions. Throughout the work sequence, the processing procedure designates manufacturing verification points and inspection points.

  5. Mercury Contaminated Sediment Sites: A Review Of Remedial Solutions

    EPA Science Inventory

    Mercury (Hg) can accumulate in sediment from point and non-point sources, depending on a number of physical, chemical, biological, geological and anthropogenic environmental processes. It is believed that the associated Hg contamination in aquatic systems can be decreased by imp...

  6. The guidance methodology of a new automatic guided laser theodolite system

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua

    2008-12-01

    Spatial coordinate measurement systems such as theodolites, laser trackers, and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations, meanwhile, need reflective targets and therefore cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic, noncontact measurement with high precision and efficiency. It comprises two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information for the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation, so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move to aim at it. In this paper the whole system composition and measuring principle are analyzed, with emphasis on the guidance methodology by which the laser points from the theodolites are moved toward the measured points. The guidance process is implemented through the coordinate transformation between the basic measurement system and the control and guidance system. From the field-of-view angle of the vision system unit and the world coordinates of the control and guidance system, obtained through this coordinate transformation, the azimuth of the measurement area at which the camera points can be attained. The momentary horizontal and vertical changes of the mechanical displacement movement are also calculated to provide real-time azimuth information for the pointed measurement area, according to which the motorized theodolites move. This methodology realizes the predetermined positioning of the laser points within the camera-pointed scope, accelerating the measuring process and replacing manual operation with approximate guidance. Simulation results show that the proposed automatic guidance method is effective and feasible, providing good tracking performance for the predetermined positioning of the laser points.
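
    A minimal sketch of the coordinate-transformation step underlying such guidance (names and frames are illustrative, not the paper's notation): a measured point in world coordinates is mapped into the frame of the pan-tilt (mechanical displacement) unit, from which the horizontal and vertical rotation angles follow directly.

        import numpy as np

        def guidance_angles(p_world, R, t):
            """Azimuth/elevation the unit must rotate to in order to point at p_world.
            R, t: rotation and translation mapping world coordinates into the unit's
            frame (obtained from calibration)."""
            p = R @ p_world + t
            azimuth = np.arctan2(p[1], p[0])                    # horizontal rotation
            elevation = np.arctan2(p[2], np.hypot(p[0], p[1]))  # vertical rotation
            return azimuth, elevation

        az, el = guidance_angles(np.array([2.0, 1.0, 0.5]), np.eye(3), np.zeros(3))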

  7. Mass measurement of ^80Y by β-γ coincidence spectroscopy

    NASA Astrophysics Data System (ADS)

    Brenner, Daeg; Zamfir, Victor; Berant, Zvi; Wolf, Alex; Barton, Charles; Caprio, Mark; Casten, Rick; Beausang, Con; Krücken, Reiner; Pietralla, Norbert; Cooper, Jeff; Novak, John; Aprahamian, Ani; Shawcross, Mark; Teymurazyan, Artur; Wiescher, Michael; Gill, Ron

    2002-10-01

    The rp-process has been proposed to account for the nucleosynthesis and terrestrial isotopic abundances of proton-rich nuclei. The path and termination point for this process above ^56Ni are uncertain due to our limited knowledge of nuclear properties, especially masses, near the proton drip line. ^80Y, the β-decay daughter of the waiting-point nucleus ^80Zr, was produced by bombardment of a ^58Ni target with 115 MeV ^28Si at the WNSL, Yale University. Recoil atoms were collected and transported to a shielded environment where β-γ coincidence decay measurements were made using a planar array of 4 clover Ge γ-ray detectors and a plastic scintillator β-ray detector. β-spectrum end-point energies were used to determine a Q_EC value for decay to ^80Sr. Results for ^80Y will be compared with other measurements, which vary over a range of ~2 MeV, and with the Audi-Wapstra systematics. Implications for the rp-process will be discussed.

  8. Real-time monitoring and massive inversion of source parameters of very long period seismic signals: An application to Stromboli Volcano, Italy

    USGS Publications Warehouse

    Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.

    2006-01-01

    We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
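
    A schematic numpy sketch of this kind of frequency-domain grid search (not the authors' code; shapes and names are illustrative): for each candidate location, the source mechanism is obtained by linear least squares against precomputed Green's functions, and the location with the smallest waveform misfit wins.

        import numpy as np

        def best_point_source(d_hat, G_hat):
            """d_hat: observed spectra, shape (n_freq, n_rec).
            G_hat: Green's functions, shape (n_src, n_freq, n_rec, n_mech)."""
            best, best_misfit = None, np.inf
            for s in range(G_hat.shape[0]):
                misfit = 0.0
                for f in range(d_hat.shape[0]):
                    G = G_hat[s, f]                                   # (n_rec, n_mech)
                    m, *_ = np.linalg.lstsq(G, d_hat[f], rcond=None)  # mechanism at this frequency
                    misfit += np.sum(np.abs(d_hat[f] - G @ m) ** 2)
                if misfit < best_misfit:
                    best, best_misfit = s, misfit
            return best, best_misfit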

  9. Elementary model of severe plastic deformation by KoBo process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusak, A.; Storozhuk, N.; Danielewski, M., E-mail: daniel@agh.edu.pl

    2014-01-21

    A self-consistent model of the generation, interaction, and annihilation of point defects in the gradient of oscillating stresses is presented. This model describes the recently suggested method of severe plastic deformation by a combination of pressure and oscillating rotations of the die along the billet axis (the KoBo process). The model predicts the existence of a distinct zone of reduced viscosity with a sharply increased concentration of point defects; this zone provides the high extrusion velocity. The model confirms that severe plastic deformation (SPD) in KoBo may be treated as a non-equilibrium phase transition, with an abrupt drop of viscosity in a rather well-defined spatial zone. In this very zone, an intensive lateral rotational movement proceeds together with the generation of point defects, which in a self-organized manner make rotation possible by decreasing the viscosity. The special properties of material under the KoBo version of SPD can be described without invoking the concepts of nonequilibrium grain boundaries, ballistic jumps, and amorphization. The model can be extended to include different SPD processes.

  10. Hiding Techniques for Dynamic Encryption Text based on Corner Point

    NASA Astrophysics Data System (ADS)

    Abdullatif, Firas A.; Abdullatif, Alaa A.; al-Saffar, Amna

    2018-05-01

    A hiding technique for dynamic text encryption using an encoding table and a symmetric encryption method (the AES algorithm) is presented in this paper. The encoding table is generated dynamically from the most significant bits (MSBs) of the cover-image points and used as the first phase of encryption. The Harris corner detection algorithm is applied to the cover image to generate corner points, which are used to derive a dynamic AES key for the second phase of text encryption. The embedding process writes to the least significant bits (LSBs) of the image pixels, excluding the Harris corner points, for greater robustness. Experimental results demonstrate that the proposed scheme achieves good embedding quality, error-free text recovery, and high PSNR values.
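
    A condensed sketch of this kind of pipeline (the key-derivation step below is a SHA-256 stand-in, not the paper's table-based method, and the file name is hypothetical): Harris corners fix both the key material and the pixels excluded from embedding.

        import cv2
        import hashlib
        import numpy as np

        img = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)
        response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
        corner_mask = response > 0.01 * response.max()

        # derive a 128-bit AES key from the corner-point locations
        key = hashlib.sha256(np.flatnonzero(corner_mask).tobytes()).digest()[:16]

        # embed ciphertext bits in the LSBs of non-corner pixels only
        cipher_bits = np.unpackbits(np.frombuffer(b"...ciphertext...", dtype=np.uint8))
        flat = img.flatten()
        slots = np.flatnonzero(~corner_mask.ravel())[:cipher_bits.size]
        flat[slots] = (flat[slots] & 0xFE) | cipher_bits[:slots.size]
        stego = flat.reshape(img.shape)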

  11. a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud

    NASA Astrophysics Data System (ADS)

    Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.

    2018-04-01

    To address the global registration problem for a single closed ring of multi-station point clouds, a formula for the error of the rotation matrix was constructed according to the definition of the error. A global registration algorithm for multi-station point clouds was derived to minimize this error, and fast-computing formulas for the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experiments verified the effectiveness of the new global registration method, which effectively completes the global registration of the point cloud.
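
    The abstract does not state the error formula itself; as a generic statement of the constraint a single closed ring supplies (our illustration, not necessarily the authors' formulation), the station-to-station rotations R_1, ..., R_n around the ring must compose to the identity, R_1 R_2 ... R_n = I, so the residual E = R_1 R_2 ... R_n - I measures the accumulated rotation error (e.g., via its Frobenius norm) and can be distributed over the n transformations during global registration.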

  12. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  13. Addressing Point of Need in Interactive Multimedia Instruction: A Conceptual Review and Evaluation

    DTIC Science & Technology

    2013-11-01

    classroom setting, ability grouping refers to the practice of putting students into groups on the basis of individual group members’ ability levels...presentation of elaborated/basic vs. advanced material, color cuing, pretesting and modifying learning presentation based on performance) ...learners’ points of need. The point of need concept is focused both on the accessibility of information to support the learning process as well as

  14. Damage Processes in a Quasi-Isotropic Composite Short Beam Under Three- Point Loading

    DTIC Science & Technology

    1992-01-01

    The three-point bend test is investigated for a composite with a quasi-isotropic layup. Failure is found to initiate in a region near the point of... Composites Technology & Research, Winter 1991. Copyright American Society for Testing and Materials, 1916 Race Street, Philadelphia, PA 19103.

  15. Bottlenecks and Waiting Points in Nucleosynthesis in X-ray bursts and Novae

    NASA Astrophysics Data System (ADS)

    Smith, Michael S.; Sunayama, Tomomi; Hix, W. Raphael; Lingerfelt, Eric J.; Nesaraja, Caroline D.

    2010-08-01

    To better understand the energy generation and element synthesis occurring in novae and X-ray bursts, we give quantitative definitions to the concepts of "bottlenecks" and "waiting points" in the thermonuclear reaction flow. We use these criteria to search for bottlenecks and waiting points in post-processing element synthesis explosion simulations. We have incorporated these into the Computational Infrastructure for Nuclear Astrophysics, a suite of nuclear astrophysics codes available online at nucastrodata.org, so that anyone may perform custom searches for bottlenecks and waiting points.

  16. Bottlenecks and Waiting Points in Nucleosynthesis in X-ray bursts and Novae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Michael S.; Hix, W. Raphael; Nesaraja, Caroline D.

    2010-08-12

    To better understand the energy generation and element synthesis occurring in novae and X-ray bursts, we give quantitative definitions to the concepts of "bottlenecks" and "waiting points" in the thermonuclear reaction flow. We use these criteria to search for bottlenecks and waiting points in post-processing element synthesis explosion simulations. We have incorporated these into the Computational Infrastructure for Nuclear Astrophysics, a suite of nuclear astrophysics codes available online at nucastrodata.org, so that anyone may perform custom searches for bottlenecks and waiting points.

  17. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. As the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  18. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, obtained from low-cost global positioning system and inertial measurement unit sensors.

  19. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    PubMed

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
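
    As a quick illustration of the density bookkeeping involved (a numpy sketch with hypothetical coordinates, not the authors' code), the horizontal point density of a tile can be rasterized and compared against a target such as ~170 pt/m²:

        import numpy as np

        def point_density(xy, cell=1.0):
            """Points per square meter on a regular grid over the cloud's extent."""
            ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
            counts = np.zeros(ij.max(axis=0) + 1)
            np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
            return counts / cell ** 2

        xy = np.random.rand(120000, 2) * 25.0          # hypothetical 25 m x 25 m tile
        dens = point_density(xy)
        print(f"mean density {dens.mean():.0f} pt/m2; meets 170 pt/m2? {dens.mean() >= 170}")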

  20. Delivering Cognitive Processing Therapy in a Community Health Setting: The Influence of Latino Culture and Community Violence on Posttraumatic Cognitions

    PubMed Central

    Marques, Luana; Eustis, Elizabeth H.; Dixon, Louise; Valentine, Sarah E.; Borba, Christina; Simon, Naomi; Kaysen, Debra; Wiltsey-Stirman, Shannon

    2015-01-01

    Despite the applicability of Cognitive Processing Therapy (CPT) for Posttraumatic Stress Disorder (PTSD) to addressing sequelae of a range of traumatic events, few studies have evaluated whether the treatment itself is applicable across diverse populations. The present study examined differences and similarities amongst non-Latino, Latino Spanish-speaking, and Latino English-speaking clients in rigid beliefs – or “stuck points” – associated with PTSD symptoms in a sample of community mental health clients. We utilized the procedures of content analysis to analyze stuck point logs and impact statements of 29 participants enrolled in a larger implementation trial for CPT. Findings indicated that the content of stuck points was similar across Latino and non-Latino clients, although fewer total stuck points were identified for Latino clients compared to non-Latino clients. Given that identification of stuck points is central to implementing CPT, difficulty identifying stuck points could pose significant challenges for implementing CPT among Latino clients and warrants further examination. Thematic analysis of impact statements revealed the importance of family, religion, and the urban context (e.g., poverty, violence exposure) in understanding how clients organize beliefs and emotions associated with trauma. Clinical recommendations for implementing CPT in community settings and the identification of stuck points are provided. PMID:25961865
