Sample records for regularized centroid transform

  1. Instantaneous Frequency Attribute Comparison

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.

    2013-12-01

    The instantaneous frequency attribute provides a complementary means of interpretation for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al. (1979), entitled "Complex seismic trace analysis". A vast literature has since accumulated on the subject, excellently reviewed by Barnes (1992). In this research we compare two different methods of computing the instantaneous frequency. The first method, based on the original idea of Taner et al. (1979), uses the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid of the time-frequency spectrum, obtained using either the Gabor transform as computed by Margrave et al. (2011) or the Stockwell transform as described by Stockwell et al. (1996). We apply both methods to exploration seismic data and to the DPRK events recorded in 2006 and 2013. The classical analytic-signal technique is known to be unstable because it requires division by the square of the envelope; in applying it, we therefore incorporate the stabilization and smoothing method proposed in the two papers of Fomel (2007), which couples linear-inverse-theory regularization with an appropriate data smoother. The centroid method is straightforward to apply and rests on the complete theoretical analysis given in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences appear at the data edges, most likely due to edge effects of the smoothing operator in the Fomel method. The Fomel method is also more computationally intensive, especially when an optimal search for the regularization parameter is performed. An advantage of the centroid method is the intrinsic smoothing of the data inherent in the sliding-window application used in all short-time Fourier transform methods.
The Fomel technique has a larger CPU run time, resulting from the necessary matrix inversion.

References:
Barnes, Arthur E. "The calculation of instantaneous frequency and instantaneous bandwidth." Geophysics 57.11 (1992): 1520-1524.
Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995.
Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33.
Fomel, Sergey. "Shaping regularization in geophysical-estimation problems." Geophysics 72.2 (2007): R29-R36.
Margrave, Gary F., Michael P. Lamoureux, and David C. Henley. "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data." Geophysics 76.3 (2011): W15-W30.
Stockwell, Robert Glenn, Lalu Mansinha, and R. P. Lowe. "Localization of the complex spectrum: the S transform." IEEE Transactions on Signal Processing 44.4 (1996): 998-1001.
Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
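
The two estimators compared above can be sketched in a few lines of NumPy/SciPy. This is a sketch under assumptions: the stabilization constant, window length and test signal are illustrative choices, not the parameters used in the paper.

```python
import numpy as np
from scipy.signal import hilbert, stft

def inst_freq_analytic(trace, dt, eps=1e-3):
    """Instantaneous frequency from the derivative of the analytic-signal
    phase, stabilized by adding a small fraction of the peak squared envelope
    to the denominator (a simple stand-in for Fomel-style regularization)."""
    z = hilbert(trace)
    x, y = z.real, z.imag
    # f = (x*y' - y*x') / (2*pi*(x^2 + y^2)), with stabilized denominator
    num = x * np.gradient(y, dt) - y * np.gradient(x, dt)
    den = x**2 + y**2
    return num / (2 * np.pi * (den + eps * den.max()))

def inst_freq_centroid(trace, dt, nperseg=64):
    """Temporally local frequency as the power centroid of a short-time spectrum."""
    f, t, Z = stft(trace, fs=1.0 / dt, nperseg=nperseg)
    P = np.abs(Z) ** 2
    return t, (f[:, None] * P).sum(axis=0) / (P.sum(axis=0) + 1e-30)

# quick sanity check on a 25 Hz sinusoid
dt = 0.002
t = np.arange(0, 1, dt)
trace = np.sin(2 * np.pi * 25 * t)
fi = inst_freq_analytic(trace, dt)
print(round(float(np.median(fi)), 1))  # close to 25
```

On a monochromatic signal both estimates should recover the oscillation frequency; differences between the two show up mainly at the data edges, as noted above.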

  2. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency based on the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and uses the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is a division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications of this work is discrimination between blast events and earthquakes.

References:
Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995.
Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425.
Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33.
Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
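
The regularized division common to both methods can be illustrated as a small Tikhonov-style linear inverse problem. The second-difference roughness operator and the fixed lambda below are illustrative stand-ins for the paper's shaping regularization and its L-curve/GCV parameter selection.

```python
import numpy as np

def regularized_division(num, den, lam=0.1):
    """Smooth division num/den posed as a linear inverse problem:
    minimize ||diag(den) m - num||^2 + lam * ||L m||^2,
    where L is a second-difference roughness operator (an illustrative
    choice; the paper's shaping regularization differs in detail)."""
    n = len(num)
    D = np.diag(den)
    L = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    return np.linalg.solve(D.T @ D + lam * (L.T @ L), D.T @ num)

num = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
den = np.array([1.0, 1.0, 1e-6, 1.0, 1.0])  # naive num/den blows up here
print(np.round(regularized_division(num, den), 2))
```

Where the denominator is near zero the naive quotient is of order 10^6, while the regularized estimate stays bounded and smooth; sweeping lam and picking the corner of the misfit-roughness trade-off curve is the L-curve idea mentioned above.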

  3. An actuator extension transformation for a motion simulator and an inverse transformation applying Newton-Raphson's method

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1972-01-01

    A set of equations which transform position and angular orientation of the centroid of the payload platform of a six-degree-of-freedom motion simulator into extensions of the simulator's actuators has been derived and is based on a geometrical representation of the system. An iterative scheme, Newton-Raphson's method, has been successfully used in a real time environment in the calculation of the position and angular orientation of the centroid of the payload platform when the magnitude of the actuator extensions is known. Sufficient accuracy is obtained by using only one Newton-Raphson iteration per integration step of the real time environment.
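
A toy planar analogue of the Newton-Raphson inversion might look as follows. The two-actuator geometry and anchor positions are invented for illustration; the simulator described above uses six actuators in three dimensions, but the iteration structure is the same.

```python
import numpy as np

# Forward transform: platform point p -> actuator lengths (distances to
# fixed anchors). The inverse (lengths -> p) is recovered by Newton-Raphson.
anchors = np.array([[0.0, 0.0], [4.0, 0.0]])  # illustrative anchor positions

def lengths(p):
    return np.linalg.norm(anchors - p, axis=1)

def jacobian(p):
    # d(length_i)/dp = unit vector from anchor i to p
    d = p - anchors
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def newton_inverse(target_lengths, p0, iters=8):
    p = p0.astype(float)
    for _ in range(iters):
        p += np.linalg.solve(jacobian(p), target_lengths - lengths(p))
    return p

true_p = np.array([1.0, 2.0])
est = newton_inverse(lengths(true_p), p0=np.array([1.5, 1.5]))
print(est)  # converges toward [1, 2]
```

As the abstract notes, in a real-time loop one Newton-Raphson iteration per integration step can suffice, because the previous step's solution is already a very good starting guess.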

  4. Magic of Centroids

    ERIC Educational Resources Information Center

    Ferrarello, Daniela; Mammana, Maria Flavia; Pennisi, Mario

    2018-01-01

    In this paper, we show some properties of centroids of geometric figures, such as triangles, quadrilaterals and tetrahedra. In particular, we will prove the properties by means of geometric transformations and by introducing extensions of triangles and quadrilaterals, i.e. by adding one, two or three new vertices to the figure. The study of these…

  5. Inversion exercises inspired by mechanics

    NASA Astrophysics Data System (ADS)

    Groetsch, C. W.

    2016-02-01

    An elementary calculus transform, inspired by the centroid and gyration radius, is introduced as a prelude to the study of more advanced transforms. Analysis of the transform, including its inversion, makes use of several key concepts from basic calculus and exercises in the application and inversion of the transform provide practice in the use of technology in calculus.

  6. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    NASA Astrophysics Data System (ADS)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into four major serotypes (DEN-1 to DEN-4). This study performs clustering on 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and then analyzes the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids indicates the density level of interaction. Protein interactions that are connected in a tissue form a protein complex that serves as a specific biological-process unit. The analysis shows that R-MCL clustering produces clusters of the dengue virus family based on the similarity of the roles of their constituent proteins, regardless of serotype.
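
A minimal R-MCL sketch on a toy graph is shown below, assuming the common formulation in which the expansion step of plain MCL is replaced by multiplication with the fixed initial column-stochastic matrix. The graph, inflation value and iteration count are illustrative; the paper's preprocessing of protein-similarity scores is not reproduced.

```python
import numpy as np

def r_mcl(adj, inflation=2.0, iters=50):
    """Regularized Markov Clustering sketch: regularize (multiply by the
    fixed initial matrix M_G), inflate (elementwise power), renormalize."""
    A = adj + np.eye(len(adj))      # add self-loops for stability
    M_G = A / A.sum(axis=0)         # column-stochastic initial matrix
    M = M_G.copy()
    for _ in range(iters):
        M = M @ M_G                 # regularization step (replaces expansion)
        M = M ** inflation          # inflation step
        M = M / M.sum(axis=0)       # renormalize columns
    # each column's cluster label is the index of its attractor row
    return M.argmax(axis=0)

# A 4-clique and a triangle joined by a single edge (nodes 3-4)
adj = np.array([[0, 1, 1, 1, 0, 0, 0],
                [1, 0, 1, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0, 0],
                [1, 1, 1, 0, 1, 0, 0],
                [0, 0, 0, 1, 0, 1, 1],
                [0, 0, 0, 0, 1, 0, 1],
                [0, 0, 0, 0, 1, 1, 0]], dtype=float)
labels = r_mcl(adj)
print(labels)
```

The two dense subgraphs end up with distinct attractors, mirroring how densely interacting protein groups fall into the same cluster.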

  7. Magic of centroids

    NASA Astrophysics Data System (ADS)

    Ferrarello, Daniela; Mammana, Maria Flavia; Pennisi, Mario

    2018-05-01

    In this paper, we show some properties of centroids of geometric figures, such as triangles, quadrilaterals and tetrahedra. In particular, we will prove the properties by means of geometric transformations and by introducing extensions of triangles and quadrilaterals, i.e. by adding one, two or three new vertices to the figure. The study of these properties can be used, with profit, in a classroom activity supported by a dynamic geometry system.

  8. Observational Evidence for the Effect of Amplification Bias in Gravitational Microlensing Experiments

    NASA Astrophysics Data System (ADS)

    Han, Cheongho; Jeong, Youngjin; Kim, Ho-Il

    1998-11-01

    Recently Alard, Mao, & Guibert and Alard proposed to detect the shift of a star's image centroid, δx, as a method to identify the lensed source among blended stars. Goldberg & Woźniak actually applied this method to the OGLE-1 database and found that seven of 15 events showed significant centroid shifts of δx >~ 0.2". The amount of centroid shift has been estimated theoretically by Goldberg; however, he treated the problem in general and did not apply it to a particular survey or field and therefore based his estimate on simple toy model luminosity functions (i.e., power laws). In this paper, we construct the expected distribution of δx for Galactic bulge events based on the precise stellar luminosity function observed by Holtzman et al. using the Hubble Space Telescope. Their luminosity function is complete up to MI ~ 9.0 (MV ~ 12), which corresponds to faint M-type stars. In our analysis we find that regular blending cannot produce a large fraction of events with measurable centroid shifts. By contrast, a significant fraction of events would have measurable centroid shifts if they are affected by amplification-bias blending. Therefore, the measurements of large centroid shifts for an important fraction of microlensing events of Goldberg & Woźniak confirm the prediction of Han & Alard that a large fraction of Galactic bulge events are affected by amplification-bias blending.

  9. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses; however, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displays two regular dot-matrix patterns with different dot spacing. In a special case, images on the detector exhibit similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector are utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed an approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.

  10. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution.

    PubMed

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-07

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  11. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution

    NASA Astrophysics Data System (ADS)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-01

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  12. Homothetic Transformations and Geometric Loci: Properties of Triangles and Quadrilaterals

    ERIC Educational Resources Information Center

    Mammana, Maria Flavia

    2016-01-01

    In this paper, we use geometric transformations to find some interesting properties related with geometric loci. In particular, given a triangle or a cyclic quadrilateral, the locus generated by the centroid or by the orthocentre (for triangles) or by the anticentre (for cyclic quadrilaterals) when one vertex moves on the circumcircle of the…

  13. Transform fault earthquakes in the North Atlantic: Source mechanisms and depth of faulting

    NASA Technical Reports Server (NTRS)

    Bergman, Eric A.; Solomon, Sean C.

    1987-01-01

    The centroid depths and source mechanisms of 12 large earthquakes on transform faults of the northern Mid-Atlantic Ridge were determined from an inversion of long-period body waveforms. The earthquakes occurred on the Gibbs, Oceanographer, Hayes, Kane, 15 deg 20 min, and Vema transforms. The depth extent of faulting during each earthquake was estimated from the centroid depth and the fault width. The source mechanisms for all events in this study display the strike-slip motion expected for transform fault earthquakes; slip vector azimuths agree to within 2 to 3 deg of the local strike of the zone of active faulting. The only anomalies in mechanism were for two earthquakes near the western end of the Vema transform, which occurred on significantly nonvertical fault planes. Secondary faulting, occurring either precursory to or near the end of the main episode of strike-slip rupture, was observed for 5 of the 12 earthquakes. For three events the secondary faulting was characterized by reverse motion on fault planes striking oblique to the trend of the transform. In all three cases, the site of secondary reverse faulting is near a compressional jog in the current trace of the active transform fault zone. No evidence was found to support the conclusions of Engeln, Wiens, and Stein that oceanic transform faults in general are either hotter than expected from current thermal models or weaker than normal oceanic lithosphere.

  14. Two-dimensional shape recognition using oriented-polar representation

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Yu, Kuo-Kan; Hsu, Yung-Li

    1997-10-01

    To deal with position-, scale-, and rotation-invariant (PSRI) object recognition, we utilize some PSRI properties of images obtained from objects, for example, the centroid of the image. The position of the centroid relative to the boundary of the image is invariant under rotation, scaling, and translation of the image. To obtain the information of the image, we use a technique similar to the Radon transform, called the oriented-polar representation of a 2D image. In this representation, two specific points, the centroid and the weighted mean point, are selected to form an initial ray; the image is then sampled with N angularly equispaced rays departing from the initial ray. Each ray contains a number of intersections and the distance information obtained from the centroid to the intersections. The shape recognition algorithm is based on the least total error of these two items of information. Together with simple noise removal and a typical backpropagation neural network, this algorithm is simple, yet PSRI is achieved with a high recognition rate.

  15. Effects of emphasising opposition and cooperation on collective movement behaviour during football small-sided games.

    PubMed

    Gonçalves, B; Marcelino, R; Torres-Ronda, L; Torrents, C; Sampaio, J

    2016-07-01

    Optimizing collective behaviour helps to increase performance in mutual tasks. In team sports settings, small-sided games (SSG) have been used as key tools to heighten players' awareness of their required in-game behaviours. Research has mostly described these behaviours when confronting teams have the same number of players, disregarding the frequent situations of low and high inequality. This study compared the players' positioning dynamics when manipulating the number of opponents and teammates during professional and amateur football SSG. The participants played 4v3, 4v5 and 4v7 games, where one team was confronted with low-superiority, low- and high-inferiority situations, and their opponents with low-, medium- and high-cooperation situations. Positional data were used to calculate the effective playing space and the distances from each player to the team centroid, the opponent team centroid and the nearest opponent. Outcomes suggested that increasing the number of opponents in professional teams resulted in a moderate/large decrease in approximate entropy (ApEn) values for both the distance to the team centroid and the distance to the opponent team centroid (i.e., these variables show a more regular/predictable pattern). In low-cooperation game scenarios, the ApEn of the amateurs' tactical variables presented a moderate/large increase. The professional teams presented an increase in the distance to the nearest opponent as the cooperation level increased. Increasing the number of opponents was effective in overemphasising the professionals' need to use local information in the positioning decision-making process; conversely, amateurs still rely on external informational feedback. Increasing cooperation promoted more regularity in the amateurs' spatial organisation and emphasised their players' local perceptions.
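
The positional metrics named above (team centroid, distance to own and opposing centroid, distance to nearest opponent) reduce to a few vectorised operations. The coordinates below are invented pitch positions in metres, not data from the study.

```python
import numpy as np

# Illustrative player positions for a 4v3 situation (metres)
team_a = np.array([[10.0, 30.0], [20.0, 35.0], [15.0, 40.0], [25.0, 30.0]])
team_b = np.array([[30.0, 32.0], [35.0, 38.0], [40.0, 30.0]])

centroid_a = team_a.mean(axis=0)            # team A centroid
centroid_b = team_b.mean(axis=0)            # opponent team centroid

dist_to_own_centroid = np.linalg.norm(team_a - centroid_a, axis=1)
dist_to_opp_centroid = np.linalg.norm(team_a - centroid_b, axis=1)

# distance from each team-A player to the nearest opponent
pairwise = np.linalg.norm(team_a[:, None, :] - team_b[None, :, :], axis=2)
dist_to_nearest_opp = pairwise.min(axis=1)

print(centroid_a, dist_to_nearest_opp)
```

In the study these per-frame distance series are the inputs to the approximate-entropy (ApEn) computation that quantifies their regularity over time.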

  16. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then finds a codebook starting from the initial vectors supplied by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of linearly uncorrelated variables. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
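
The PCA-LBG-Centroid initialisation described above can be sketched as follows: project the training vectors onto the first principal component, split them into equally sized groups by projected value, and take each group's centroid as an initial codeword. The subsequent LBG refinement iterations are omitted, and the group-splitting rule is an illustrative reading of the grouping step.

```python
import numpy as np

def pca_lbg_centroid(vectors, n_groups):
    """Initial VQ codewords: group training vectors by their projection
    onto the first principal component, return each group's centroid."""
    X = vectors - vectors.mean(axis=0)
    # first principal component via SVD of the centred data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ Vt[0]
    order = np.argsort(proj)
    groups = np.array_split(order, n_groups)
    return np.array([vectors[g].mean(axis=0) for g in groups])

rng = np.random.default_rng(0)
vectors = rng.normal(size=(100, 8))          # synthetic training vectors
codebook = pca_lbg_centroid(vectors, n_groups=4)
print(codebook.shape)  # (4, 8)
```

These centroids would then seed the standard LBG loop of nearest-codeword assignment and centroid update.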

  17. Iris recognition using image moments and k-means algorithm.

    PubMed

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%.
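
The clustering and assignment steps above can be sketched with plain k-means and a nearest-centroid classifier. The synthetic "moment feature" data below stand in for the real invariant-moment vectors extracted from iris images.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: assign each vector to the nearest centroid,
    then recompute centroids, as in the clustering step above."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

def classify(feature_vector, centroids):
    """An unseen feature vector joins the cluster whose centroid is
    nearest in Euclidean distance."""
    return int(np.linalg.norm(centroids - feature_vector, axis=1).argmin())

# two well-separated synthetic feature clouds
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
centroids, _ = kmeans(X, k=2)
print(classify(np.array([4.9, 5.1, 5.0]), centroids))
```

A query near one cloud is assigned to that cloud's centroid, which is exactly the decision rule the abstract describes for an arbitrary iris image.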

  18. Iris Recognition Using Image Moments and k-Means Algorithm

    PubMed Central

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk shaped area of the iris is transformed into a rectangular form. Described moments are extracted from the grayscale image which yields a feature vector containing scale, rotation, and translation invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is the nearest to the feature vector in terms of Euclidean distance computed. The described model exhibits an accuracy of 98.5%. PMID:24977221

  19. 1-Bromo-1′-(diphenylthiophosphoryl)ferrocene

    PubMed Central

    Štěpnička, Petr; Schulz, Jiří; Císařová, Ivana

    2009-01-01

    The title compound, [Fe(C5H4Br)(C17H14PS)], crystallizes with two practically indistinguishable molecules in the asymmetric unit, which are related by a non-space-group inversion. The ferrocene-1,1′-diyl units exhibit a regular geometry with negligible tilting and balanced Fe–ring centroid distances, and with the attached substituents assuming conformations close to ideal synclinal eclipsed. PMID:21577736

  20. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity-based regularization has been a popular approach to remedy measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we add global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance compared to algorithms that use either the patchwise transform learning or the global regularization terms alone.

  1. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.

  2. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.

    PubMed

    Stępień, Grzegorz

    2018-03-17

    The following article presents a new isometric transformation algorithm based on transformation in a newly normed Hilbert-type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (the interior and exterior orientation of the sensors) to a common point known in both systems. Each of the systems is translated along its axes (the systems have common origins) while the relative angular orientation of both coordinate systems remains constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert-type space. As such, the displacement of the two relative oblique orthogonal systems is reduced to zero, which makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for a test data set and for measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10-6 of the input data, which confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert-type space; for this reason it is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and, by reducing the translation of the two relative oblique orthogonal coordinate systems to zero, enables direct calculation of the exterior orientation of the sensors.

  3. Vehicle speed detection based on gaussian mixture model using sequential of images

    NASA Astrophysics Data System (ADS)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent Transportation Systems are one of the important components in the development of smart cities, and detection of vehicle speed on the highway supports traffic-engineering management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows. The inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). First, we separate foreground and background in each frame using a Gaussian Mixture Model (GMM). Then, in each frame, we calculate the location of the object and its centroid. Next, we determine the speed by computing the movement of the centroid across the sequence of frames, considering only frames in which the centroid lies inside the predefined ROI. Finally, we transform the pixel displacement into a speed in km/hour. Validation of the system is done by comparing the speed calculated manually with the speed obtained by the system. In software testing, the system detects vehicle speeds with a highest accuracy of 97.52% and a lowest accuracy of 77.41%; the tests include real video footage of the road together with the real speed of the vehicle.
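
The final conversion step, from per-frame centroid displacement in pixels to km/hour, can be sketched as below. The metres-per-pixel scale and frame rate are made-up calibration values, not those of the study.

```python
def speed_kmh(centroid_positions_px, fps, metres_per_pixel):
    """Average speed from a list of per-frame (x, y) centroid positions
    observed inside the ROI."""
    total_px = 0.0
    for (x0, y0), (x1, y1) in zip(centroid_positions_px, centroid_positions_px[1:]):
        total_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    seconds = (len(centroid_positions_px) - 1) / fps
    # pixels -> metres -> m/s -> km/h
    return total_px * metres_per_pixel / seconds * 3.6

# centroid moving 10 px/frame at 25 fps with a 0.05 m/px scale
track = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
print(speed_kmh(track, fps=25, metres_per_pixel=0.05))  # about 45 km/h
```

The metres-per-pixel factor is what a homography or ground-plane calibration of the camera would supply in practice.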

  4. Automatic segmentation and centroid detection of skin sensors for lung interventions

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Xu, Sheng; Xue, Zhong; Wong, Stephen T.

    2012-02-01

    Electromagnetic (EM) tracking has been recognized as a valuable tool for locating interventional devices in procedures such as lung and liver biopsy or ablation. The advantage of this technology is its real-time connection to the 3D volumetric roadmap, i.e. CT, of a patient's anatomy while the intervention is performed. EM-based guidance requires tracking the tip of the interventional device, transforming the location of the device onto pre-operative CT images, and superimposing the device in the 3D images to assist physicians to complete the procedure more effectively. A key requirement of this data integration is to find automatically the mapping between the EM and CT coordinate systems. Thus, skin fiducial sensors are attached to patients before acquiring the pre-operative CTs. Those sensors can then be recognized in both the CT and EM coordinate systems and used to calculate the transformation matrix. In this paper, to enable the EM-based navigation workflow and reduce procedural preparation time, an automatic fiducial detection method is proposed to obtain the centroids of the sensors from the pre-operative CT. The approach has been applied to 13 rabbit datasets derived from an animal study and eight human images from an observation study. The numerical results show that it is a reliable and efficient method for use in EM-guided applications.

  5. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with a feature-based artificial neural network technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
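
The intensity-weighted center-of-mass step described above is a one-liner in array form; the tiny synthetic "particle" image below is illustrative, and the perimeter search and overlap decomposition are not reproduced.

```python
import numpy as np

def intensity_weighted_centroid(image):
    """Centroid as the intensity-weighted center of mass of an object image,
    as in the centroid-finding step described above."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return float((rows * image).sum() / total), float((cols * image).sum() / total)

# synthetic particle: a uniform 2x2 blob centred between pixels (1.5, 2.5)
img = np.zeros((5, 5))
img[1:3, 2:4] = 1.0
print(intensity_weighted_centroid(img))  # (1.5, 2.5)
```

For real data the same formula applies to the grey levels inside the perimeter found by the eight-direction search, which is what makes the estimate sub-pixel accurate.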

  6. On static triplet structures in fluids with quantum behavior.

    PubMed

    Sesé, Luis M

    2018-03-14

    The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.

  7. On static triplet structures in fluids with quantum behavior

    NASA Astrophysics Data System (ADS)

    Sesé, Luis M.

    2018-03-01

    The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.

  8. Angles-centroids fitting calibration and the centroid algorithm applied to reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Zhao, Zhu; Hui, Mei; Xia, Zhengzheng; Dong, Liquan; Liu, Ming; Liu, Xiaohua; Kong, Lingqin; Zhao, Yuejin

    2017-02-01

    In this paper, we develop an angles-centroids fitting (ACF) system and a centroid algorithm to calibrate the reverse Hartmann test (RHT) with sufficient precision. The essence of ACF calibration is to establish the relationship between ray angles and detector coordinates. Centroid computation is used to find correspondences between the rays of datum marks and detector pixels. Here, the point spread function of the RHT is classified as a circle of confusion (CoC), and fitting a CoC spot with a 2D Gaussian profile to identify the centroid forms the basis of the centroid algorithm. Theoretical and experimental results of centroid computation demonstrate that the Gaussian fitting method yields a smaller centroid shift, and that the shift grows more slowly as image quality degrades. In ACF tests, the optical instrumental alignments reach an overall accuracy of 0.1 pixel with the application of a laser-spot centroid-tracking program. With the crystal located at different positions, the feasibility and accuracy of ACF calibration are further validated to a root-mean-square error of the calibration differences of 10^-6 to 10^-4 rad.
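The details of the 2D Gaussian fit are not given in the abstract; a common 1-D sketch of the underlying idea, which is exact for a noise-free sampled Gaussian, fits a parabola to the logarithm of the three samples around the peak (the function name and test profile below are illustrative, not the paper's method):

```python
import numpy as np

def gaussian_subpixel_peak(profile):
    """Sub-pixel peak location of a 1-D Gaussian-like spot profile.

    Fits a parabola to the log of the three samples around the peak
    (exact for a sampled Gaussian). Assumes the peak is not at an edge.
    Returns a fractional pixel index.
    """
    p = int(np.argmax(profile))
    l0 = np.log(profile[p])
    lm = np.log(profile[p - 1])
    lp = np.log(profile[p + 1])
    delta = (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))
    return p + delta

# A noise-free Gaussian centred at 3.3 pixels is recovered exactly.
x = np.arange(7)
spot = np.exp(-((x - 3.3) ** 2) / 2.0)
print(round(gaussian_subpixel_peak(spot), 3))  # 3.3
```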

  9. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we have proposed multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with the multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.
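The final classification step, assigning a test pattern to the nearest class centroid, can be sketched as a generic nearest-centroid classifier (this is not the VVRKFA mapping itself, and the data below are toy values):

```python
import numpy as np

def fit_centroids(X, y):
    """Class centroids of patterns X (rows) with labels y."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each row of X to the class with the nearest centroid."""
    # Euclidean distance from every pattern to every class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Two well-separated toy clusters in a 2-D "label space".
X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
classes, cents = fit_centroids(X, y)
print(nearest_centroid_predict(np.array([[0.05, 0.05], [0.95, 0.95]]), classes, cents))  # [0 1]
```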

  10. Nonlinear equations of motion for the elastic bending and torsion of twisted nonuniform rotor blades

    NASA Technical Reports Server (NTRS)

    Hodges, D. H.; Dowell, E. H.

    1974-01-01

    The equations of motion are developed by two complementary methods, Hamilton's principle and the Newtonian method. The resulting equations are valid to second order for long, straight, slender, homogeneous, isotropic beams undergoing moderate displacements. The ordering scheme is based on the restriction that squares of the bending slopes, the torsion deformation, and the chord/radius and thickness/radius ratios are negligible with respect to unity. All remaining nonlinear terms are retained. The equations are valid for beams with mass centroid axis and area centroid (tension) axis offsets from the elastic axis, nonuniform mass and stiffness section properties, variable pretwist, and a small precone angle. The strain-displacement relations are developed from an exact transformation between the deformed and undeformed coordinate systems. These nonlinear relations form an important contribution to the final equations. Several nonlinear structural and inertial terms in the final equations are identified that can substantially influence the aeroelastic stability and response of hingeless helicopter rotor blades.

  11. Computer-aided detection of clustered microcalcifications in multiscale bilateral filtering regularized reconstructed digital breast tomosynthesis volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samala, Ravi K., E-mail: rsamala@umich.edu; Chan, Heang-Ping; Lu, Yao

    Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volumes enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with MSBF-regularized simultaneous algebraic reconstruction technique (SART), designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined a multiscale Hessian response to enhance MCs by shape with bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, CNR values, and number of MCs in the cluster, cluster shape, and cluster-based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare it with that of a previous study. Results: An unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs. 
For view-based detection, a sensitivity of 85% was achieved at an FP rate of 2.16 per DBT volume. For case-based detection, a sensitivity of 85% was achieved at an FP rate of 0.85 per DBT volume. JAFROC analysis showed a significant improvement in the performance of the current CADe system compared to that of our previous system (p = 0.003). Conclusions: MSBF-regularized SART reconstruction enhances MCs. The enhancement in the signals, in combination with properly designed adaptive threshold criteria, effective MC feature analysis, and false positive reduction techniques, leads to a significant improvement in the detection of clustered MCs in DBT.

  12. Outburst of GX304-1 Monitored with INTEGRAL: Positive Correlation Between the Cyclotron Line Energy and Flux

    NASA Technical Reports Server (NTRS)

    Klochkov, D.; Doroshenko, V.; Santangelo, A.; Staubert, R.; Ferrigno, C.; Kretschmar, P.; Caballero, I.; Wilms, J.; Kreykenbohm, I.; Pottschmidt, I.

    2012-01-01

    Context. X-ray spectra of many accreting pulsars exhibit significant variations as a function of flux and thus of mass accretion rate. In some of these pulsars, the centroid energy of the cyclotron line(s), which characterizes the magnetic field strength at the site of the X-ray emission, has been found to vary systematically with flux. Aims. GX304-1 is a recently established cyclotron line source with a line energy around 50 keV. Since 2009, the pulsar shows regular outbursts with the peak flux exceeding one Crab. We analyze the INTEGRAL observations of the source during its outburst in January-February 2012. Methods. The observations covered almost the entire outburst, allowing us to measure the source's broad-band X-ray spectrum at different flux levels. We report on the variations in the spectral parameters with luminosity and focus on the variations in the cyclotron line. Results. The centroid energy of the line is found to be positively correlated with the luminosity. We interpret this result as a manifestation of the local sub-Eddington (sub-critical) accretion regime operating in the source.

  13. The Implications of Strike-Slip Earthquake Source Properties on the Transform Boundary Development Process

    NASA Astrophysics Data System (ADS)

    Neely, J. S.; Huang, Y.; Furlong, K.

    2017-12-01

    Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behavior. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80-100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays, and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on empirical Green's functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again towards the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating a growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process (see figure). The increasing complexity possibly stems from the earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes. 
When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in determining potential earthquake size.

  14. A centroid model of species distribution with applications to the Carolina wren Thryothorus ludovicianus and house finch Haemorhous mexicanus in the United States

    USGS Publications Warehouse

    Huang, Qiongyu; Sauer, John R.; Swatantran, Anu; Dubayah, Ralph

    2016-01-01

    Drastic shifts in species distributions are a cause of concern for ecologists. Such shifts pose a great threat to biodiversity, especially under unprecedented anthropogenic and natural disturbances. Many studies have documented recent shifts in species distributions. However, most of these studies are limited to regional scales and do not consider the abundance structure within species ranges. Developing methods to detect systematic changes in species distributions over their full ranges is critical for understanding the impact of changing environments and for successful conservation planning. Here, we demonstrate a centroid model for range-wide analysis of distribution shifts using the North American Breeding Bird Survey. The centroid model is based on a hierarchical Bayesian framework which models population change within physiographic strata while accounting for several factors affecting species detectability. Yearly abundance-weighted range centroids are estimated. As case studies, we derive annual centroids for the Carolina wren and house finch in their ranges in the U.S. We further evaluate the first-difference correlation between species' centroid movement and changes in winter severity and total population abundance, and examine associations between changes in full-range and sub-range centroids. Changes in the full-range centroid movement of the Carolina wren correlate significantly with snow cover days (r = −0.58). For both species, the full-range centroid shifts also correlate strongly with total abundance (r = 0.65 and 0.51, respectively). The movements of the full-range centroids of the two species are correlated strongly (up to r = 0.76) with those of the sub-ranges with more drastic population changes. Our study demonstrates the usefulness of centroids for analyzing distribution changes in a two-dimensional spatial context. In particular, it highlights applications that associate the centroid with factors such as environmental stressors, population characteristics, and the progression of invasive species. Routine monitoring of changes in centroids will provide useful insights into long-term avian responses to environmental changes.
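For illustration only: once abundances are in hand, an abundance-weighted range centroid reduces to a weighted mean of site coordinates (the hierarchical estimation of those abundances is the substance of the paper, and planar averaging of longitude/latitude is an approximation that ignores the Earth's curvature):

```python
import numpy as np

def abundance_weighted_centroid(lons, lats, abundance):
    """Weighted mean position of survey sites, weighted by abundance.

    Hypothetical helper: lons/lats are site coordinates in degrees,
    abundance the estimated counts at those sites.
    """
    w = np.asarray(abundance, dtype=float)
    return float(np.average(lons, weights=w)), float(np.average(lats, weights=w))

# If all abundance sits at one site, the centroid is that site.
print(abundance_weighted_centroid([-90.0, -80.0], [35.0, 40.0], [1.0, 0.0]))  # (-90.0, 35.0)
```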

  15. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.

  16. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, Haar's generalized wavelet functions are applied to the problem of ecological monitoring by the method of remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by classes of functions that were introduced and described in detail by I. M. Sobol for studying multidimensional quadrature formulas, and which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the problem of using orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive information of medium and high spatial resolution from spacecraft and to conduct hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, the apparatus of discrete orthogonal transforms, namely wavelet transforms, was used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, the elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the task of processing archival satellite images, in particular signal filtering, was investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations were determined.

  17. Aeroelastically coupled blades for vertical axis wind turbines

    DOEpatents

    Paquette, Joshua; Barone, Matthew F.

    2016-02-23

    Various technologies described herein pertain to a vertical axis wind turbine blade configured to rotate about a rotation axis. The vertical axis wind turbine blade includes at least an attachment segment, a rear swept segment, and optionally, a forward swept segment. The attachment segment is contiguous with the forward swept segment, and the forward swept segment is contiguous with the rear swept segment. The attachment segment includes a first portion of a centroid axis, the forward swept segment includes a second portion of the centroid axis, and the rear swept segment includes a third portion of the centroid axis. The second portion of the centroid axis is angularly displaced ahead of the first portion of the centroid axis and the third portion of the centroid axis is angularly displaced behind the first portion of the centroid axis in the direction of rotation about the rotation axis.

  18. A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

    Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
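Stripped of the SAR-specific details (windowing, PRF ambiguity resolution, the QHS-optimized estimator itself), the centroid estimate at the heart of clutterlock processing is simply the power-weighted mean frequency of the azimuth spectrum. A minimal sketch, with illustrative names and toy data:

```python
import numpy as np

def power_centroid(freqs, power):
    """Power-weighted mean frequency of a spectrum (Doppler centroid)."""
    p = np.asarray(power, dtype=float)
    return float(np.sum(np.asarray(freqs) * p) / np.sum(p))

# A spectrum symmetric about 100 Hz has a 100 Hz centroid.
f = np.array([90.0, 100.0, 110.0])
p = np.array([1.0, 4.0, 1.0])
print(power_centroid(f, p))  # 100.0
```

The same weighted-mean form underlies the time-frequency power centroid used for instantaneous-frequency estimation elsewhere in this collection.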

  19. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    PubMed

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers worldwide and the most frequently found in women. Early detection of breast cancer offers the possibility of a cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to find the effects of the k-means clustering algorithm with different computation measures, such as centroid, distance, split method, epoch, attribute, and iteration, and to identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization: for the foggy centroid, the first centroid was calculated from random values, while for the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using Euclidean and Manhattan distances were better than those using the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
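A minimal k-means sketch mirroring two of the knobs varied in the study, the initial centroids and the assignment distance, might look as follows. (This is a generic Lloyd-style loop, not the paper's code; strictly, Manhattan assignment pairs with a median update, but the mean update is kept here for brevity.)

```python
import numpy as np

def kmeans(X, init_centroids, distance="euclidean", iters=10):
    """Toy k-means: assign by the chosen distance, update by the mean."""
    C = np.asarray(init_centroids, dtype=float)
    for _ in range(iters):
        diff = X[:, None, :] - C[None, :, :]
        if distance == "euclidean":
            d = np.sqrt((diff ** 2).sum(axis=2))
        else:  # "manhattan"
            d = np.abs(diff).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(C)):
            if np.any(labels == k):  # skip empty clusters
                C[k] = X[labels == k].mean(axis=0)
    return labels, C

# Two obvious clusters; explicit initial centroids stand in for the
# paper's "foggy"/"random" initialization choices.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, C = kmeans(X, init_centroids=[[0.0, 0.0], [9.0, 9.0]])
print(labels)  # [0 0 1 1]
```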

  20. CCD centroiding analysis for Nano-JASMINE observation data

    NASA Astrophysics Data System (ADS)

    Niwa, Yoshito; Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Tazawa, Seiichi; Hanada, Hideo

    2010-07-01

    Nano-JASMINE is a very small satellite mission for global space astrometry with milli-arcsecond accuracy, to be launched in 2011. In this mission, the centroids of stars in CCD image frames are estimated with sub-pixel accuracy. In order to realize such high-precision centroiding, an algorithm utilizing a least squares method is employed. One of its advantages is that centroids can be calculated without an explicit assumption about the point spread functions of the stars. A CCD centroiding experiment has been performed to investigate the feasibility of this data analysis, and the centroids of artificial star images on a CCD were determined with a precision of better than 0.001 pixel. This result indicates that parallaxes of stars within 300 pc of the Sun can be observed with Nano-JASMINE.

  1. Computerized tomography with total variation and with shearlets

    NASA Astrophysics Data System (ADS)

    Garduño, Edgar; Herman, Gabor T.

    2017-04-01

    To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function, a particular recent choice being the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion, are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ1-norm of the shearlet transform is novel and quite general: it can be used for any regularizing function that is defined as the ℓ1-norm of a transform specified by the application of a matrix. 
Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.

  2. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
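The baseline that the proposed estimator improves on, mapping every value in a quantization cell to a single representative point, can be illustrated with a toy uniform scalar quantizer (not the paper's block-DCT or wavelet setup); the midpoint used here is the centroid of a uniform cell under a flat source distribution:

```python
def quantize(x, step):
    """Uniform scalar quantizer: reconstruct at the cell's midpoint."""
    cell = int(x // step)          # index of the partition cell containing x
    return (cell + 0.5) * step     # representative (centroid) of that cell

print(quantize(3.7, 1.0))   # 3.5
print(quantize(-0.2, 1.0))  # -0.5
```

The paper's point is that any value in the cell is a legal decoder output; its MRF-constrained estimate picks a better point in the cell than this fixed centroid.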

  3. Ranked centroid projection: a data visualization approach with self-organizing maps.

    PubMed

    Yen, G G; Wu, Z

    2008-02-01

    The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.

  4. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise in its solution. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.

  5. Shrinkage simplex-centroid designs for a quadratic mixture model

    NASA Astrophysics Data System (ADS)

    Hasan, Taha; Ali, Sajid; Ahmed, Munir

    2018-03-01

    A simplex-centroid design for q mixture components comprises all possible subsets of the q components, present in equal proportions. The design does not contain full mixture blends except for the overall centroid. In real-life situations, all mixture blends comprise at least a minimum proportion of each component. Here, we introduce simplex-centroid designs which contain complete blends, at the cost of some loss in D-efficiency and stability in G-efficiency. We call such designs shrinkage simplex-centroid designs. Furthermore, we use the proposed designs to generate component-amount designs by their projection.

  6. Centroid tracker and aimpoint selection

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, Ronda; Sujata, K. V.; Venkateswara Rao, B.

    1992-11-01

    Autonomous fire-and-forget weapons have gained importance for achieving an accurate first-pass kill by hitting the target at an appropriate aim point. The centroid of the image presented by a target in the field of view (FOV) of a sensor is generally accepted as the aimpoint for these weapons. Centroid trackers are applicable only when the target image is of significant size in the FOV but does not overflow it. As the range between the sensor and the target decreases, however, the image of the target grows and finally overflows the FOV at close ranges, so the centroid point on the target keeps changing, which is undesirable. Moreover, the centroid need not be the most desired or most vulnerable point on the target. For hardened targets like tanks, proper aimpoint selection and guidance down to almost zero range are essential to achieve maximum kill probability. This paper presents a centroid tracker realization. Because the centroid offers a stable tracking point, it can be used as a reference for selecting the proper aimpoint. The centroid and the desired aimpoint are tracked simultaneously, both to avoid jamming by flares and to handle the problems arising from image overflow. Thresholding the gray-level image to a binary image is a crucial step in a centroid tracker; different thresholding algorithms are discussed and a suitable one is chosen. The real-time hardware implementation of the centroid tracker with a suitable thresholding technique is presented, including the interface to a multimode tracker for autonomous target tracking and aimpoint selection. The hardware uses very high speed arithmetic and programmable logic devices to meet the speed requirements, and a microprocessor-based subsystem for system control. The tracker has been evaluated in a field environment.

  7. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor.

    PubMed

    Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping

    2009-11-10

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into a centroid measurement; the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been proposed to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, using image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method detects the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction in the digital SHWS, unevenness and instability of the light source, and deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
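
    A minimal sketch of the thresholding-plus-centroiding idea follows. This is not the authors' algorithm: the mean-plus-k·σ threshold and the synthetic spot are assumptions made for illustration:

```python
import numpy as np

def windowed_centroid(spot, k=3.0):
    """Centroid of a focal-spot image after a simple adaptive threshold:
    pixels below mean + k*std are zeroed, then a first-moment centroid
    is taken over the remaining signal."""
    t = spot.mean() + k * spot.std()
    sig = np.where(spot > t, spot - t, 0.0)
    total = sig.sum()
    if total == 0.0:
        return None
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    return (xs * sig).sum() / total, (ys * sig).sum() / total

# Synthetic Gaussian spot centred at (x, y) = (12, 8) on a noisy background.
rng = np.random.default_rng(1)
ys, xs = np.mgrid[0:24, 0:24]
spot = np.exp(-((xs - 12.0)**2 + (ys - 8.0)**2) / 4.0) + 0.02 * rng.random((24, 24))

cx, cy = windowed_centroid(spot)
```

    Subtracting the threshold before the moment calculation keeps residual background from biasing the estimate toward the window centre.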

  8. Centroid of a Polygon--Three Views.

    ERIC Educational Resources Information Center

    Shilgalis, Thomas W.; Benson, Carol T.

    2001-01-01

    Investigates the idea of the center of mass of a polygon and illustrates centroids of polygons. Connects physics, mathematics, and technology to produces results that serve to generalize the notion of centroid to polygons other than triangles. (KHR)

  9. Twistor interpretation of slice regular functions

    NASA Astrophysics Data System (ADS)

    Altavilla, Amedeo

    2018-01-01

    Given a slice regular function f : Ω ⊂ H → H, with Ω ∩ R ≠ ∅, it is possible to lift it to surfaces in the twistor space CP3 of S4 ≃ H ∪ { ∞ } (see Gentili et al., 2014). In this paper we show that the same result holds if one removes the hypothesis Ω ∩ R ≠ ∅ on the domain of the function f. Moreover, we find that if a surface S ⊂ CP3 contains the image of the twistor lift of a slice regular function, then S has to be ruled by lines. Starting from these results we find all the projective classes of algebraic surfaces up to degree 3 in CP3 that contain the lift of a slice regular function. In addition we extend and further explore the so-called twistor transform, that is, the curve in Gr2(C4) which, given a slice regular function, returns the arrangement of lines carrying its twistor lift. With the explicit expressions of the twistor lift and of the twistor transform of a slice regular function we exhibit the set of slice regular functions whose twistor transform describes a rational line inside Gr2(C4), showing the role of slice regular functions not defined on R. At the end we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.

  10. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1).

  11. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1).
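
    SAITA itself is not reproduced here, but the general idea of iteratively reweighted thresholding for a non-convex Lp (0 < p < 1) penalty can be sketched for the separable denoising case. The λ, p, and data values are illustrative assumptions:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lp_denoise(y, lam=0.5, p=0.5, iters=20, eps=1e-6):
    """Iteratively reweighted soft thresholding for the separable problem
    min_x 0.5*(x - y)**2 + lam*|x|**p: the non-convex |x|^p penalty is
    approximated by a sequence of weighted L1 problems."""
    x = y.copy()
    for _ in range(iters):
        w = p * (np.abs(x) + eps) ** (p - 1.0)   # derivative of |x|^p
        x = soft(y, lam * w)
    return x

y = np.array([3.0, 0.2, -2.5, 0.05, 0.0])
x = lp_denoise(y)
```

    Small entries are driven exactly to zero while large entries are only lightly shrunk, which is the sparsity-promoting behavior that motivates Lp over L1.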

  12. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    NASA Astrophysics Data System (ADS)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

    The accuracy of estimating optical aberrations by measuring the distorted wavefront with a Hartmann-Shack wavefront sensor (HSWS) depends mainly on the measurement accuracy of the centroid of each focal spot. The most commonly used centroid estimation methods, such as the brightest-spot centroid, first-moment centroid, weighted center of gravity, and intensity-weighted center of gravity, are generally applied over the entire individual sub-apertures of the lenslet array. However, these centroid estimates are sensitive to reflections, scattered light, and noise, especially when the signal spot area is small compared with the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for estimating optical aberrations, with and without pre-processing steps (thresholding, Gaussian smoothing, and adaptive windowing), using the aberrations of a human eye model as an example. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye emulating myopic and hypermetropic defocus values of up to 2 diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications to estimate aberrations within the typical clinically acceptable limit of a quarter-diopter margin, provided certain pre-processing steps are used to reduce the impact of external factors.
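
    The background sensitivity that motivates pre-processing can be seen by comparing a plain first-moment centroid with an intensity-weighted centre of gravity on a synthetic spot. The squared-intensity weighting and the test image are assumptions for illustration, not the paper's data:

```python
import numpy as np

def first_moment(img):
    """Plain first-moment centroid over the whole sub-aperture."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / img.sum(), (ys * img).sum() / img.sum()

def iwcog(img):
    """Intensity-weighted centre of gravity: weighting each pixel by its
    squared intensity suppresses dim background pixels."""
    w = img ** 2
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Gaussian spot at (20, 10) plus a uniform noisy background.
rng = np.random.default_rng(2)
ys, xs = np.mgrid[0:32, 0:32]
img = np.exp(-((xs - 20.0)**2 + (ys - 10.0)**2) / 6.0) + 0.05 * rng.random((32, 32))

fx, fy = first_moment(img)
wx, wy = iwcog(img)
```

    The unweighted estimate is dragged toward the sub-aperture centre by the background, while the weighted one stays near the true spot position.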

  13. Thin-plate spline analysis of allometry and sexual dimorphism in the human craniofacial complex.

    PubMed

    Rosas, Antonio; Bastir, Markus

    2002-03-01

    The relationship between allometry and sexual dimorphism in the human craniofacial complex was analyzed using geometric morphometric methods. Thin-plate splines (TPS) analysis has been applied to investigate the lateral profile of complete adult skulls of known sex. Twenty-nine three-dimensional (3D) craniofacial and mandibular landmark coordinates were recorded from a sample of 52 adult females and 52 adult males of known age and sex. No difference in the influence of size on shape was detected between sexes. Both size and sex had significant influences on shape. As expected, the influence of centroid size on shape (allometry) revealed a shift in the proportions of the neurocranium and the viscerocranium, with a marked allometric variation of the lower face. Adjusted for centroid size, males presented a relatively larger size of the nasopharyngeal space than females. A mean-male TPS transformation revealed a larger piriform aperture, achieved by an increase of the angulation of the nasal bones and a downward rotation of the anterior nasal floor. Male pharynx expansion was also reflected by larger choanae and a more posteriorly inclined basilar part of the occipital clivus. Male muscle attachment sites appeared more pronounced. In contrast, the mean-female TPS transformation was characterized by a relatively small nasal aperture. The occipital clivus inclined anteriorly, and muscle insertion areas became smoothed. Besides these variations, both maxillary and mandibular alveolar regions became prognathic. The sex-specific TPS deformation patterns are hypothesized to be associated with sexual differences in body composition and energetic requirements. Copyright 2002 Wiley-Liss, Inc.

  14. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing, yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution can be used effectively as the regularization term in penalized-likelihood (PL) reconstruction, where the regularizer enforces image smoothness. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and a warm random lumpy background. Images reconstructed with the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF), and the mean square error (MSE) was also smaller than for EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT tight-framelet regularizer shows promise for SPECT image reconstruction with the PAPA method.
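
    The core operation, penalizing the ℓ1-norm of DCT coefficients, reduces in its simplest proximal form to soft-thresholding in the transform domain. A hedged single-step sketch, in which an orthogonal DCT stands in for the non-decimated frame and the phantom and λ are made up:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_l1_denoise(img, lam):
    """One proximal step for an l1 penalty on 2-D DCT coefficients:
    transform, soft-threshold, inverse transform. This is the building
    block a penalized-likelihood solver would apply each iteration."""
    c = dctn(img, norm="ortho")
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    return idctn(c, norm="ortho")

rng = np.random.default_rng(3)
ys, xs = np.mgrid[0:32, 0:32]
clean = np.exp(-((xs - 16)**2 + (ys - 16)**2) / 30.0)   # one "hot lesion"
noisy = clean + 0.2 * rng.normal(size=(32, 32))

smooth = dct_l1_denoise(noisy, lam=0.1)
```

    Thresholding removes the small, noise-dominated coefficients while the few large coefficients describing the blob survive, which is why the result is closer to the clean image than the noisy input.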

  15. The extended Fourier transform for 2D spectral estimation.

    PubMed

    Armstrong, G S; Mandelshtam, V A

    2001-11-01

    We present a linear algebraic method, named the eXtended Fourier Transform (XFT), for spectral estimation from truncated time signals. The method is a hybrid of the discrete Fourier transform (DFT) and the regularized resolvent transform (RRT) (J. Chen et al., J. Magn. Reson. 147, 129-137 (2000)). Namely, it estimates the remainder of a finite DFT by RRT. The RRT estimation corresponds to solution of an ill-conditioned problem, which requires regularization. The regularization depends on a parameter, q, that essentially controls the resolution. By varying q from 0 to infinity one can "tune" the spectrum between a high-resolution spectral estimate and the finite DFT. The optimal value of q is chosen according to how well the data fits the form of a sum of complex sinusoids and, in particular, the signal-to-noise ratio. Both 1D and 2D XFT are presented with applications to experimental NMR signals. Copyright 2001 Academic Press.

  16. Challenges in projecting clustering results across gene expression-profiling datasets.

    PubMed

    Lusa, Lara; McShane, Lisa M; Reid, James F; De Cecco, Loris; Ambrogi, Federico; Biganzoli, Elia; Gariboldi, Manuela; Pierotti, Marco A

    2007-11-21

    Gene expression microarray studies for several types of cancer have been reported to identify previously unknown subtypes of tumors. For breast cancer, a molecular classification consisting of five subtypes based on gene expression microarray data has been proposed. These subtypes have been reported to exist across several breast cancer microarray studies, and they have demonstrated some association with clinical outcome. A classification rule based on the method of centroids has been proposed for identifying the subtypes in new collections of breast cancer samples; the method is based on the similarity of the new profiles to the mean expression profile of the previously identified subtypes. Previously identified centroids of five breast cancer subtypes were used to assign 99 breast cancer samples, including a subset of 65 estrogen receptor-positive (ER+) samples, to five breast cancer subtypes based on microarray data for the samples. The effect of mean centering the genes (i.e., transforming the expression of each gene so that its mean expression is equal to 0) on subtype assignment by method of centroids was assessed. Further studies of the effect of mean centering and of class prevalence in the test set on the accuracy of method of centroids classifications of ER status were carried out using training and test sets for which ER status had been independently determined by ligand-binding assay and for which the proportions of ER+ and ER- samples were systematically varied. When all 99 samples were considered, mean centering before application of the method of centroids appeared to be helpful for correctly assigning samples to subtypes, as evidenced by the expression of genes that had previously been used as markers to identify the subtypes. However, when only the 65 ER+ samples were considered for classification, many samples appeared to be misclassified, as evidenced by an unexpected distribution of ER+ samples among the resultant subtypes.
When genes were mean centered before classification of samples for ER status, the accuracy of the ER subgroup assignments was highly dependent on the proportion of ER+ samples in the test set; this effect of subtype prevalence was not seen when gene expression data were not mean centered. Simple corrections such as mean centering of genes aimed at microarray platform or batch effect correction can have undesirable consequences because patient population effects can easily be confused with these assay-related effects. Careful thought should be given to the comparability of the patient populations before attempting to force data comparability for purposes of assigning subtypes to independent subjects.
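
    The prevalence effect described above is easy to reproduce with a toy nearest-centroid classifier. The Euclidean distance and synthetic cohort below are assumptions for illustration; the original work measured similarity to the subtype centroids by correlation:

```python
import numpy as np

def assign_subtypes(X, centroids, mean_center=True):
    """Nearest-centroid assignment (samples in rows, genes in columns),
    optionally mean-centering each gene across the test set first."""
    if mean_center:
        X = X - X.mean(axis=0)
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# A test cohort containing ONLY subtype-1 samples (profiles near +1).
rng = np.random.default_rng(8)
X = 1.0 + 0.1 * rng.normal(size=(10, 5))
centroids = np.array([[-1.0] * 5, [1.0] * 5])

raw = assign_subtypes(X, centroids, mean_center=False)
centered = assign_subtypes(X, centroids, mean_center=True)
```

    Because centering forces the cohort mean of every gene to zero, samples are pushed to both sides of the decision boundary regardless of their true subtype: the uncentered assignments are all subtype 1, while centering misassigns part of the cohort, which is exactly the dependence on test-set composition the authors warn about.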

  17. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The centroid of a star plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR, while Gaussian fitting has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star-target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area and, within that narrow area, interpolates the pixels to subdivide them. Exploiting the symmetry of the stellar energy distribution, it tentatively treats the current pixel as the star centroid and computes the difference between the energy sums taken over an equal step length on either side of the current pixel along symmetric directions (here the transverse and longitudinal directions; the step length can be chosen to suit the conditions, and this paper takes 9). The centroid position in a given direction is the one at which the minimum difference appears, and likewise for the other directions. Validation comparisons on simulated star images against several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing the results with the known positions of the star shows that the multi-step minimum energy difference method achieves a better effect.
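
    A one-dimensional, pixel-level sketch of the symmetric energy-difference search follows; the sub-pixel interpolation stage is omitted, and the profile and step length are illustrative assumptions:

```python
import numpy as np

def min_energy_diff_centroid(profile, step=9):
    """Pixel-level sketch of the symmetric-energy-difference idea along
    one axis: at each candidate position, compare the summed energy over
    `step` samples on either side; the centroid estimate is where the
    absolute difference is smallest."""
    best, best_diff = None, np.inf
    for i in range(step, len(profile) - step):
        left = profile[i - step:i].sum()
        right = profile[i + 1:i + step + 1].sum()
        if abs(left - right) < best_diff:
            best, best_diff = i, abs(left - right)
    return best

x = np.arange(64, dtype=float)
profile = np.exp(-(x - 30.0) ** 2 / 8.0)   # symmetric star profile at 30
```

    At the true centre the two windows mirror each other and the difference vanishes, so the search lands on the peak without any explicit fitting.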

  18. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data; the only requirement for switching the compressor from one image sensor to another is to change the codebook. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs codebooks while the data is being encoded or quantized, by computing each centroid as a recursive moving average so that the centroids move after every encoded vector. When the centroid of a fixed set of vectors is computed this way, the result is identical to the conventional batch centroid calculation. This method of centroid calculation can easily be combined with VQ encoding techniques: the quantizer changes after every encoded vector by recursively updating the minimum-distance centroid selected by the encoder. Since the quantizer changes definition or state after every encoded vector, the decoder must receive updates to the codebook; this is done as side information by multiplexing bits into the compressed source data.
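
    The recursive moving-average update described above is compact; this generic sketch is not the paper's codebook design:

```python
import numpy as np

def recursive_centroid_update(centroid, count, vector):
    """Moving-average centroid update: after a vector is encoded into
    the cell whose centroid this is, the centroid shifts by
    (vector - centroid) / (count + 1)."""
    count += 1
    centroid = centroid + (vector - centroid) / count
    return centroid, count

# Feeding a fixed set of vectors reproduces their batch mean exactly.
vecs = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]])
c, n = np.zeros(2), 0
for v in vecs:
    c, n = recursive_centroid_update(c, n, v)
```

    This is the property the abstract notes: run over a fixed training set, the recursion lands on the same centroid as a one-shot mean.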

  19. Algorithms used in the Airborne Lidar Processing System (ALPS)

    USGS Publications Warehouse

    Nagle, David B.; Wright, C. Wayne

    2016-05-23

    The Airborne Lidar Processing System (ALPS) analyzes Experimental Advanced Airborne Research Lidar (EAARL) data—digitized laser-return waveforms, position, and attitude data—to derive point clouds of target surfaces. A full-waveform airborne lidar system, the EAARL seamlessly and simultaneously collects mixed environment data, including submerged, sub-aerial bare earth, and vegetation-covered topographies. ALPS uses three waveform target-detection algorithms to determine target positions within a given waveform: centroid analysis, leading edge detection, and bottom detection using water-column backscatter modeling. The centroid analysis algorithm detects opaque hard surfaces. The leading edge algorithm detects topography beneath vegetation and shallow, submerged topography. The bottom detection algorithm uses water-column backscatter modeling for deeper submerged topography in turbid water. The report describes slant range calculations and explains how ALPS uses laser range and orientation measurements to project measurement points into the Universal Transverse Mercator coordinate system. Parameters used for coordinate transformations in ALPS are described, as are Interactive Data Language-based methods for gridding EAARL point cloud data to derive digital elevation models. Noise reduction in point clouds through use of a random consensus filter is explained, and detailed pseudocode, mathematical equations, and Yorick source code accompany the report.
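
    In its simplest form, the centroid analysis stage is a first-moment calculation over the digitized return waveform. This is a generic sketch with a made-up pulse, not the EAARL implementation:

```python
import numpy as np

def waveform_centroid(waveform, threshold=0.0):
    """First-moment centroid of a digitized laser-return waveform,
    giving a fractional-sample time (hence range) estimate for a
    hard-surface return; samples at or below `threshold` are ignored."""
    w = np.asarray(waveform, dtype=float) - threshold
    w[w < 0.0] = 0.0
    idx = np.arange(len(w))
    return (idx * w).sum() / w.sum()

# A symmetric return pulse centred on sample 4.
pulse = np.array([0.0, 0.0, 1.0, 4.0, 6.0, 4.0, 1.0, 0.0])
t0 = waveform_centroid(pulse)
```

    The fractional sample index would then be converted to a slant range using the digitizer rate and the speed of light.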

  20. Combined spectroscopic imaging and chemometric approach for automatically partitioning tissue types in human prostate tissue biopsies

    NASA Astrophysics Data System (ADS)

    Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil

    2001-07-01

    We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array (FPA) detector and a Michelson step-scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64 × 64 pixels, each 61 μm square, and has a spectral range of 2-10.5 μm. Each imaging data set was collected at 16 cm-1 resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating the different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue-type domains. Additionally, analysis of differences between the centroid spectra corresponding to different tissue types provides insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue-type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.
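
    A minimal Fuzzy C-Means loop shows where those centroid spectra come from. These are the generic m = 2 updates on synthetic 2-D data, not the authors' implementation or spectra:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal Fuzzy C-Means: alternately update the membership matrix U
    and the cluster centroids. Returns (centroids, U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))               # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

# Two well-separated "tissue type" clusters standing in for spectra.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
centroids, U = fuzzy_c_means(X, c=2)
```

    Each row of `centroids` is the membership-weighted mean of the data, which in the imaging application is exactly the centroid spectrum examined for biochemical differences.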

  1. Discovering Structural Regularity in 3D Geometry

    PubMed Central

    Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.

    2010-01-01

    We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292

  2. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    PubMed

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method for identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: distance of the impact point from the centroid, angle of the impact point from the centroid, and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid of a golf ball, thereby allowing identification of the point of impact with the putter head, and is suitable for use in subsequent studies.
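
    The reported impact variables follow directly from the X, Y coordinates; a generic sketch with illustrative coordinates:

```python
import math

def impact_from_centroid(cx, cy, ix, iy):
    """Distance and angle (degrees, anticlockwise from +x) of the
    impact point (ix, iy) relative to the dimple-pattern centroid."""
    dx, dy = ix - cx, iy - cy
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

dist, ang = impact_from_centroid(0.0, 0.0, 3.0, 4.0)
```

    Using `atan2` rather than `atan` keeps the angle unambiguous in all four quadrants around the centroid.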

  3. Correcting the beam centroid motion in an induction accelerator and reducing the beam breakup instability

    NASA Astrophysics Data System (ADS)

    Coleman, J. E.; Ekdahl, C. A.; Moir, D. C.; Sullivan, G. W.; Crawford, M. T.

    2014-09-01

    Axial beam centroid and beam breakup (BBU) measurements were conducted on an 80 ns FWHM, intense relativistic electron bunch with an injected energy of 3.8 MV and current of 2.9 kA. The intense relativistic electron bunch is accelerated and transported through a nested solenoid and ferrite induction core lattice consisting of 64 elements, exiting the accelerator with a nominal energy of 19.8 MeV. The principal objective of these experiments is to quantify the coupling of the beam centroid motion to the BBU instability and validate the theory of this coupling for the first time. Time resolved centroid measurements indicate a reduction in the BBU amplitude, ⟨ξ⟩, of 19% and a reduction in the BBU growth rate (Γ) of 4% by reducing beam centroid misalignments ˜50% throughout the accelerator. An investigation into the contribution of the misaligned elements is made. An alignment algorithm is presented in addition to a qualitative comparison of experimental and calculated results which include axial beam centroid oscillations, BBU amplitude, and growth with different dipole steering.

  4. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    NASA Astrophysics Data System (ADS)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids with random numbers has many weaknesses. The GenClust algorithm combines Genetic Algorithms and K-Means, using a genetic algorithm to determine the centroid of each cluster; it takes 50% of its chromosomes from deterministic calculations and 50% from randomly generated numbers. This study modifies the GenClust algorithm so that 100% of the chromosomes are obtained through deterministic calculations. The study compares the resulting performance, expressed as Mean Square Error, of centroid determination in K-Means using the original GenClust method, the modified GenClust method, and classic K-Means.
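
    The role of initialization in such an MSE comparison can be illustrated with plain Lloyd iterations started from deterministically chosen points. The sorted-order seeding below is a simple stand-in for GenClust's deterministic chromosomes, and the data are synthetic:

```python
import numpy as np

def kmeans(X, centroids, iters=50):
    """Plain Lloyd iterations from a given set of initial centroids."""
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None, :], axis=2).argmin(axis=1)
        for k in range(len(centroids)):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids, labels

def mse(X, centroids, labels):
    """Mean squared distance of each point to its assigned centroid."""
    return np.mean(np.sum((X - centroids[labels]) ** 2, axis=1))

# Three well-separated clusters of 30 points each.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in (0.0, 5.0, 10.0)])

# Deterministic initialization: evenly spaced points of the x-sorted data.
order = np.argsort(X[:, 0])
init = X[order[[15, 45, 75]]].copy()
centroids, labels = kmeans(X, init)
```

    Seeding one centroid per region guarantees every run converges to the same low-MSE partition, whereas random seeds can collapse two centroids into one cluster.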

  5. A focal plane metrology system and PSF centroiding experiment

    NASA Astrophysics Data System (ADS)

    Li, Haitao; Li, Baoquan; Cao, Yang; Li, Ligang

    2016-10-01

    In this paper, we present an overview of a detector array metrology testbed and a micro-pixel centroiding experiment currently under development at the National Space Science Center, Chinese Academy of Sciences. We discuss on-going development efforts aimed at calibrating the intra-/inter-pixel quantum efficiency and pixel positions for a scientific-grade CMOS detector, and review significant progress in achieving higher-precision differential centroiding for pseudo star images in a large-area back-illuminated CMOS detector. Without calibration of pixel positions and intrapixel response, we have demonstrated that the standard deviation of differential centroiding is below 2.0 × 10-3 pixels.

  6. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2003-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids, using only passive elements to form the inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  7. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2004-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids, using only passive elements to form the inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  8. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

    Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values is also given.

  9. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
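
    Testing whether one zone is a rigid-body transform of another reduces, once correspondences are known, to a least-squares rotation fit. A standard Kabsch-style sketch on synthetic point sets (not the paper's meshes, which must also discover the correspondences):

```python
import numpy as np

def rigid_transform(A, B):
    """Kabsch-style fit: find rotation R and translation t with
    B ≈ A @ R.T + t, plus the maximum residual, so two zones can be
    tested for rigid-body equivalence."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - ca @ R.T
    resid = np.abs(A @ R.T + t - B).max()
    return R, t, resid

# Zone B is zone A rotated about z and translated.
rng = np.random.default_rng(6)
A = rng.random((50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = A @ Rz.T + np.array([1.0, -2.0, 0.5])

R, t, resid = rigid_transform(A, B)
```

    A residual near machine precision certifies the two zones as rigid-body equivalent, which is the condition under which a time series of meshes can be replaced by one reference mesh plus per-timestep transforms.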

  10. Multifractal surrogate-data generation algorithm that preserves pointwise Hölder regularity structure, with initial applications to turbulence

    NASA Astrophysics Data System (ADS)

    Keylock, C. J.

    2017-03-01

    An algorithm is described that can generate random variants of a time series while preserving the probability distribution of original values and the pointwise Hölder regularity. Thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the iterated amplitude-adjusted wavelet transform, can be used to generate bootstrapped versions of multifractal data, and because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and self-regulating processes.
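    The dual-tree complex wavelet machinery is beyond a short sketch, but the well-known Fourier-based relative mentioned above (the iterated amplitude-adjusted Fourier transform, IAAFT) illustrates the shared iteration: alternately impose the original amplitude spectrum and the original value distribution. This is a sketch of IAAFT, not of the paper's wavelet-based IAAWT.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """IAAFT surrogate: preserves the Fourier amplitude spectrum (approximately)
    and the exact distribution of values of x. The paper's IAAWT replaces the
    Fourier step with a dual-tree complex wavelet transform so that pointwise
    Holder regularity is preserved as well."""
    rng = np.random.default_rng(seed)
    amplitudes = np.abs(np.fft.rfft(x))
    sorted_x = np.sort(x)
    s = rng.permutation(x)                          # random starting surrogate
    for _ in range(n_iter):
        # Step 1: impose the original amplitude spectrum, keep current phases.
        S = np.fft.rfft(s)
        S = amplitudes * np.exp(1j * np.angle(S))
        s = np.fft.irfft(S, n=len(x))
        # Step 2: impose the original value distribution by rank ordering.
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
s = iaaft(x)
```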

  11. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  12. Comparison of estimates of hardwood bole volume using importance sampling, the centroid method, and some taper equations

    Treesearch

    Harry V., Jr. Wiant; Michael L. Spangler; John E. Baumgras

    2002-01-01

    Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volume estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...

  13. Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when total variation (TV) is used as the l1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well-known drawback of using TV as an l1 regularization term is that the reconstructed image tends to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.

  14. PET/CT alignment calibration with a non-radioactive phantom and the intrinsic 176Lu radiation of PET detector

    NASA Astrophysics Data System (ADS)

    Wei, Qingyang; Ma, Tianyu; Wang, Shi; Liu, Yaqiang; Gu, Yu; Dai, Tiantian

    2016-11-01

    Positron emission tomography/computed tomography (PET/CT) is an important tool for clinical studies and preclinical research, providing both functional and anatomical images. To achieve high-quality co-registered PET/CT images, alignment calibration of the PET and CT scanners is a critical procedure. Existing methods use positron source phantoms imaged by both the PET and CT scanners and then derive the transformation matrix from the reconstructed images of the two modalities. In this paper, a novel PET/CT alignment calibration method using a non-radioactive phantom and the intrinsic 176Lu radiation of the PET detector was developed. Firstly, a multi-tungsten-alloy-sphere phantom without any positron source was designed and imaged by the CT scanner and by the PET scanner using the intrinsic 176Lu radiation of its LYSO crystals. Secondly, the centroids of the spheres were derived and matched by an automatic program. Lastly, the rotation matrix and the translation vector were calculated by least-squares fitting of the centroid data. The proposed method was employed in an animal PET/CT system (InliView-3000) developed in our lab. Experimental results showed that the proposed method achieves high accuracy and is a feasible replacement for conventional positron-source-based methods.
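    The least-squares fit of a rotation and translation to matched centroid pairs has a standard closed-form solution via the SVD (the Kabsch/Procrustes construction). A minimal sketch with hypothetical sphere centroids; the paper does not specify its solver, so this is one standard way to do that step:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ~= Q,
    for matched 3xN point sets (Kabsch algorithm via SVD)."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (Q - cQ) @ (P - cP).T                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    t = cQ - R @ cP
    return R, t

# Hypothetical matched sphere centroids: P from the CT image, Q from the
# intrinsic-radiation PET image (units: mm).
rng = np.random.default_rng(0)
P = rng.uniform(-30.0, 30.0, size=(3, 6))
angle = np.deg2rad(2.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.5], [-0.8], [2.0]])
Q = R_true @ P + t_true

R_est, t_est = rigid_align(P, Q)
```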

  15. Doppler centroid estimation ambiguity for synthetic aperture radars

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1989-01-01

    A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.

  16. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    NASA Astrophysics Data System (ADS)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means Clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with starting centroid determination by a random and a KD-Tree method. The initial determination of random centroids on a data set of 1000 student academic records, used to classify potential dropouts, has an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas the initial centroid determination by KD-Tree has an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that the results of K-Means Clustering with initial KD-Tree centroid selection have better accuracy than the K-Means Clustering method with random initial centroid selection.
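    The comparison above can be reproduced in miniature: run Lloyd's algorithm from two different sets of starting centroids and compare the final SSE. This is a hypothetical sketch on synthetic data; the paper's KD-Tree initialization is not reproduced, so an informed initialization (the known group centers) stands in for it.

```python
import numpy as np

def kmeans_sse(X, init_centroids, n_iter=50):
    """Plain Lloyd's algorithm; returns the final SSE (sum of squared errors)."""
    C = init_centroids.astype(float).copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(C)):
            if np.any(labels == k):                 # keep old centroid if empty
                C[k] = X[labels == k].mean(axis=0)
    return ((X - C[labels]) ** 2).sum()

rng = np.random.default_rng(0)
# Hypothetical 2D academic data (e.g. quality score vs. GPA) with 3 groups.
X = np.vstack([rng.normal(m, 0.3, size=(100, 2))
               for m in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])])

# Random initial centroids: three data points drawn at random.
sse_random = kmeans_sse(X, X[rng.choice(len(X), size=3, replace=False)])
# Informed initial centroids (stand-in for a KD-Tree-derived choice): here the
# known group centers, which let Lloyd's algorithm reach the good optimum.
sse_informed = kmeans_sse(X, np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]]))
```

A random draw can place two seeds in the same group, trapping K-Means in a worse local optimum with higher SSE, which is exactly the effect the SSE comparison in the abstract quantifies.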

  17. Characterization of trabecular bone using the backscattered spectral centroid shift.

    PubMed

    Wear, Keith A

    2003-04-01

    Ultrasonic attenuation in bone in vivo is generally measured using a through-transmission method at the calcaneus. Although attenuation in calcaneus has been demonstrated to be a useful predictor for osteoporotic fracture risk, measurements at other clinically important sites, such as hip and spine, could potentially contain additional useful diagnostic information. Through-transmission measurements may not be feasible at these sites due to complex bone shapes and the increased amount of intervening soft tissue. Centroid shift from the backscattered signal is an index of attenuation slope and has been used previously to characterize soft tissues. In this paper, the centroid shift of signals backscattered from 30 trabecular bone samples in vitro was measured. Attenuation slope was also measured using a through-transmission method. The correlation coefficient between centroid shift and attenuation slope was -0.71. The 95% confidence interval was (-0.86, -0.47). These results suggest that the backscattered spectral centroid shift may contain useful diagnostic information potentially applicable to hip and spine.
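    The underlying quantity is simple: the spectral centroid is the power-weighted mean frequency of the received signal, and frequency-dependent attenuation shifts it downward. A minimal sketch with hypothetical pulse parameters (not the paper's measurement chain):

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Power-weighted mean frequency of a signal's spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return (freqs * power).sum() / power.sum()

fs = 50e6                                   # hypothetical 50 MHz sampling rate
t = np.arange(4096) / fs

def gaussian_pulse(f0):
    # Gaussian-enveloped tone burst centered at 40 us with ~2 us envelope width.
    return np.exp(-((t - 4e-5) ** 2) / (2 * (2e-6) ** 2)) * np.cos(2 * np.pi * f0 * t)

# Frequency-dependent attenuation preferentially removes high frequencies;
# here the attenuated echo is modeled simply as a pulse with a lower center
# frequency (values illustrative only).
f_in = spectral_centroid(gaussian_pulse(5.0e6), fs)    # incident pulse
f_out = spectral_centroid(gaussian_pulse(4.2e6), fs)   # backscattered echo
centroid_shift = f_out - f_in                          # negative downshift
```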

  18. Measurements of ultrasonic backscattered spectral centroid shift from spine in vivo: methodology and preliminary results.

    PubMed

    Garra, Brian S; Locher, Melanie; Felker, Steven; Wear, Keith A

    2009-01-01

    Ultrasonic backscatter measurements from vertebral bodies (L3 and L4) in nine women were performed using a clinical ultrasonic imaging system. Measurements were made through the abdomen. The location of a vertebra was identified from the bright specular reflection from the vertebral anterior surface. Backscattered signals were gated to isolate signal emanating from the cancellous interiors of vertebrae. The spectral centroid shift of the backscattered signal, which has previously been shown to correlate highly with bone mineral density (BMD) in human calcaneus in vitro, was measured. BMD was also measured in the nine subjects' vertebrae using a clinical bone densitometer. The correlation coefficient between centroid shift and BMD was r = -0.61. The slope of the linear fit was -160 kHz / (g/cm²). The negative slope was expected because the attenuation coefficient (and therefore magnitude of the centroid downshift) is known from previous studies to increase with BMD. The centroid shift may be a useful parameter for characterizing bone in vivo.

  19. Local Regularity Analysis with Wavelet Transform in Gear Tooth Failure Detection

    NASA Astrophysics Data System (ADS)

    Nissilä, Juhani

    2017-09-01

    Diagnosing gear tooth and bearing failures in industrial power transmission applications has been studied extensively, but challenges still remain. This study aims to look at the problem from a more theoretical perspective. Our goal is to find out if the local regularity, i.e. the smoothness of the measured signal, can be estimated from the vibrations of epicyclic gearboxes and if the regularity can be linked to the meshing events of the gear teeth. Previously it has been shown that decreasing local regularity of measured acceleration signals can reveal inner race faults in slowly rotating bearings. The local regularity is estimated from the modulus maxima ridges of the signal's wavelet transform. In this study, the measurements come from the epicyclic gearboxes of the Kelukoski water power station (WPS). The very stable rotational speed of the WPS makes it possible to deduce that the gear mesh frequencies of the WPS and a frequency related to the rotation of the turbine blades are the most significant components in the spectra of the estimated local regularity signals.

  20. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used in preclinical biomedical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of the DCT coefficients to suppress the reconstruction noise. In addition, the permissible region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than the other L1-norm regularizations employed in this work.
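    The reweighting idea, assigning each DCT coefficient its own penalty so that large (signal) coefficients are shrunk lightly and small (noise) coefficients heavily, can be sketched in one dimension. This is a hypothetical toy denoising step, not the paper's FMT reconstruction pipeline:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are cosine basis vectors)."""
    j = np.arange(n)[:, None]       # frequency index
    k = np.arange(n)[None, :]       # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * j * (2 * k + 1) / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def soft_threshold(c, tau):
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

n = 64
C = dct_matrix(n)
rng = np.random.default_rng(0)

# Ground truth sparse in the DCT domain (a stand-in for a smooth fluorophore
# profile), observed with additive noise.
c_true = np.zeros(n)
c_true[2], c_true[5] = 5.0, 3.0
x_true = C.T @ c_true
y = x_true + 0.1 * rng.standard_normal(n)

# One reweighted-L1 shrinkage step: per-coefficient thresholds are inversely
# proportional to the current DCT coefficient magnitudes, as in the
# reweighting scheme the abstract describes (constants here are illustrative).
c = C @ y
weights = 1.0 / (np.abs(c) + 1e-3)
x_hat = C.T @ soft_threshold(c, 0.03 * weights)
```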

  1. Time delay and magnification centroid due to gravitational lensing by black holes and naked singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virbhadra, K. S.; Keeton, C. R.

    We model the massive dark object at the center of the Galaxy as a Schwarzschild black hole as well as Janis-Newman-Winicour naked singularities, characterized by the mass and scalar charge parameters, and study gravitational lensing (particularly time delay, magnification centroid, and total magnification) by them. We find that the lensing features are qualitatively similar (though quantitatively different) for Schwarzschild black holes, weakly naked, and marginally strongly naked singularities. However, the lensing characteristics of strongly naked singularities are qualitatively very different from those due to Schwarzschild black holes. The images produced by Schwarzschild black hole lenses and weakly naked and marginally strongly naked singularity lenses always have positive time delays. On the other hand, strongly naked singularity lenses can give rise to images with positive, zero, or negative time delays. In particular, for a large angular source position the direct image (the outermost image on the same side as the source) due to strongly naked singularity lensing always has a negative time delay. We also found that the scalar field decreases the time delay and increases the total magnification of images; this result could have important implications for cosmology. As the Janis-Newman-Winicour metric also describes the exterior gravitational field of a scalar star, naked singularities as well as scalar star lenses, if these exist in nature, will serve as more efficient cosmic telescopes than regular gravitational lenses.

  2. Differential gene expression detection and sample classification using penalized linear regression models.

    PubMed

    Wu, Baolin

    2006-02-15

    Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p > n), microarray data analysis poses big challenges for statistical analysis. An obvious problem owing to the 'large p small n' is over-fitting. Just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and prove to be useful in empirical studies. Recently Wu proposed the penalized t/F-statistics with shrinkage by formally using the L1 penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discuss the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using the L1 penalized regression models. And we show that the penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.
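    The shrunken-centroid idea connects directly to L1 penalization because soft thresholding is the closed-form solution of an L1-penalized least-squares step. A simplified sketch on synthetic data (it omits the per-gene standardization used by the actual nearest shrunken centroid method):

```python
import numpy as np

def shrunken_centroids(X, y, delta):
    """Soft-threshold each class centroid's deviation from the overall
    centroid; genes whose deviations shrink to zero in every class stop
    contributing to classification. Simplified: no within-class
    standardization as in Tibshirani et al."""
    overall = X.mean(axis=0)
    centroids = {}
    for k in np.unique(y):
        dev = X[y == k].mean(axis=0) - overall
        dev = np.sign(dev) * np.maximum(np.abs(dev) - delta, 0.0)  # soft threshold
        centroids[k] = overall + dev
    return centroids

rng = np.random.default_rng(0)
n_genes = 200
# Hypothetical two-class microarray: only the first 10 genes differ (+2 shift).
X0 = rng.standard_normal((30, n_genes))
X1 = rng.standard_normal((30, n_genes))
X1[:, :10] += 2.0
X, y = np.vstack([X0, X1]), np.array([0] * 30 + [1] * 30)

cents = shrunken_centroids(X, y, delta=0.6)
active = np.flatnonzero(cents[0] != cents[1])   # genes surviving shrinkage
```

Shrinkage performs gene selection and noise suppression in one step: the surviving genes are exactly the candidates for differential expression.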

  3. The influence of image sensor irradiation damage on the tracking and pointing accuracy of optical communication system

    NASA Astrophysics Data System (ADS)

    Li, Xiaoliang; Luo, Lei; Li, Pengwei; Yu, Qingkui

    2018-03-01

    The image sensor in a satellite optical communication system may generate noise due to space irradiation damage, leading to deviations in the determination of the light spot centroid. Based on irradiation test data of CMOS devices, simulated defect spots of different sizes have been used to calculate the centroid deviation with the grey-level centroid algorithm. The impact on the tracking and pointing accuracy of the system has been analyzed. The results show that both the number and the position of irradiation-induced defect pixels contribute to the spot centroid deviation, and that larger spots show less deviation. Finally, considering space radiation damage, suggestions are made for constraints on spot size selection.
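    The grey-level centroid algorithm and the bias mechanism described above can be sketched directly: an intensity-weighted centroid of the spot image, perturbed by a hypothetical irradiation-induced hot pixel (spot size, pixel position, and amplitudes are illustrative, not from the paper's test data).

```python
import numpy as np

def grey_level_centroid(img):
    """Intensity-weighted (grey-level) centroid of a spot image, in pixels."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Hypothetical Gaussian spot on a 64x64 detector, centered at (32, 32).
ys, xs = np.indices((64, 64))
spot = np.exp(-((ys - 32.0) ** 2 + (xs - 32.0) ** 2) / (2 * 4.0 ** 2))
cy0, cx0 = grey_level_centroid(spot)

# An irradiation-induced hot pixel away from the spot biases the centroid.
damaged = spot.copy()
damaged[10, 50] += 5.0
cy1, cx1 = grey_level_centroid(damaged)
deviation = np.hypot(cy1 - cy0, cx1 - cx0)   # centroid error in pixels
```

Because the defect's weight is fixed while the spot's total intensity grows with its area, the same hot pixel biases a large spot less than a small one, consistent with the finding above.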

  4. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The result shows that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  5. Transforming Social Regularities in a Multicomponent Community-Based Intervention: A Case Study of Professionals' Adaptability to Better Support Parents to Meet Their Children's Needs.

    PubMed

    Quiroz Saavedra, Rodrigo; Brunson, Liesette; Bigras, Nathalie

    2017-06-01

    This paper presents an in-depth case study of the dynamic processes of mutual adjustment that occurred between two professional teams participating in a multicomponent community-based intervention (CBI). Drawing on the concept of social regularities, we focus on patterns of social interaction within and across the two microsystems involved in delivering the intervention. Two research strategies, narrative analysis and structural network analysis, were used to reveal the social regularities linking the two microsystems. Results document strategies and actions undertaken by the professionals responsible for the intervention to modify intersetting social regularities to deal with a problem situation that arose during the course of one intervention cycle. The results illustrate how key social regularities were modified in order to resolve the problem situation and allow the intervention to continue to function smoothly. We propose that these changes represent a transition to a new state of the ecological intervention system. This transformation appeared to be the result of certain key intervening mechanisms: changing key role relationships, boundary spanning, and synergy. The transformation also appeared to be linked to positive setting-level and individual-level outcomes: confidence of key team members, joint planning, decision-making and intervention activities, and the achievement of desired intervention objectives. © Society for Community Research and Action 2017.

  6. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. One classic approach regularizes the image by total variation (TV), wavelets, or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered as the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iteration sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image can be obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  7. Shack-Hartmann wavefront sensor with large dynamic range.

    PubMed

    Xia, Mingliang; Li, Chao; Hu, Lifa; Cao, Zhaoliang; Mu, Quanquan; Xuan, Li

    2010-01-01

    A new spot centroid detection algorithm for a Shack-Hartmann wavefront sensor (SHWFS) is experimentally investigated. The algorithm is a dynamic tracking algorithm that calculates the spot centroids of the current spot map based on the spot centroids of the previous spot map, exploiting the strong correlation of the wavefront slope, and hence of the corresponding spot centroids, between temporally adjacent SHWFS measurements. That is, for adjacent measurements, the movement of each spot centroid usually falls within some range. Using the algorithm, the dynamic range of an SHWFS can be expanded by a factor of three in the measurement of tilt aberration compared with the conventional algorithm, by more than 1.3 times in the measurement of defocus aberration, and by more than 2 times in the measurement of combined spherical and coma aberrations. The algorithm is applied in our SHWFS to measure the distorted wavefront of the human eye. The experimental results of the adaptive optics (AO) system for retina imaging are presented to prove its feasibility for highly aberrated eyes.
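    The tracking idea can be sketched as a centroid computed in a window re-centered on the previous frame's centroid, so a spot that drifts beyond its nominal subaperture is still followed. This is a hypothetical minimal sketch (window size, spot model, and drift are illustrative, not the paper's parameters):

```python
import numpy as np

def windowed_centroid(img, prev, half=6):
    """Grey-level centroid inside a (2*half+1)-pixel window centered on the
    previous centroid estimate, so the spot is tracked frame to frame."""
    y0, x0 = int(round(prev[0])), int(round(prev[1]))
    ys = slice(max(y0 - half, 0), y0 + half + 1)
    xs = slice(max(x0 - half, 0), x0 + half + 1)
    win = img[ys, xs]
    yy, xx = np.indices(win.shape)
    tot = win.sum()
    return (yy * win).sum() / tot + ys.start, (xx * win).sum() / tot + xs.start

def gaussian_spot(shape, cy, cx, sigma=1.5):
    yy, xx = np.indices(shape)
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

# A spot drifting frame to frame; each frame is tracked from the previous
# frame's centroid rather than from a fixed subaperture center.
shape = (32, 32)
centers = [(16.0, 16.0), (17.2, 15.1), (18.5, 14.3)]
prev = centers[0]
tracked = []
for cy, cx in centers:
    prev = windowed_centroid(gaussian_spot(shape, cy, cx), prev)
    tracked.append(prev)
```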

  8. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
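    The TSVD variant compared above admits a compact sketch: invert only the largest singular values of the interpolator design matrix, discarding the small ones that amplify noise. The system below is a hypothetical ill-conditioned stand-in, not an actual LS_NUFFT interpolation matrix.

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: invert only the k largest singular values;
    the small ones, which blow up noise in an ill-conditioned system, are dropped."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ np.diag(s_inv) @ U.T

# Hypothetical ill-conditioned system A x = b with singular values spanning
# 10 decades, noise injected along the worst-conditioned direction.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((40, 40)))
Q2, _ = np.linalg.qr(rng.standard_normal((40, 40)))
s = np.logspace(0, -10, 40)
A = Q1 @ np.diag(s) @ Q2.T
x_true = rng.standard_normal(40)
b = A @ x_true + 1e-8 * Q1[:, -1]

x_naive = np.linalg.pinv(A) @ b     # tiny singular values amplify the noise
x_tsvd = tsvd_pinv(A, k=20) @ b     # truncation trades bias for stability
```

The truncation level k plays the same role as the regularization parameter discussed above: too large and noise leaks through, too small and the solution is over-smoothed.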

  9. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

    The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements of spherical overlapping cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical, and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. Results show that the vortex cores are susceptible to inviscid instability which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.

  10. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095

  12. Description of Panel Method Code ANTARES

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; George, Mike (Technical Monitor)

    2000-01-01

    Panel method code ANTARES was developed to compute wall interference corrections in a rectangular wind tunnel. The code uses point doublets to represent blockage effects and line doublets to represent lifting effects of a wind tunnel model. Subsonic compressibility effects are modeled by applying the Prandtl-Glauert transformation. The closed wall, open jet, or perforated wall boundary condition may be assigned to a wall panel centroid. The tunnel walls can be represented by using up to 8000 panels. The accuracy of panel method code ANTARES was successfully investigated by comparing solutions for the closed wall and open jet boundary condition with corresponding Method of Images solutions. Fourier transform solutions of a two-dimensional wind tunnel flow field were used to check the application of the perforated wall boundary condition. Studies showed that the accuracy of panel method code ANTARES can be improved by increasing the total number of wall panels in the circumferential direction. It was also shown that the accuracy decreases with increasing free-stream Mach number of the wind tunnel flow field.

  13. Characteristics of motive force derived from trajectory analysis of Amoeba proteus.

    PubMed

    Masaki, Noritaka; Miyoshi, Hiromi; Tsuchiya, Yoshimi

    2007-01-01

    We used a monochromatic charge-coupled-device camera to observe the migration behavior of Amoeba proteus every 5 s over a time course of 10000 s in order to investigate the characteristics of its centroid movement (cell velocity) over the long term. Fourier transformation of the time series of the cell velocity revealed that its power spectrum exhibits a Lorentz type profile with a relaxation time of a few hundred seconds. Moreover, some sharp peaks were found in the power spectrum, where the ratios of any two frequencies corresponding to the peaks were expressed as simple rational numbers. Analysis of the trajectory using a Langevin equation showed that the power spectrum reflects characteristics of the cell's motive force. These results suggest that some phenomena relating to the cell's motility, such as protoplasmic streaming and the sol-gel transformation of actin filaments, which seem to be independent phenomena and have different relaxation times, interact with each other and cooperatively participate in the generation process of the motive force.

  14. Concurrent Timbres in Orchestration: a Perceptual Study of Factors Determining "blend"

    NASA Astrophysics Data System (ADS)

    Sandell, Gregory John

    Orchestration often involves selecting instruments for concurrent presentation, as in melodic doubling or chords. One evaluation of the aural outcome of such choices is along the continuum of "blend": whether the instruments fuse into a single composite timbre, segregate into distinct timbral entities, or fall somewhere in between the two extremes. This study investigates, through perceptual experimentation, the acoustical correlates of blend for 15 natural-sounding orchestral instruments presented in concurrently-sounding pairs (e.g. flute-cello, trumpet-oboe, etc.). Ratings of blend showed primary effects for centroid (the location of the midpoint of the spectral energy distribution) and duration of the onset for the tones. Lower average values of both centroid and onset duration for a pair of tones led to increased blend, as did closeness in value for the two factors. Blend decreased (instruments segregated) with higher average values or increased difference in value for the two factors. The musical interval of presentation slightly affected the relative importance of these two mechanisms, with unison intervals determined more by lower average centroid, and minor thirds determined more by closeness in centroid. The contribution of onset in general was slightly more pronounced in the unison conditions than in the minor third condition. Additional factors contributing to blend were correlation of amplitude and centroid envelopes (blend increased as temporal patterns rose and fell in synchrony) and similarity in the overall amount of fundamental frequency perturbation (decreased blend with increasing jitter from both tones). To confirm the importance of centroid as an independent factor determining blend, pairs of tones including instruments with artificially changed centroids were rated for blend. 
Judgments for several versions of the same instrument pair showed that blend decreased as the altered instrument increased in centroid, corroborating the earlier experiments. Other factors manipulated were amplitude level and the degree of inharmonicity. A survey of orchestration manuals showed many illustrations of "blending" combinations of instruments that were consistent with the results of these experiments. This study's acoustically-based guidelines for blend augment instance-based methods of traditional orchestration teaching, providing underlying abstractions helpful for evaluating the blend of arbitrary combinations of instruments.

  15. Multifractal Omori law for earthquake triggering: new tests on the California, Japan and worldwide catalogues

    NASA Astrophysics Data System (ADS)

    Ouillon, G.; Sornette, D.; Ribeiro, E.

    2009-07-01

    The Multifractal Stress-Activated model is a statistical model of triggered seismicity based on mechanical and thermodynamic principles. It predicts that, above a triggering magnitude cut-off M0, the exponent p of the Omori law for the time decay of the rate of aftershocks is a linearly increasing function p(M) = a0M + b0 of the main shock magnitude M. We previously reported empirical support for this prediction, using the Southern California Earthquake Center (SCEC) catalogue. Here, we confirm this observation using an updated, longer version of the same catalogue, as well as new methods to estimate p. One of these methods is the newly defined Scaling Function Analysis (SFA), adapted from the wavelet transform. This method is able to measure a mathematical singularity (hence a p-value), erasing the possible regular part of a time series. The SFA also proves particularly efficient at revealing the coexistence and superposition of several types of relaxation laws (typical Omori sequences and short-lived swarm sequences) which can be mixed within the same catalogue. Another new method consists of monitoring the largest aftershock magnitude observed in successive time intervals, which shortcuts the problem of missing small-magnitude events in aftershock catalogues. The same methods are used on data from the worldwide Harvard Centroid Moment Tensor (CMT) catalogue and show results compatible with those of Southern California. For the Japan Meteorological Agency (JMA) catalogue, we still observe a linear dependence of p on M, but with a smaller slope. The SFA shows, however, that results for this catalogue may be biased by numerous swarm sequences, despite our efforts to remove them before the analysis.
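The Omori exponent discussed above can be illustrated with a minimal synthetic check (K, c, and p values are hypothetical; this is a plain log-log slope estimate, not the paper's SFA method):

```python
import numpy as np

# Synthetic Omori-law check: generate aftershock rates
# n(t) = K / (c + t)**p and recover the exponent p from a
# log-log slope (valid for t >> c).
K, c, p_true = 100.0, 0.1, 1.2   # hypothetical values
t = np.logspace(0, 3, 50)        # 1 to 1000 days after the main shock
rate = K / (c + t) ** p_true

# log n(t) ~ log K - p log t for t >> c, so the fitted slope gives -p.
slope = np.polyfit(np.log(t[10:]), np.log(rate[10:]), 1)[0]
p_est = -slope
```

With noisy real catalogues, and for sequences contaminated by swarms, such a direct fit is exactly what becomes unreliable, which motivates the more robust estimators the abstract describes.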

  16. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
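The Lagrangian rate-distortion selection described above can be sketched as follows (predictor names and rate/distortion figures are hypothetical; the actual codec measures these per mesh region):

```python
# Hypothetical rate/distortion figures for three candidate wavelet
# predictors in one mesh region; names and numbers are illustrative.
candidates = [
    {"name": "butterfly", "rate_bits": 1200, "distortion": 4.0},
    {"name": "loop",      "rate_bits": 900,  "distortion": 6.5},
    {"name": "average",   "rate_bits": 700,  "distortion": 9.0},
]
lam = 0.004  # Lagrange multiplier trading distortion against rate

# Rate-distortion optimal selection: minimize J = D + lambda * R.
best = min(candidates, key=lambda c: c["distortion"] + lam * c["rate_bits"])
```

Sweeping the multiplier traces out the rate-distortion curve: a small `lam` favours low distortion regardless of rate, a large `lam` favours cheap predictors.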

  17. Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing

    NASA Astrophysics Data System (ADS)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2017-04-01

    The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which significantly contribute to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and, mainly, at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] performs labeling of radar sampling volumes by using as a criterion the Euclidean distance with respect to five-dimensional centroids representing nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied on a selected representative set of radar observations, and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1], in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to the hydrometeor type i (i = 1, …, 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d → p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. 
    It ensures balanced entropy values: low for pure volumes, and high for different possible combinations of mixed hydrometeors. The parametrized entropy is then applied to real polarimetric C- and X-band radar datasets, where we demonstrate the potential of linear de-mixing using a simplex formed by a set of pre-defined centroids in the five-dimensional space. As its main outcome, the proposed approach provides plausible proportions of the different hydrometeors contained in a given radar sampling volume. [1] Besic, N., Figueras i Ventura, J., Grazioli, J., Gabella, M., Germann, U., and Berne, A.: Hydrometeor classification through statistical clustering of polarimetric radar measurements: a semi-supervised approach, Atmos. Meas. Tech., 9, 4425-4445, doi:10.5194/amt-9-4425-2016, 2016.
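The entropy-based identification of mixtures can be illustrated with a toy d → p transform (a simple softmax on the negative distance is assumed here; the paper's calibrated parametrization on synthetic datasets is not reproduced):

```python
import numpy as np

def mixture_entropy(distances):
    """Toy d -> p transform: map Euclidean distances to the nine centroids
    into class probabilities via a softmax on the negative distance, then
    return (probabilities, normalized Shannon entropy in [0, 1])."""
    d = np.asarray(distances, dtype=float)
    p = np.exp(-d)
    p /= p.sum()
    h = -(p * np.log(p)).sum() / np.log(len(p))
    return p, h

# A volume close to one centroid behaves as nearly pure (low entropy) ...
p_pure, h_pure = mixture_entropy([0.1, 3.0, 3.5, 4.0, 5.0, 5.0, 6.0, 6.0, 7.0])
# ... while one equidistant from all nine centroids is fully mixed.
p_mix, h_mix = mixture_entropy([2.0] * 9)
```

The normalization by log 9 makes the entropy comparable across volumes: 0 for a pure class assignment, 1 for a maximally ambiguous one.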

  18. Distributed Wavelet Transform for Irregular Sensor Network Grids

    DTIC Science & Technology

    2005-01-01

    implement it in a multi-hop, wireless sensor network ; and illustrate with several simulations. The new transform performs on par with conventional wavelet methods in a head-to-head comparison on a regular grid of sensor nodes.

  19. Computational study of the melting-freezing transition in the quantum hard-sphere system for intermediate densities. II. Structural features.

    PubMed

    Sesé, Luis M; Bailey, Lorna E

    2007-04-28

    The structural features of the quantum hard-sphere system in the region of the fluid-face-centered-cubic-solid transition, for reduced number densities 0.45

  20. Generic Design Procedures for the Repair of Acoustically Damaged Panels

    DTIC Science & Technology

    2008-12-01

    Nomenclature excerpt: h1, h2, h3 — thicknesses of plates for components 1, 2 and 3; h13 — distance from centroid of component 1 to centroid… Figure 4: geometry for constrained layer damping of a simply supported/clamped plate, with physical dimensions h1, h2, h3, h13, Lx, Ly, 2a; material properties E1, E3, G2.

  1. Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung; Curlander, John C.

    1991-01-01

    Paper discusses the performance of two algorithms for resolving the ambiguity in the estimated Doppler centroid frequency of echoes in synthetic-aperture radar: one based on a range cross-correlation technique, the other on a multiple-pulse-repetition-frequency technique.

  2. Complex supramolecular interfacial tessellation through convergent multi-step reaction of a dissymmetric simple organic precursor

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Qi; Paszkiewicz, Mateusz; Du, Ping; Zhang, Liding; Lin, Tao; Chen, Zhi; Klyatskaya, Svetlana; Ruben, Mario; Seitsonen, Ari P.; Barth, Johannes V.; Klappenberger, Florian

    2018-03-01

    Interfacial supramolecular self-assembly represents a powerful tool for constructing regular and quasicrystalline materials. In particular, complex two-dimensional molecular tessellations, such as semi-regular Archimedean tilings with regular polygons, promise unique properties related to their nontrivial structures. However, their formation is challenging, because current methods are largely limited to the direct assembly of precursors, that is, where structure formation relies on molecular interactions without using chemical transformations. Here, we have chosen ethynyl-iodophenanthrene (which features dissymmetry in both geometry and reactivity) as a single starting precursor to generate the rare semi-regular (3.4.6.4) Archimedean tiling with long-range order on an atomically flat substrate through a multi-step reaction. Intriguingly, the individual chemical transformations converge to form a symmetric alkynyl-Ag-alkynyl complex as the new tecton in high yields. Using a combination of microscopy and X-ray spectroscopy tools, as well as computational modelling, we show that in situ generated catalytic Ag complexes mediate the tecton conversion.

  3. Method of particle trajectory recognition in particle flows of high particle concentration using a candidate trajectory tree process with variable search areas

    DOEpatents

    Shaffer, Franklin D.

    2013-03-12

    The application relates to particle trajectory recognition from a Centroid Population comprised of Centroids having an (x, y, t) or (x, y, f) coordinate. The method is applicable to visualization and measurement of particle flow fields of high particle concentration. In one embodiment, the centroids are generated from particle images recorded on camera frames. The application encompasses digital computer systems and distribution mediums implementing the method disclosed, and is particularly applicable to recognizing trajectories of particles in particle flows of high particle concentration. The method accomplishes trajectory recognition by forming Candidate Trajectory Trees and repeated searches at varying Search Velocities, such that initial search areas are set to a minimum size in order to recognize only the slowest, least accelerating particles, which produce higher local concentrations. When a trajectory is recognized, the centroids in that trajectory are removed from consideration in future searches.
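The removal of recognized centroids from future searches can be sketched with a greedy nearest-neighbour linker (a deliberate simplification: the patent's Candidate Trajectory Trees and variable Search Velocities are not reproduced, and the coordinates are hypothetical):

```python
import math

def link_centroids(frames, search_radius):
    """Greedy sketch: extend each trajectory to the nearest centroid in the
    next frame within search_radius; linked centroids are removed from the
    pool so later searches cannot reuse them. (Input frame lists are
    mutated.)"""
    trajectories = []
    for start in list(frames[0]):
        traj, point = [start], start
        for frame in frames[1:]:
            best, best_d = None, search_radius
            for cand in frame:
                d = math.dist(point, cand)
                if d <= best_d:
                    best, best_d = cand, d
            if best is None:
                break
            frame.remove(best)   # recognized centroid leaves the pool
            traj.append(best)
            point = best
        trajectories.append(traj)
    return trajectories

# Two slow particles tracked over three frames ((x, y) per centroid).
frames = [[(0.0, 0.0), (10.0, 0.0)],
          [(0.4, 0.1), (10.5, 0.2)],
          [(0.9, 0.2), (11.0, 0.4)]]
paths = link_centroids(frames, search_radius=1.0)
```

Starting with a small `search_radius` and re-running with progressively larger values mimics the patent's strategy of recognizing the slowest particles first, thinning the centroid population before faster trajectories are searched.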

  4. Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.

    PubMed

    Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew

    2017-08-10

    When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters such as window size often require careful optimization to balance noise error, dynamic range, and linearity of the response coefficient under different photon fluxes. The method also needs to be replaced by the correlation method for extended sources. We propose a centroid estimator based on stream processing, in which the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
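The center-of-gravity baseline that the stream estimator builds on can be sketched as follows (a noise-free illustration on a synthetic Gaussian spot; the streaming window logic itself is not reproduced):

```python
import numpy as np

def cog_centroid(window):
    """Center-of-gravity (first-moment) centroid of a 2-D spot image;
    returns (x, y) in pixel coordinates."""
    win = np.asarray(window, dtype=float)
    ys, xs = np.indices(win.shape)
    total = win.sum()
    return (xs * win).sum() / total, (ys * win).sum() / total

# Synthetic noise-free Gaussian spot centred at (8.25, 7.5) on a
# 16 x 16 window (sigma = 2 pixels).
yy, xx = np.mgrid[0:16, 0:16]
spot = np.exp(-(((xx - 8.25) ** 2 + (yy - 7.5) ** 2) / (2 * 2.0 ** 2)))
cx, cy = cog_centroid(spot)
```

With photon and detector noise added, every pixel in the window contributes error weighted by its distance from the true centre, which is why the window-size optimization discussed in the abstract matters.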

  5. Effects of window size and shape on accuracy of subpixel centroid estimation of target images

    NASA Technical Reports Server (NTRS)

    Welch, Sharon S.

    1993-01-01

    A new algorithm is presented for increasing the accuracy of subpixel centroid estimation of (nearly) point target images in cases where the signal-to-noise ratio is low and the signal amplitude and shape vary from frame to frame. In the algorithm, the centroid is calculated over a data window that is matched in width to the image distribution. Fourier analysis is used to explain the dependency of the centroid estimate on the size of the data window, and simulation and experimental results are presented which demonstrate the effects of window size for two different noise models. The effects of window shape were also investigated for uniform and Gaussian-shaped windows. The new algorithm was developed to improve the dynamic range of a close-range photogrammetric tracking system that provides feedback for control of a large gap magnetic suspension system (LGMSS).

  6. Photometric analysis in the Kepler Science Operations Center pipeline

    NASA Astrophysics Data System (ADS)

    Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.

    2010-07-01

    We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.

  7. Photometric Analysis in the Kepler Science Operations Center Pipeline

    NASA Technical Reports Server (NTRS)

    Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.

    2010-01-01

    We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (thirty minute) and 512 short cadence (one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss the science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.

  8. QPO from the rapid burster

    NASA Astrophysics Data System (ADS)

    Dotani, T.

    1989-11-01

    Strong quasi-periodic oscillations (QPO) were detected with Ginga in type 2 bursts from the rapid burster. The QPO have centroid frequencies of approximately 5 and 2 Hz during bursts which lasted for approximately 10 and 30 sec, respectively. The QPO observations were analyzed and the following results were obtained: QPO centroid frequencies show some correlation with burst duration and peak count rate, but the correlations are complicated and the burst parameters do not uniquely determine the QPO centroid frequency; the appearance of the QPO is closely related to the so-called timescale-invariant profile of the bursts; the QPO are significant only in the even-numbered peaks of the profile and not in the odd-numbered peaks; in most cases the QPO centroid frequency decreases by up to approximately 25 percent during a burst. The energy spectra at the QPO peaks and valleys were investigated, and the QPO peaks were found to have significantly higher blackbody temperatures than the QPO valleys.

  9. The KS Method in Light of Generalized Euler Parameters.

    DTIC Science & Technology

    1980-01-01

    motion for the restricted two-body problem is transformed via the Kustaanheimo-Stiefel transformation method (KS) into a dynamical equation in the... Kustaanheimo-Stiefel transformation method (KS) in the two-body problem. Many papers have appeared in which specific problems or applications have... TRANSFORMATION MATRIX: P. Kustaanheimo and E. Stiefel proposed a regularization method by introducing a 4 x 4 transformation matrix and four-component

  10. A New Instantaneous Frequency Measure Based on The Stockwell Transform

    NASA Astrophysics Data System (ADS)

    yedlin, M. J.; Ben-Horrin, Y.; Fraser, J. D.

    2011-12-01

    We propose the use of a new transform, the Stockwell transform [1], as a means of creating time-frequency maps and applying them to distinguish blasts from earthquakes. The Stockwell transform can be considered a variant of the continuous wavelet transform that preserves the absolute phase; it employs a complex Morlet mother wavelet. The novelty of this transform lies in its resolution properties: high frequencies in the candidate signal are well resolved in time but poorly resolved in frequency, while the converse is true for low-frequency signal components. The goal of this research is to obtain the instantaneous frequency as a function of time for both the earthquakes and the blasts. Two methods will be compared. In the first method, we will compute the analytic signal, the envelope and the instantaneous phase as a function of time [2]; the derivative of the instantaneous phase yields the instantaneous angular frequency. The second method will be based on time-frequency analysis using the Stockwell transform, computed in non-redundant fashion using a dyadic representation [3]. For each time point, the frequency centroid will be computed -- a representation of the most likely frequency at that time. A detailed comparison will be presented for both approaches to the computation of the instantaneous frequency. An advantage of the Stockwell approach is that no differentiation is applied. The Hilbert transform method can be less sensitive to edge effects. The goal of this research is to see if the new Stockwell-based method could be used as a discriminant between earthquakes and blasts. References: [1] Stockwell, R.G., Mansinha, L. and Lowe, R.P., "Localization of the complex spectrum: the S transform", IEEE Trans. Signal Processing, vol. 44, no. 4, pp. 998-1001 (1996). [2] Taner, M.T., Koehler, F., "Complex seismic trace analysis", Geophysics, vol. 44, no. 6, pp. 1041-1063 (1979). 
[3] Brown, R.A., Lauzon, M.L. and Frayne, R. "General Description of Linear Time-Frequency Transforms and Formulation of a Fast, Invertible Transform That Samples the Continuous S-Transform Spectrum Nonredundantly", IEEE Transactions on Signal Processing, 1:281-90 (2010).
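The centroid-based instantaneous frequency measure can be sketched with a plain short-time Fourier transform standing in for the Stockwell transform (window and hop sizes are illustrative):

```python
import numpy as np

def power_centroid_frequency(signal, fs, win_len=256, hop=64):
    """Power-weighted centroid of a short-time Fourier spectrum,
    one instantaneous-frequency estimate per analysis window."""
    window = np.hanning(win_len)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    centroids = []
    for start in range(0, len(signal) - win_len + 1, hop):
        power = np.abs(np.fft.rfft(signal[start:start + win_len] * window)) ** 2
        centroids.append((freqs * power).sum() / power.sum())
    return np.array(centroids)

fs = 1000.0                              # Hz, illustrative sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 50.0 * t)      # pure 50 Hz test tone
cf = power_centroid_frequency(tone, fs)  # should hover near 50 Hz
```

Unlike the analytic-signal method, no phase derivative is taken, so the estimate inherits the smoothing of the sliding window rather than amplifying noise through differentiation.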

  11. Transforming geographic scale: a comparison of combined population and areal weighting to other interpolation methods.

    PubMed

    Hallisey, Elaine; Tai, Eric; Berens, Andrew; Wilt, Grete; Peipins, Lucy; Lewis, Brian; Graham, Shannon; Flanagan, Barry; Lunsford, Natasha Buchanan

    2017-08-07

    Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates. The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland-Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates. 
This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
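Combined population and areal weighting can be sketched with hypothetical numbers (one county, three tracts; the study's GIS overlay computations are not reproduced):

```python
# Hypothetical inputs: one county with 40 deaths and three census tracts.
# Each tract has a total population and the fraction of its area that
# falls inside the study zone.
tracts = [
    {"pop": 5000, "area_frac_in_zone": 1.0},   # fully inside the zone
    {"pop": 3000, "area_frac_in_zone": 0.5},   # half inside
    {"pop": 2000, "area_frac_in_zone": 0.0},   # entirely outside
]
county_deaths = 40

# Combined population and areal weighting: split each tract's population
# by area overlap, then allocate the county count in proportion to the
# population estimated to lie inside the zone.
pop_in_zone = sum(t["pop"] * t["area_frac_in_zone"] for t in tracts)
pop_total = sum(t["pop"] for t in tracts)
zone_deaths = county_deaths * pop_in_zone / pop_total
```

Simple areal weighting would instead allocate by area fractions alone, misplacing counts wherever population density varies across a tract, which is the gap the combined method closes.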

  12. Approximate isotropic cloak for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ghosh, Tuhin; Tarikere, Ashwin

    2018-05-01

    We construct a regular isotropic approximate cloak for the Maxwell system of equations. The method of transformation optics has enabled the design of electromagnetic parameters that cloak a region from external observation. However, these constructions are singular and anisotropic, making practical implementation difficult. Thus, regular approximations to these cloaks have been constructed that cloak a given region to any desired degree of accuracy. In this paper, we show how to construct isotropic approximations to these regularized cloaks using homogenization techniques so that one obtains cloaking of arbitrary accuracy with regular and isotropic parameters.

  13. Centroids evaluation of the images obtained with the conical null-screen corneal topographer

    NASA Astrophysics Data System (ADS)

    Osorio-Infante, Arturo I.; Armengol-Cruz, Victor de Emanuel; Campos-García, Manuel; Cossio-Guerrero, Cesar; Marquez-Flores, Jorge; Díaz-Uribe, José Rufino

    2016-09-01

    In this work, we propose algorithms to recover the centroids of the image obtained with a conical null-screen-based corneal topographer. With these algorithms, we obtain the regions of interest (ROIs) of the original image and, using an image-processing algorithm, calculate the geometric centroid of each ROI. To improve the performance of our algorithm, we use different settings of null-screen targets, changing their size and number. We also improved the illumination system to avoid inhomogeneous zones in the corneal images. Finally, we report some corneal topographic measurements with the best setting we found.

  14. An Accurate Centroiding Algorithm for PSF Reconstruction

    NASA Astrophysics Data System (ADS)

    Lu, Tianhuan; Luo, Wentao; Zhang, Jun; Zhang, Jiajun; Li, Hekun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui

    2018-07-01

    In this work, we present a novel centroiding method based on Fourier space Phase Fitting (FPF) for Point Spread Function (PSF) reconstruction. We generate two sets of simulations to test our method. The first set is generated by GalSim with an elliptical Moffat profile and strong anisotropy that shifts the center of the PSF. The second set of simulations is drawn from CFHT i-band stellar imaging data. We find non-negligible anisotropy in the CFHT stellar images, which leads to ∼0.08 pixel scatter in the centroid using a polynomial fitting method (Vakili & Hogg). When we apply the FPF method to estimate the centroid in real space, the scatter reduces to ∼0.04 pixel in the S/N = 200 CFHT-like sample. In the low signal-to-noise ratio (S/N = 50 and 100) CFHT-like samples, the background noise dominates the shifting of the centroid; therefore, the scatter estimated from the different methods is similar. We compare polynomial fitting and FPF using GalSim simulations with optical anisotropy. We find that in all S/N (50, 100, and 200) samples, FPF performs better than polynomial fitting by a factor of ∼3. In general, we suggest that in real observations there exists anisotropy that shifts the centroid, and the FPF method provides a better way to locate it accurately.
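The idea of fitting a Fourier-space phase ramp to locate a shift can be illustrated in one dimension (a simplified cross-spectrum variant, not the authors' FPF implementation; signal parameters are hypothetical):

```python
import numpy as np

def phase_fit_shift(reference, shifted):
    """Estimate the subpixel shift between two 1-D signals by fitting a
    straight line to the phase of their cross-spectrum (a simplified,
    FPF-style phase fit in Fourier space)."""
    n = len(reference)
    cross = np.fft.rfft(shifted) * np.conj(np.fft.rfft(reference))
    k = np.arange(len(cross))
    use = slice(1, n // 8)               # low frequencies: clean phase ramp
    phase = np.unwrap(np.angle(cross[use]))
    slope = np.polyfit(k[use], phase, 1)[0]
    return -slope * n / (2 * np.pi)      # phase ramp -2*pi*k*s/n -> shift s

x = np.arange(256)
ref = np.exp(-((x - 128.0) ** 2) / (2 * 5.0 ** 2))
obs = np.exp(-((x - 128.37) ** 2) / (2 * 5.0 ** 2))  # shifted by +0.37 px
shift = phase_fit_shift(ref, obs)
```

A shift in real space is a linear phase ramp in Fourier space, so fitting the ramp's slope recovers the subpixel displacement without interpolating the image itself.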

  15. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    NASA Astrophysics Data System (ADS)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optics systems, providing a large multiplex of sensors positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with the wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, accuracy is instead limited by the read noise of the detector rather than by the bundle structure defects. We demonstrate this both in simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.

  16. Can Birds Perceive Rhythmic Patterns? A Review and Experiments on a Songbird and a Parrot Species

    PubMed Central

    ten Cate, Carel; Spierings, Michelle; Hubert, Jeroen; Honing, Henkjan

    2016-01-01

    While humans can easily entrain their behavior with the beat in music, this ability is rare among animals. Yet, comparative studies in non-human species are needed if we want to understand how and why this ability evolved. Entrainment requires two abilities: (1) recognizing the regularity in the auditory stimulus and (2) adjusting one's own motor output to the perceived pattern. It has been suggested that beat perception and entrainment are linked to the capacity for vocal learning. The presence of some bird species showing beat induction, together with the existence of both vocal-learning and vocal non-learning bird taxa, makes birds relevant models for comparative research on rhythm perception and its link to vocal learning. Also, some bird vocalizations show strong regularity in rhythmic structure, suggesting that birds might perceive rhythmic structures. In this paper we review the available experimental evidence for the perception of regularity and rhythms by birds, such as the ability to distinguish regular from irregular stimuli over tempo transformations, and report data from new experiments. While some species show a limited ability to detect regularity, most evidence suggests that birds attend primarily to absolute rather than relative timing of patterns and to local features of stimuli. We conclude that, apart from some large parrot species, there is limited evidence for beat and regularity perception among birds and that the link to vocal learning is unclear. We next report new experiments in which zebra finches and budgerigars (both vocal learners) were first trained to distinguish a regular from an irregular pattern of beats and then tested on various tempo transformations of these stimuli. The results showed that discrimination declined in both species after tempo transformations. This suggests that, as was found in earlier studies, they attended mainly to local temporal features of the stimuli and not to their overall regularity. However, some individuals of both species showed an additional sensitivity to the more global pattern if some local features were left unchanged. Altogether, our study indicates both between- and within-species variation, with birds attending to a mixture of local and global rhythmic features. PMID:27242635

  17. Relativity and the TRS-80.

    ERIC Educational Resources Information Center

    Levin, Sidney

    1984-01-01

    Presents the listing (TRS-80) for a computer program which derives the relativistic equation (employing as a model the concept of a moving clock which emits photons at regular intervals) and calculates transformations of time, mass, and length with increasing velocities (Einstein-Lorentz transformations). (JN)

  18. Intraoperative cyclorotation and pupil centroid shift during LASIK and PRK.

    PubMed

    Narváez, Julio; Brucks, Matthew; Zimmerman, Grenith; Bekendam, Peter; Bacon, Gregory; Schmid, Kristin

    2012-05-01

    To determine the degree of cyclorotation and centroid shift in the x and y axes that occurs intraoperatively during LASIK and photorefractive keratectomy (PRK). Intraoperative cyclorotation and centroid shift were measured in 63 eyes from 34 patients with a mean age of 34 years (range: 20 to 56 years) undergoing either LASIK or PRK. Preoperatively, an iris image of each eye was obtained with the VISX WaveScan Wavefront System (Abbott Medical Optics Inc) with iris registration. A VISX Star S4 (Abbott Medical Optics Inc) laser was later used to measure cyclotorsion and pupil centroid shift at the beginning of the refractive procedure and after flap creation or epithelial removal. The mean change in intraoperative cyclorotation was 1.48±1.11° in LASIK eyes and 2.02±2.63° in PRK eyes. Cyclorotation direction changed by >2° in 21% of eyes after flap creation in LASIK and in 32% of eyes after epithelial removal in PRK. The mean intraoperative shifts in the x and y axes were 0.13±0.15 mm and 0.17±0.14 mm in LASIK eyes, and 0.09±0.07 mm and 0.10±0.13 mm in PRK eyes. Intraoperative centroid shifts >100 μm in either the x or y axis occurred in 71% of LASIK eyes and 55% of PRK eyes. Significant changes in cyclotorsion and centroid shifts were noted prior to surgery as well as intraoperatively with both LASIK and PRK. It may be advantageous to engage iris registration immediately prior to ablation to provide a reference point representative of eye position at the initiation of laser delivery. Copyright 2012, SLACK Incorporated.

  19. Evaluation of centroiding algorithm error for Nano-JASMINE

    NASA Astrophysics Data System (ADS)

    Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki

    2014-08-01

    The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy with which the location of each stellar image on the CCD is estimated. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors, so the centroiding algorithm must achieve high accuracy for any observable. Following the approach taken for Gaia, we use an LSF fitting method as the centroiding algorithm and investigate its systematic error for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs using Principal Component Analysis. We show that the centroiding algorithm error decreases after this method is adopted.
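    Although the mission fits full model LSFs, the core of LSF-fitting centroiding can be illustrated in one dimension: fit a parabola to the log-intensity around the brightest sample, which is exact for a noiseless Gaussian profile. A minimal sketch (illustrative only, not the Nano-JASMINE pipeline):

```python
import numpy as np

def gaussian_peak_1d(profile):
    """Sub-pixel peak location from a parabola fitted to log-intensity
    at the brightest sample and its two neighbours (exact for a
    noiseless Gaussian, since its log is exactly quadratic)."""
    i = int(np.argmax(profile))
    ym, y0, yp = np.log(profile[i - 1 : i + 2])
    return i + 0.5 * (ym - yp) / (ym - 2 * y0 + yp)

# demo: Gaussian line-spread profile with a known sub-pixel centre
x = np.arange(32)
profile = np.exp(-(x - 13.6) ** 2 / (2 * 1.5 ** 2))
est = gaussian_peak_1d(profile)
```

Real stellar images are not Gaussian, which is why a fitted (and, as above, PCA-restricted) sample LSF is used instead of this closed form.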

  20. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    PubMed

    Bashar, Md Khayrul; Komatsu, Koji; Fujimori, Toshihiko; Kobayashi, Tetsuya J

    2012-01-01

    Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopy images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish because of low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, in sharp contrast to 2D analysis, where many methods already exist. In addition, most methods essentially rely on segmentation, for which a reliable solution is still unknown, especially for 3D bio-images with juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. The method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first (Variant-1) uses the whole 3D pre-processed image, whereas the second (Variant-2) reduces the pre-processed image to candidate regions or a candidate hybrid image for further processing. In the second step, multiscale cube filtering is employed to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles in the enhanced image. An iterative procedure based on the nearest-neighbor principle is then proposed to merge fragmented nuclei, if any. Both qualitative and quantitative analyses are performed on a set of 100 images of 3D mouse embryos.
    The technique shows promising average sensitivity and precision (88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2) compared with an existing method (86.06% and 90.11%) originally developed for analyzing C. elegans images.
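    The Stage-1 idea of keeping voxels that dominate their neighbourhood and pass a ratio threshold can be sketched generically in NumPy. Note the paper's local characteristic ratio is replaced here by a simple global-maximum ratio, and all names and parameters are illustrative:

```python
import numpy as np

def local_maxima_centroids(vol, ratio=0.5):
    """Candidate centroids: voxels strictly greater than every neighbour
    in their 3x3x3 neighbourhood and above `ratio` times the global
    maximum."""
    nz, ny, nx = vol.shape
    pad = np.pad(vol, 1, mode="constant", constant_values=-np.inf)
    neigh = np.full(vol.shape, -np.inf)
    for dz in (0, 1, 2):
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                if (dz, dy, dx) == (1, 1, 1):
                    continue  # skip the centre voxel itself
                neigh = np.maximum(neigh, pad[dz:dz + nz, dy:dy + ny, dx:dx + nx])
    mask = (vol > neigh) & (vol > ratio * vol.max())
    return np.argwhere(mask)

# demo: two synthetic Gaussian "nuclei" in a 3D volume
zz, yy, xx = np.mgrid[0:20, 0:20, 0:20]
vol = np.exp(-((zz - 5) ** 2 + (yy - 5) ** 2 + (xx - 5) ** 2) / (2 * 1.5 ** 2))
vol += np.exp(-((zz - 14) ** 2 + (yy - 12) ** 2 + (xx - 10) ** 2) / (2 * 1.5 ** 2))
cents = local_maxima_centroids(vol)
```

A Stage-2-style pass would then vet these candidates by inspecting the intensity profile around each one.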

  1. Recognition algorithm for assisting ovarian cancer diagnosis from coregistered ultrasound and photoacoustic images: ex vivo study

    NASA Astrophysics Data System (ADS)

    Alqasemi, Umar; Kumavor, Patrick; Aguirre, Andres; Zhu, Quing

    2012-12-01

    Unique features, and underlying hypotheses of how these features may relate to tumor physiology, in coregistered ultrasound and photoacoustic images of ex vivo ovarian tissue are introduced. The images were first compressed with the wavelet transform. The mean Radon transform of the photoacoustic images was then computed and fitted with a Gaussian function to find the centroid of a suspicious area for a shift-invariant recognition process. Twenty-four features were extracted from a training set by several methods, including the Fourier transform, image statistics, and different composite filters. The features were chosen from more than 400 training images obtained from 33 ex vivo ovaries of 24 patients, and used to train three classifiers: a generalized linear model, a neural network, and a support vector machine (SVM). The SVM achieved the best training performance and was able to completely separate cancerous from non-cancerous cases with 100% sensitivity and specificity. Finally, the classifiers were used to test 95 new images obtained from 37 ovaries of 20 additional patients. The SVM classifier achieved 76.92% sensitivity and 95.12% specificity. Furthermore, if we assume that recognizing one image as cancerous is sufficient to consider an ovary malignant, the SVM classifier achieves 100% sensitivity and 87.88% specificity.

  2. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    NASA Astrophysics Data System (ADS)

    Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.

    1992-06-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained by the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate of the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-Gaussian positron implantation profile. The inadequacy of the present analysis scheme is evident from the derived quantities, and alternative methods are required for better predictions.

  3. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in non-target tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then, a sparsity-constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous backgrounds are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of L1 regularization.
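    One correction pass of this kind can be sketched as DCT-domain low-pass filtering followed by an L1 soft-threshold. The 2-D NumPy/SciPy sketch below is illustrative only: the paper works in 3-D inside a Tikhonov iteration loop, and `keep` and `lam` are made-up parameters, not values from the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_l1_correct(x, keep=0.5, lam=0.1):
    """One post-iteration correction: low-pass the intermediate image in
    the DCT domain, then apply an L1 soft-threshold to suppress diffuse
    background while keeping sparse, bright targets."""
    c = dctn(x, norm="ortho")
    ky = int(keep * c.shape[0])
    kx = int(keep * c.shape[1])
    mask = np.zeros_like(c)
    mask[:ky, :kx] = 1.0          # retain only low-frequency coefficients
    filtered = idctn(c * mask, norm="ortho")
    # soft-threshold (proximal operator of the L1 norm)
    return np.sign(filtered) * np.maximum(np.abs(filtered) - lam, 0.0)

# demo: one sparse "target" sitting on a weak uniform background
img = np.full((32, 32), 0.02)
img[16, 16] += 1.0
out = dct_l1_correct(img)
```

The soft-threshold zeroes the weak background while the target survives at its original location, which is the qualitative behaviour the scheme relies on.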

  4. Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier

    NASA Astrophysics Data System (ADS)

    Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar

    2015-02-01

    In this paper, a new approach for personal identification using finger vein images is presented. The finger vein is an emerging type of biometrics that attracts the attention of researchers in the biometrics area. Compared with other biometric traits such as the face, fingerprint, and iris, the finger vein is more secure and harder to counterfeit since the features are inside the human body. So far, most researchers have focused on how to extract robust features from the captured vein images; not much research has been conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is most frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database, collected from 492 fingers, shows that the proposed FkNCN performs better than the k-nearest neighbor, k-nearest centroid neighbor, and fuzzy-based k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.
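    A much-simplified sketch of centroid-neighbour classification is shown below: assign the test point to the class whose k nearest training samples have the closest local centroid. The full FkNCN surrounding rule and the fuzzy membership weighting are omitted, and all names and data are illustrative:

```python
import numpy as np

def knn_centroid_classify(X_train, y_train, x, k=3):
    """Assign x to the class whose k nearest training samples have the
    centroid closest to x (a simplified, non-fuzzy sketch of
    nearest-centroid-neighbour classification)."""
    best_label, best_dist = None, np.inf
    for label in np.unique(y_train):
        pts = X_train[y_train == label]
        d = np.linalg.norm(pts - x, axis=1)
        near = pts[np.argsort(d)[:k]]          # k nearest samples of this class
        centroid_dist = np.linalg.norm(near.mean(axis=0) - x)
        if centroid_dist < best_dist:
            best_label, best_dist = label, centroid_dist
    return best_label

# demo: two well-separated classes in a 2-D feature plane
X_train = np.array([[0, 0], [1, 0], [0, 1], [1, 1],
                    [5, 5], [6, 5], [5, 6], [6, 6]], float)
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label_a = knn_centroid_classify(X_train, y_train, np.array([0.2, 0.1]))
label_b = knn_centroid_classify(X_train, y_train, np.array([5.8, 5.9]))
```

Using local centroids rather than raw neighbours makes the decision less sensitive to a single outlying training sample, which is the motivation behind centroid-neighbour rules.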

  5. 78 FR 32991 - Connect America Fund

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-03

    ..., 2013. The full text of this document is available for public inspection during regular business hours.... Introduction 1. In the USF/ICC Transformation Order, 76 FR 73830, November 29, 2011, the Commission... the USF/ICC Transformation Order, an unsubsidized competitor in areas where the price cap carrier will...

  6. A level set approach for shock-induced α-γ phase transition of RDX

    NASA Astrophysics Data System (ADS)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level-set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux, embedded within the level-set evolution equation, which maintains the signed-distance property of the level-set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level-set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models, with no diffusion of the interface during the zero level-set evolution in the three-dimensional model. The level-set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level-set approach is efficient, robust, and easy to implement.

  7. [Transformation Regularity of Nitrogen in Aqueous Product Derived from Hydrothermal Liquefaction of Sewage Sludge in Subcritical Water].

    PubMed

    Sun, Yan-qing; Sun, Zhen; Zhang, Jing-lai

    2015-06-01

    Hydrothermal liquefaction in subcritical water is a potential way to treat sewage sludge as a resource rather than a waste. This study focused on the transformation regularity of nitrogen in the aqueous product derived from hydrothermal liquefaction of sewage sludge under different operating conditions. Results showed that, within the studied temperature scope and time span, the concentration of total nitrogen (TN) fluctuated in the range of 2867.62 mg·L(-1) to 4171.30 mg·L(-1). The two major forms of nitrogen in the aqueous product were ammonia nitrogen (NH4+-N) and organic nitrogen (Org-N): NH4+-N accounted for 54.6%-90.7% of TN, while Org-N accounted for 7.4%-44.5%. The concentration of nitrate nitrogen (NO3--N) was far lower than that of NH4+-N and Org-N. Temperature had a great influence on the transformation regularity of nitrogen: the concentrations of both TN and Org-N increased with reaction temperature. As the reaction time was prolonged, the concentrations of TN and Org-N increased, while the concentration of NH4+-N increased first, then became stationary, and then decreased slightly.

  8. Simple picture for neutrino flavor transformation in supernovae

    NASA Astrophysics Data System (ADS)

    Duan, Huaiyu; Fuller, George M.; Qian, Yong-Zhong

    2007-10-01

    We can understand many recently discovered features of flavor evolution in dense, self-coupled supernova neutrino and antineutrino systems with a simple, physical scheme consisting of two quasistatic solutions. One solution closely resembles the conventional, adiabatic single-neutrino Mikheyev-Smirnov-Wolfenstein (MSW) mechanism, in that neutrinos and antineutrinos remain in mass eigenstates as they evolve in flavor space. The other solution is analogous to the regular precession of a gyroscopic pendulum in flavor space, and has been discussed extensively in recent works. Results of recent numerical studies are best explained with combinations of these solutions in the following general scenario: (1) Near the neutrino sphere, the MSW-like many-body solution obtains. (2) Depending on neutrino vacuum mixing parameters, luminosities, energy spectra, and the matter density profile, collective flavor transformation in the nutation mode develops and drives neutrinos away from the MSW-like evolution and toward regular precession. (3) Neutrino and antineutrino flavors roughly evolve according to the regular precession solution until neutrino densities are low. In the late stage of the precession solution, a stepwise swapping develops in the energy spectra of νe and νμ/ντ. We also discuss some subtle points regarding adiabaticity in flavor transformation in dense-neutrino systems.

  9. Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research

    ERIC Educational Resources Information Center

    Ramlo, Sue

    2016-01-01

    This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…

  10. Saddlepoint Approximations in Conditional Inference

    DTIC Science & Technology

    1990-06-11

    Then the inverse transform can be written as (X, Y) = (T, q(T, Z)) for some function q. When the transform is not one to one, the domain should be...general regularity conditions described at the beginning of this section hold and that the solution t1 in (9) exists. Denote the inverse transform by (X, Y...density hn(t 0 l z) are desired. Then the inverse transform (Y, ) = (T, q(T, Z)) exists and the variable v in the cumulant generating function K(u, v

  11. Transformations in Higher Education: Online Distance Learning

    ERIC Educational Resources Information Center

    Kobayashi, Victor

    2002-01-01

    Higher education is undergoing radical shifts that are part of the larger wave of changes taking place in the society. The transformation affects all sectors of higher education, especially distance learning and how it relates to the University's regular offerings. In this article, the author begins with clarifying the terms commonly associated…

  12. Fuselet Authoring, Execution, and Management in Support of Global Strike Operations

    DTIC Science & Technology

    2008-07-01

    can be implemented in a variety of languages such as Java, Extensible Stylesheet Language Transformations (XSLT), Groovy, and Jython. A primary...measurable and manageable. By creating transformations from reusable, parameterizable components rather than ad-hoc scripts, transformation logic is...deployable to any Java 2 Platform, Enterprise Edition (J2EE) server, but is tested regularly on the JBoss Application Server (AS) version 4.0.4.GA

  13. Musical and linguistic listening modes in the speech-to-song illusion bias timing perception and absolute pitch memory.

    PubMed

    Graber, Emily; Simchy-Gross, Rhimmon; Margulis, Elizabeth Hellmuth

    2017-12-01

    The speech-to-song (STS) illusion is a phenomenon in which some spoken utterances perceptually transform to song after repetition [Deutsch, Henthorn, and Lapidis (2011). J. Acoust. Soc. Am. 129, 2245-2252]. Tierney, Dick, Deutsch, and Sereno [(2013). Cereb. Cortex. 23, 249-254] developed a set of stimuli where half tend to transform to perceived song with repetition and half do not. Those that transform and those that do not can be understood to induce a musical or linguistic mode of listening, respectively. By comparing performance on perceptual tasks related to transforming and non-transforming utterances, the current study examines whether the musical mode of listening entails higher sensitivity to temporal regularity and better absolute pitch (AP) memory compared to the linguistic mode. In experiment 1, inter-stimulus intervals within STS trials were steady, slightly variable, or highly variable. Participants reported how temporally regular utterance entrances were. In experiment 2, participants performed an AP memory task after a blocked STS exposure phase. Utterances identically matching those used in the exposure phase were targets among transposed distractors in the test phase. Results indicate that listeners exhibit heightened awareness of temporal manipulations but reduced awareness of AP manipulations to transforming utterances. This methodology establishes a framework for implicitly differentiating musical from linguistic perception.

  14. An orbit simulation study of a geopotential research mission including satellite-to-satellite tracking and disturbance compensation systems

    NASA Technical Reports Server (NTRS)

    Antreasian, Peter G.

    1988-01-01

    Two orbit simulations, one representing the actual Geopotential Research Mission (GRM) orbit and the other representing the orbit estimated from orbit determination techniques, are presented. A computer algorithm was created to simulate GRM's drag compensation mechanism so the fuel expenditure and proof mass trajectories relative to the spacecraft centroid could be calculated for the mission. The results of the GRM DISCOS simulation demonstrated that the spacecraft can essentially be drag-free. The results showed that the centroid of the spacecraft can be controlled so that it will not deviate more than 1.0 mm in any direction from the centroid of the proof mass.

  15. Null hypersurface quantization, electromagnetic duality and asymptotic symmetries of Maxwell theory

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Arpan; Hung, Ling-Yan; Jiang, Yikun

    2018-03-01

    In this paper we consider introducing careful regularization in the quantization of Maxwell theory at asymptotic null infinity. This allows systematic discussion of the commutators under various boundary conditions, and application of Dirac brackets accordingly in a controlled manner. This method is most useful when we consider asymptotic charges that are not localized at the boundary u → ±∞, such as large gauge transformations. We show that our method reproduces the operator algebra in known cases, and it can be applied to other space-time symmetry charges such as the BMS transformations. We also obtain the asymptotic form of the U(1) charge following from the electromagnetic duality in an explicitly EM-symmetric Schwarz-Sen type action. Using our regularization method, we demonstrate that the charge generates the expected transformation of a helicity operator. Our method promises applications in more generic theories.

  16. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.

    PubMed

    Berke, Ethan M; Shi, Xun

    2009-04-29

    Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins, 2) population centroids as origins, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, and 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
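    The population-centroid option amounts to a population-weighted mean of sub-unit coordinates inside each ZIP polygon. A minimal NumPy sketch (the coordinates and populations are made up for illustration):

```python
import numpy as np

def population_centroid(coords, pops):
    """Population-weighted centroid: sub-unit (e.g. census block)
    coordinates averaged with sub-unit populations as weights."""
    coords = np.asarray(coords, float)
    w = np.asarray(pops, float)
    return (coords * w[:, None]).sum(axis=0) / w.sum()

# demo: three hypothetical census blocks inside one postal area;
# the third block holds half the population, so the centroid is
# pulled toward it
c = population_centroid([(0, 0), (10, 0), (0, 10)], [1, 1, 2])
```

The geometric centroid of the same three points would sit at their unweighted mean, which is the difference the comparison above measures.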

  17. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
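    The equivalence between dynamic imaging and a line-segment spread can be checked numerically: averaging the static Gaussian PSF along the motion path produces the smeared spot, and a first-moment centroid of the noiseless smear lands at the path midpoint. A minimal sketch (grid size, path, and Gaussian radius are illustrative, not the paper's analytical model):

```python
import numpy as np

# Build a smeared star spot as the static Gaussian PSF averaged over a
# linear motion path (the line-segment spread model), then locate it
# with a first-moment centroid.
yy, xx = np.mgrid[0:32, 0:32]
sigma = 1.5
smear = np.zeros((32, 32))
path = np.linspace(10.0, 20.0, 101)      # spot drifts from x=10 to x=20
for x0 in path:
    smear += np.exp(-((yy - 16.0) ** 2 + (xx - x0) ** 2) / (2 * sigma ** 2))
smear /= len(path)

cy = (yy * smear).sum() / smear.sum()
cx = (xx * smear).sum() / smear.sum()    # expected near the midpoint (16, 15)
```

With noise added, the centroid error grows with the smear length, which is the trade-off the analytical expression and parameter optimization above address.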

  18. (E)-2-[(2,4,6-Trimethoxybenzylidene)amino]phenol.

    PubMed

    Kaewmanee, Narissara; Chantrapromma, Suchada; Boonnak, Nawong; Quah, Ching Kheng; Fun, Hoong-Kun

    2014-01-01

    There are two independent molecules in the asymmetric unit of the title compound, C16H17NO4, with similar conformations but some differences in their bond angles. Each molecule adopts a trans configuration with respect to the methylidene C=N bond and is twisted, with a dihedral angle between the two substituted benzene rings of 80.52 (7)° in one molecule and 83.53 (7)° in the other. All methoxy groups are approximately coplanar with the attached benzene rings, with Cmethyl-O-C-C torsion angles ranging from -6.7 (2) to 5.07 (19)°. In the crystal, independent molecules are linked together by O-H⋯N and O-H⋯O hydrogen bonds and a π-π interaction [centroid-centroid distance of 3.6030 (9) Å], forming a dimer. The dimers are further linked by weak C-H⋯O interactions and another π-π interaction [centroid-centroid distance of 3.9452 (9) Å] into layers lying parallel to the ab plane.

  19. Automated quasi-3D spine curvature quantification and classification

    NASA Astrophysics Data System (ADS)

    Khilari, Rupal; Puchin, Juris; Okada, Kazunori

    2018-02-01

    Scoliosis is a highly prevalent spine deformity that has traditionally been diagnosed through measurement of the Cobb angle on radiographs. More recent technology, such as the commercial EOS imaging system, although more accurate, still requires manual intervention for selecting the extremes of the vertebrae forming the Cobb angle. This results in a high degree of inter- and intra-observer error in determining the extent of spine deformity. Our primary focus is to eliminate the need for manual intervention by robustly quantifying the curvature of the spine in three dimensions, making the measurement consistent across multiple observers. Given the vertebra centroids, the proposed Vertebrae Sequence Angle (VSA) estimation and segmentation algorithm finds the largest angle between consecutive pairs of centroids within multiple inflection points on the curve. To exploit existing clinical diagnostic standards, the algorithm uses a quasi-3-dimensional approach, considering the curvature in the coronal and sagittal projection planes of the spine. Experiments were performed with manually annotated ground-truth classification of publicly available, centroid-annotated CT spine datasets, and the results were compared with those obtained from manual Cobb and centroid angle estimation methods. Using the VSA, we then automatically classify the occurrence and severity of spine curvature based on Lenke's classification for idiopathic scoliosis. The results appear promising, with a scoliotic angle lying within +/- 9° of the Cobb and centroid angles and vertebra positions differing by at most one position. Our system also achieved perfect classification of scoliotic versus healthy spines on our dataset of six cases.

  20. 360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.

    PubMed

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-11

    360-degree (360°) digitalization of three-dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the digitized solid while the solid is rotated a full revolution of 360 degrees. A computer program then typically extracts the centroid of this light-strip, and by triangulation one obtains the shape of the solid. Here, instead of using intensity-based light-strip centroid estimation, we propose to use Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase-demodulation increases linearly with the fringe density, while the strip-light centroid-estimation errors are independent of it. We propose first to construct a carrier-frequency fringe-pattern by closely adding the individual light-strip images recorded while the solid is being rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier demodulation approach, we have digitized two solids with increasing topographic complexity: a Rubik's cube and a plastic model of a human skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on centroid light-strip estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
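    The standard Fourier phase-demodulation step (Takeda-style) can be sketched in one dimension: isolate the positive carrier lobe of the fringe spectrum, inverse-transform, and unwrap the phase of the resulting analytic signal. A minimal NumPy illustration with a synthetic fringe (carrier frequency, modulation, and lobe width are made-up parameters):

```python
import numpy as np

def fourier_demodulate(fringe, f0):
    """Fourier fringe demodulation: keep the +f0 spectral lobe, inverse-
    transform, and return the unwrapped phase minus the carrier ramp."""
    n = len(fringe)
    F = np.fft.fft(fringe)
    f = np.fft.fftfreq(n)
    lobe = np.where((f > f0 / 2) & (f < 3 * f0 / 2), F, 0.0)
    analytic = np.fft.ifft(lobe)
    phase = np.unwrap(np.angle(analytic))
    return phase - 2 * np.pi * f0 * np.arange(n)

# demo: synthetic fringe with a known slow phase modulation
n = 512
x = np.arange(n)
f0 = 50.0 / n                          # carrier: 50 cycles across the signal
phi = 0.5 * np.sin(2 * np.pi * 2 * x / n)
fringe = 1.0 + np.cos(2 * np.pi * f0 * x + phi)
recovered = fourier_demodulate(fringe, f0)
```

The magnitude of the same analytic signal gives the fringe amplitude mentioned above as a by-product of the demodulation.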

  1. A comparison of methods for calculating population exposure estimates of daily weather for health research.

    PubMed

    Hanigan, Ivan; Hall, Gillian; Dear, Keith B G

    2006-09-13

To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations, but the rationale for using one technique rather than another, the significance of the differences in the values obtained, and their effect on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that are conceptually more closely related to the experience of the population. Options based on values derived from sites internal to postal areas, or from nearest-neighbour sites (that is, using proximity polygons around weather stations intersected with postal areas), tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within a 50-kilometre radius of centroids, with data weighted by distance from centroids, gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from those using population-weighted centroids or the population-weighted average of sub-unit estimates. To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most appropriate method conceptually is the use of weather data from sites within a 50-kilometre radius of the area, weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid.
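The preferred option, a distance-weighted average of stations within 50 km of a population centre, can be sketched as follows; the function name, the inverse-distance weighting form, and the missing-value handling are illustrative assumptions rather than the paper's exact scheme:

```python
def idw_estimate(stations, radius_km=50.0, power=1.0):
    """Inverse-distance-weighted daily estimate from weather stations;
    stations is a list of (distance_km_to_centroid, observed_value) pairs,
    with None marking a missing observation."""
    usable = [(d, v) for d, v in stations if d <= radius_km and v is not None]
    if not usable:
        return None                                   # no estimate possible
    wsum = sum(1.0 / max(d, 1e-6) ** power for d, _ in usable)
    return sum(v / max(d, 1e-6) ** power for d, v in usable) / wsum
```

Because several stations can contribute to each area, a single missing observation no longer produces a missing daily estimate, which is the completeness advantage the study reports.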

  2. Combining feature extraction and classification for fNIRS BCIs by regularized least squares optimization.

    PubMed

    Heger, Dominic; Herff, Christian; Schultz, Tanja

    2014-01-01

    In this paper, we show that multiple operations of the typical pattern recognition chain of an fNIRS-based BCI, including feature extraction and classification, can be unified by solving a convex optimization problem. We formulate a regularized least squares problem that learns a single affine transformation of raw HbO(2) and HbR signals. We show that this transformation can achieve competitive results in an fNIRS BCI classification task, as it significantly improves recognition of different levels of workload over previously published results on a publicly available n-back data set. Furthermore, we visualize the learned models and analyze their spatio-temporal characteristics.
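A hedged sketch of learning a single affine transformation by regularized least squares; the paper's exact objective, features, and regularizer are not specified here, so this closed-form ridge solution is only an illustration:

```python
import numpy as np

def fit_affine_ridge(X, y, lam=1.0):
    """Ridge-regularized least squares for a single affine transform
    y ~ X @ w + b, solved in closed form (bias left unpenalized)."""
    n = X.shape[0]
    Xa = np.hstack([X, np.ones((n, 1))])          # append a bias column
    I = np.eye(Xa.shape[1])
    I[-1, -1] = 0.0                               # do not penalize the bias
    w = np.linalg.solve(Xa.T @ Xa + lam * I, Xa.T @ y)
    return w[:-1], w[-1]
```

The regularizer trades training fit for generalization; with lam near zero the solution reduces to ordinary least squares.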

  3. Relationships between alcohol intake and atherogenic indices in women.

    PubMed

    Wakabayashi, Ichiro

    2013-01-01

Light-to-moderate alcohol consumption is known to reduce the risk of coronary artery disease. The purpose of this study was to investigate relationships of alcohol intake with atherogenic indices, such as the ratio of low-density lipoprotein cholesterol to high-density lipoprotein cholesterol (LDL-C/HDL-C ratio) and the ratio of triglycerides to high-density lipoprotein cholesterol (TG/HDL-C ratio), in women. Subjects (14,067 women, 20-45 years) were divided by alcohol intake into three groups of nondrinkers, occasional drinkers, and regular drinkers, and each drinker group was further divided into lower- (<22 g ethanol/drinking day) and greater- (≥ 22 g ethanol/drinking day) quantity drinkers. Atherogenic indices were compared among the alcohol groups. Odds ratio (OR) for high LDL-C/HDL-C ratio or high TG/HDL-C ratio calculated after adjustment for age, body mass index, smoking, and habitual exercise was significantly lower (P < .05) than a reference level of 1.00 in regular or occasional lower- and higher-quantity drinkers vs. nondrinkers (OR for high LDL-C/HDL-C ratio, 0.28 (95% confidence interval [95% CI], 0.18-0.44) in regular lower-quantity drinkers, 0.18 (95% CI, 0.12-0.28) in regular higher-quantity drinkers, 0.71 (95% CI, 0.61-0.83) in occasional lower-quantity drinkers, and 0.53 (95% CI, 0.44-0.64) in occasional higher-quantity drinkers; OR for high TG/HDL-C ratio, 0.52 (95% CI, 0.32-0.85) in regular lower-quantity drinkers, 0.67 (95% CI, 0.47-0.96) in regular higher-quantity drinkers, 0.61 (95% CI, 0.50-0.76) in occasional lower-quantity drinkers, and 0.63 (95% CI, 0.50-0.79) in occasional higher-quantity drinkers). Both LDL-C/HDL-C ratio and log-transformed TG/HDL-C ratio were significantly greater in smokers than in nonsmokers. Both in smokers and nonsmokers, LDL-C/HDL-C ratio and log-transformed TG/HDL-C ratio were significantly lower in regular lower- and higher-quantity drinkers than in nondrinkers.
In nonsmokers, LDL-C/HDL-C ratio and log-transformed TG/HDL-C ratio tended to be lower and greater, respectively, in regular greater-quantity drinkers than in regular lower-quantity drinkers. In women, alcohol drinking is inversely associated with atherogenic indices irrespective of smoking status, and the inverse association of alcohol drinking with LDL-C/HDL-C ratio is stronger than that with TG/HDL-C ratio. Copyright © 2013 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  4. Leading and Thriving: How Leadership Education Can Improve First-Year Student Success

    ERIC Educational Resources Information Center

    Stephens, Clinton M.; Beatty, Cameron C.

    2015-01-01

    Leadership development transforms the lives of many students and leadership educators regularly witness these changes. But little research has articulated what is being taught that facilitates this change, how we can make it happen more often, or how we can measure this change. These transformations contribute to desirable outcomes including…

  5. Temperature-dependent sex-reversal by a transformer-2 gene-edited mutation in the spotted wing drosophila, Drosophila suzukii

    USDA-ARS?s Scientific Manuscript database

    Female to male sex reversal was achieved in an emerging agricultural insect pest, Drosophila suzukii, by creating a temperature-sensitive point mutation in the sex-determination gene, transformer-2 (tra-2) using CRISPR/Cas9 (clustered regularly interspaced palindromic repeats/ CRISPR-associated) hom...

  6. Transformative Learning in Postapartheid South Africa: Disruption, Dilemma, and Direction

    ERIC Educational Resources Information Center

    Cox, Amanda J.; John, Vaughn M.

    2016-01-01

    The catalyst for learning and change in transformative learning theory has mostly been explained in terms of a disorientation in a relatively stable life. This article explores a South African, nonformal adult learning program, as a source of "orienting dilemmas," which catalyze learning and change in lives that are regularly and…

  7. Using Transformative Learning as a Framework to Explore Women and Running

    ERIC Educational Resources Information Center

    Hayduk, Dina

    2011-01-01

    This qualitative narrative inquiry explored women's self-perceptions changed through regular participation in running. Transformative learning theory was considered as a possible explanation for the learning and changes adult women experienced. In-depth interviews of 11 adult women who have been running between 1 to 4 years were conducted. Based…

  8. No regularity singularities exist at points of general relativistic shock wave interaction between shocks from different characteristic families.

    PubMed

    Reintjes, Moritz; Temple, Blake

    2015-05-08

We give a constructive proof that coordinate transformations exist which raise the regularity of the gravitational metric tensor from C^{0,1} to C^{1,1} in a neighbourhood of points of shock wave collision in general relativity. The proof applies to collisions between shock waves coming from different characteristic families, in spherically symmetric spacetimes. Our result here implies that spacetime is locally inertial and corrects an error in our earlier Proc. R. Soc. A publication, which led us to the false conclusion that such coordinate transformations, which smooth the metric to C^{1,1}, cannot exist. Thus, our result implies that regularity singularities (a type of mild singularity introduced in our Proc. R. Soc. A paper) do not exist at points of interacting shock waves from different families in spherically symmetric spacetimes. Our result generalizes Israel's celebrated 1966 paper to the case of such shock wave interactions but our proof strategy differs fundamentally from that used by Israel and is an extension of the strategy outlined in our original Proc. R. Soc. A publication. Whether regularity singularities exist in more complicated shock wave solutions of the Einstein-Euler equations remains open.

  10. (2-{[2-(diphenylphosphino)phenyl]thio}phenyl)diphenylphosphine sulfide.

    PubMed

    Alvarez-Larena, Angel; Martinez-Cuevas, Francisco J; Flor, Teresa; Real, Juli

    2012-11-01

In the title compound, C(36)H(28)P(2)S(2), the dihedral angle between the central benzene rings is 66.95 (13)°. In the crystal, molecules are linked via C(ar)-H⋯π and π-π interactions [shortest centroid-centroid distance between benzene rings = 3.897 (2) Å].

  11. Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Tian, Xin; Pan, Le-chun

    2014-07-01

Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the mixing efficiency, but the performance of traditional centroid-tracking tilt correction is poor under strong turbulence conditions. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, peak-value tracking performs distinctly better than the traditional centroid-tracking method. Moreover, the phenomenon of a large antenna performing worse than a small one, which may occur with centroid tracking, is also avoided by peak-value tracking.
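The difference between the two trackers can be seen on a toy focal-plane image, where a faint off-axis speckle (of the kind strong turbulence produces) shifts the intensity centroid but not the peak location; this is an illustrative toy, not the paper's simulation:

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (row, col) of a focal-plane image."""
    img = np.asarray(img, dtype=float)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

def spot_peak(img):
    """(row, col) of the brightest pixel."""
    img = np.asarray(img)
    return np.unravel_index(np.argmax(img), img.shape)

img = np.zeros((5, 5))
img[2, 3] = 10.0          # main spot
img[0, 0] = 2.0           # faint turbulence speckle far from the spot
```

The peak stays on the main spot while the centroid is dragged toward the speckle, which is the failure mode peak tracking avoids.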

  12. Swinger RNAs with sharp switches between regular transcription and transcription systematically exchanging ribonucleotides: Case studies.

    PubMed

    Seligmann, Hervé

    2015-09-01

During RNA transcription, DNA nucleotides A, C, G, T are usually matched by ribonucleotides A, C, G and U. Occasionally, however, this rule does not apply: transcript-DNA homologies are detectable only by assuming systematic exchanges between ribonucleotides. Nine symmetric (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric (X ↔ Y ↔ Z, e.g. A ↔ C ↔ G) exchanges exist, called swinger transcriptions. Putatively, polymerases occasionally stabilize in unspecified swinger conformations, possibly similar to transient conformations causing punctual misinsertions. This predicts chimeric transcripts, part regular, part swinger-transformed, reflecting polymerases switching to swinger polymerization conformation(s). Four chimeric GenBank transcripts (three from the human mitochondrion and one murine cytosolic) are described here: (a) the 5' and 3' extremities reflect regular polymerization while the intervening sequence exchanges systematically between ribonucleotides (swinger rule G ↔ U, transcript 1), with sharp switches between regular and swinger sequences; (b) the 5' half is 'normal' and the 3' half systematically exchanges ribonucleotides (swinger rule C ↔ G, transcript 2), with an intercalated sequence lacking homology; (c) the 3' extremity fits A ↔ G exchanges (10% of transcript length), the 5' half follows regular transcription, and the intervening region seems a mix of regular and A ↔ G transcriptions (transcript 3); (d) murine cytosolic transcript 4 switches to A ↔ U + C ↔ G and is fused with the A ↔ U + C ↔ G swinger-transformed precursor rRNA. In (c), the concomitant transcript's 5' and 3' extremities match opposite genome strands. Transcripts 3 and 4 combine transcript fusions with partial swinger transcriptions. Occasional (usually sharp) switches between regular and swinger transcription reveal greater coding potential than detected until now and suggest stable polymerase swinger conformations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Geospatial cross-correlation analysis of Oklahoma earthquakes and saltwater disposal volume 2011 - 2016

    NASA Astrophysics Data System (ADS)

    Pollyea, R.; Mohammadi, N.; Taylor, J. E.

    2017-12-01

    The annual earthquake rate in Oklahoma increased dramatically between 2009 and 2016, owing in large part to the rapid proliferation of salt water disposal wells associated with unconventional oil and gas recovery. This study presents a geospatial analysis of earthquake occurrence and SWD injection volume within a 68,420 km2 area in north-central Oklahoma between 2011 and 2016. The spatial co-variability of earthquake occurrence and SWD injection volume is analyzed for each year of the study by calculating the geographic centroid for both earthquake epicenter and volume-weighted well location. In addition, the spatial cross correlation between earthquake occurrence and SWD volume is quantified by calculating the cross semivariogram annually for a 9.6 km × 9.6 km (6 mi × 6 mi) grid over the study area. Results from these analyses suggest that the relationship between volume-weighted well centroids and earthquake centroids generally follow pressure diffusion space-time scaling, and the volume-weighted well centroid predicts the geographic earthquake centroid within a 1σ radius of gyration. The cross semivariogram calculations show that SWD injection volume and earthquake occurrence are spatially cross correlated between 2014 and 2016. These results also show that the strength of cross correlation decreased from 2015 to 2016; however, the cross correlation length scale remains unchanged at 125 km. This suggests that earthquake mitigation efforts have been moderately successful in decreasing the strength of cross correlation between SWD volume and earthquake occurrence near-field, but the far-field contribution of SWD injection volume to earthquake occurrence remains unaffected.
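The volume-weighted well centroid used in the analysis reduces to a weighted mean of well coordinates; a minimal sketch (the function name and the planar-coordinate assumption are illustrative):

```python
def weighted_centroid(points, weights=None):
    """Geographic centroid of (x, y) points, optionally weighted, e.g.
    disposal wells weighted by annual injection volume."""
    if weights is None:
        weights = [1.0] * len(points)
    total = sum(weights)
    cx = sum(w * x for (x, _), w in zip(points, weights)) / total
    cy = sum(w * y for (_, y), w in zip(points, weights)) / total
    return cx, cy
```

With unit weights this gives the plain earthquake-epicenter centroid; with injection volumes as weights it gives the volume-weighted well centroid, so the year-by-year comparison in the study needs only these two calls.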

  14. The strengths and limitations of effective centroid force models explored by studying isotopic effects in liquid water

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Li, Jicun; Li, Xin-Zheng; Wang, Feng

    2018-05-01

The development of effective centroid potentials (ECPs) is explored with both the constrained-centroid and quasi-adiabatic force matching using liquid water as a test system. A trajectory integrated with the ECP is free of the statistical noise that would be introduced when the centroid potential is approximated on the fly with a finite number of beads. With the reduced cost of the ECP, challenging experimental properties can be studied in the spirit of centroid molecular dynamics. The experimental number density of H2O is 0.38% higher than that of D2O. With the ECP, the H2O number density is predicted to be 0.42% higher when the dispersion term is not refit. After correction of finite size effects, the diffusion constant of H2O is found to be 21% higher than that of D2O, which is in good agreement with the 29.9% higher diffusivity for H2O observed experimentally. Although the ECP is also able to capture the redshifts of both the OH and OD stretching modes in liquid water, there are a number of properties that a classical simulation with the ECP will not be able to recover. For example, the heat capacities of H2O and D2O are predicted to be almost identical and higher than the experimental values. Such a failure is simply a result of not properly treating quantized vibrational energy levels when the trajectory is propagated with classical mechanics. Several limitations of the ECP-based approach without bead population reconstruction are discussed.

  15. Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, M. Y.

    1986-01-01

    This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.
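As background, one classical correlation-based Doppler centroid estimator (not necessarily either of the algorithms compared in the correspondence) takes the phase of the azimuth autocorrelation at lag one; a sketch under a noise-free assumption, with the sign convention chosen for illustration:

```python
import numpy as np

def doppler_centroid(az_samples, prf):
    """Doppler centroid from the phase of the azimuth autocorrelation at
    lag one (the classical correlation Doppler estimator)."""
    s = np.asarray(az_samples)
    r1 = np.vdot(s[:-1], s[1:])          # sum over n of conj(s[n]) * s[n+1]
    return prf * np.angle(r1) / (2.0 * np.pi)

prf, f_dc = 1000.0, 123.0
t = np.arange(256) / prf
echo = np.exp(2j * np.pi * f_dc * t)     # noise-free azimuth signal
```

The estimate is unambiguous only for centroids within +/- PRF/2, which is why practical DCE algorithms also resolve the ambiguity number.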

  16. Temporal variations in the position of the heliospheric equator

    NASA Astrophysics Data System (ADS)

    Obridko, V. N.; Shelting, B. D.

    2008-08-01

    It is shown that the centroid of the heliospheric equator undergoes quasi-periodic oscillations. During the minimum of the 11-year cycle, the centroid shifts southwards (the so-called bashful-ballerina effect). The direction of the shift reverses during the solar maximum. The solar quadrupole is responsible for this effect. The shift is compared with the tilt of the heliospheric current sheet.

  17. Automatic localization of the left ventricular blood pool centroid in short axis cardiac cine MR images.

    PubMed

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A

    2018-06-01

In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract: Fully automated localization of the left ventricular blood pool in short axis cardiac cine MR images.
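Step (iv)'s centroid construction from a segmented blood-pool mask amounts to averaging foreground pixel coordinates; a trivial sketch (not the published open source code):

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (row, col) of a binary segmentation mask."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError("empty mask")
    return rows.mean(), cols.mean()

lv = np.zeros((5, 5), dtype=bool)
lv[1:4, 2:5] = True                      # toy blood-pool region
```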

  18. Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE

    DOE PAGES

    Borges, Nicholas; Losko, Adrian; Vogel, Sven

    2018-02-13

The energy-dependence of the neutron cross section provides vastly different contrast mechanisms than polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. In this work, we present event centroiding, i.e., the determination of the center-of-gravity of a detection event on an imaging detector to allow sub-pixel spatial resolution, and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding was demonstrated at thermal neutron sources, it has not been applied to energy-resolved neutron imaging, where the energy resolution must be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used for this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution.
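Event centroiding itself, the center of gravity of an event computed in a small window around its brightest pixel, can be sketched as follows; the window size and seed choice are assumptions, not the detector pipeline's actual parameters:

```python
import numpy as np

def event_center_of_gravity(frame, seed, half=1):
    """Sub-pixel event position: intensity center of gravity in a small
    window centred on the brightest pixel of a detection event."""
    r0, c0 = seed
    win = np.asarray(frame, dtype=float)[r0 - half:r0 + half + 1,
                                         c0 - half:c0 + half + 1]
    rows, cols = np.indices(win.shape)
    total = win.sum()
    return (r0 - half + (rows * win).sum() / total,
            c0 - half + (cols * win).sum() / total)
```

An event whose charge is shared between two pixels lands between them, which is exactly the sub-pixel information that improves resolution beyond the 55 μm pixel pitch.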

  20. Automatic detection and quantitative analysis of cells in the mouse primary motor cortex

    NASA Astrophysics Data System (ADS)

    Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui

    2014-09-01

Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations in cellular density distribution across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
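Steps ii and iii (binarization followed by centroid extraction) can be sketched with a simple connected-component pass; this is an illustrative stand-in, not the authors' pipeline:

```python
import numpy as np
from collections import deque

def cell_centroids(img, thresh):
    """Binarize an image, then return the centroid of every 4-connected
    foreground component (stand-in for binarization + centroid extraction)."""
    mask = np.asarray(img) > thresh
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    centroids = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                comp, queue = [], deque([(r, c)])
                seen[r, c] = True
                while queue:                      # flood fill one component
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            queue.append((ni, nj))
                centroids.append((sum(p[0] for p in comp) / len(comp),
                                  sum(p[1] for p in comp) / len(comp)))
    return centroids
```

Counting the returned centroids per cortical layer then yields the laminar density estimates of step iv.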

  1. Star centroiding error compensation for intensified star sensors.

    PubMed

    Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun

    2016-12-26

A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases star centroiding accuracy, which in turn influences the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points are obtained from the model by using the Levenberg-Marquardt (LM) optimization method. Finally, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration results and a night sky experiment show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.

  2. Fusing Image Data for Calculating Position of an Object

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang; Liebersbach, Robert; Trebi-Ollenu, Ashitey

    2007-01-01

A computer program has been written for use in maintaining the calibration, with respect to the positions of imaged objects, of a stereoscopic pair of cameras on each of the Mars Exploration Rovers Spirit and Opportunity. The program identifies and locates a known object in the images. The object in question is part of a Moessbauer spectrometer located at the tip of a robot arm, the kinematics of which are known. In the program, the images are processed through a module that extracts edges, combines the edges into line segments, and then derives ellipse centroids from the line segments. The images are also processed by a feature-extraction algorithm that performs a wavelet analysis, then performs a pattern-recognition operation in the wavelet-coefficient space to determine matches to a texture feature measure derived from the horizontal, vertical, and diagonal coefficients. The centroids from the ellipse finder and the wavelet feature matcher are then fused to determine co-location. In the event that a match is found, the centroid (or centroids if multiple matches are present) is reported. If no match is found, the process reports the results of the analyses for further examination by human experts.

  3. Fostering Transformative Learning in Non-Formal Settings: Farmer-Field Schools in East Africa

    ERIC Educational Resources Information Center

    Taylor, Edward W.; Duveskog, Deborah; Friis-Hansen, Esbern

    2012-01-01

    The purpose of this study was to explore the practice of Farmer-Field Schools (FFS) theoretically framed from the perspective of transformative learning theory and non-formal education (NFE). Farmer-Field Schools are community-led NFE programs that provide a platform where farmers meet regularly to study the "how and why" of farming and…

  4. Decentralised consensus-based formation tracking of multiple differential drive robots

    NASA Astrophysics Data System (ADS)

    Chu, Xing; Peng, Zhaoxia; Wen, Guoguang; Rahmani, Ahmed

    2017-11-01

This article investigates the formation-tracking control problem for multiple nonholonomic robots in a distributed manner, meaning that each robot needs only local information exchange. A class of general state and input transformations is introduced to convert the formation-tracking problem of multi-robot systems into a consensus-like problem with a time-varying reference. A distributed observer-based protocol with nonlinear dynamics is developed for each robot to achieve consensus tracking of the new system, which means that a group of nonholonomic mobile robots can form the desired formation configuration with its centroid moving along the predefined reference trajectory. The finite-time stability of the observer and control law is analysed rigorously using the Lyapunov direct method, algebraic graph theory and matrix analysis. Numerical examples are finally provided to illustrate the effectiveness of the theoretical results proposed in this paper.
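The consensus-like reformulation can be illustrated with a basic discrete-time consensus update, in which each agent's state moves toward its neighbours over the communication graph; the gains, the graph, and the simple asymptotic (rather than finite-time) protocol here are illustrative assumptions:

```python
import numpy as np

def consensus_step(x, A, ref_inc, k=0.1):
    """One discrete consensus-tracking update: each agent moves toward its
    neighbours (adjacency weights A) and follows a common reference increment."""
    x = np.asarray(x, dtype=float)
    dx = np.array([k * sum(A[i][j] * (x[j] - x[i]) for j in range(len(x)))
                   for i in range(len(x))])
    return x + dx + ref_inc

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # complete graph on 3 agents
x = np.array([0.0, 3.0, 6.0])
for _ in range(100):                     # agents converge to their common mean
    x = consensus_step(x, A, ref_inc=0.0)
```

With a nonzero `ref_inc` the agreed value drifts along the reference, which is the consensus-with-time-varying-reference structure the transformed system has; the paper's observer-based protocol additionally guarantees convergence in finite time.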

  5. Cherry recognition in natural environment based on the vision of picking robot

    NASA Astrophysics Data System (ADS)

    Zhang, Qirong; Chen, Shanxiong; Yu, Tingzhong; Wang, Yan

    2017-04-01

In order to realize the automatic recognition of cherries in the natural environment, this paper designed a recognition method for a robot vision system. The first step of this method is to pre-process the cherry image by median filtering. The second step is to identify the colour of the cherry through the 0.9R-G colour difference formula and then use the Otsu algorithm for threshold segmentation. The third step is to remove noise by using an area threshold. The fourth step is to remove holes in the cherry image by morphological closing and opening operations. The fifth step is to obtain the centroid and contour of the cherry by using the smallest enclosing rectangle and the Hough transform. Through this recognition process, we can successfully identify 96% of cherries without occlusion and adhesion.
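The Otsu threshold-segmentation step in this pipeline selects the gray level that maximizes between-class variance; a minimal sketch (the bin count and histogram formulation are implementation choices, not taken from the paper):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the level maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k - 1]
    return best_t

rng = np.random.default_rng(1)
# bimodal toy data: background-like values near 20, cherry-like values near 200
pixels = np.concatenate([rng.normal(20, 5, 500), rng.normal(200, 5, 500)])
t = otsu_threshold(pixels)
```

Applied to the 0.9R-G colour-difference image, such a threshold separates red cherry pixels from foliage background without a hand-tuned cutoff.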

  6. COSMIC SHEAR MEASUREMENT USING AUTO-CONVOLVED IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xiangchong; Zhang, Jun, E-mail: betajzhang@sjtu.edu.cn

    2016-10-20

We study the possibility of using quadrupole moments of auto-convolved galaxy images to measure cosmic shear. The autoconvolution of an image corresponds to the inverse Fourier transformation of its power spectrum. The new method has the following advantages: the smearing effect due to the point-spread function (PSF) can be corrected by subtracting the quadrupole moments of the auto-convolved PSF; the centroid of the auto-convolved image is trivially identified; the systematic error due to noise can be directly removed in Fourier space; the PSF image can also contain noise, the effect of which can be similarly removed. With a large ensemble of simulated galaxy images, we show that the new method can reach a sub-percent level accuracy under general conditions, albeit with increasingly large stamp size for galaxies of less compact profiles.
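The "trivially identified centroid" property can be checked numerically: taking the abstract's definition of the auto-convolved image as the inverse Fourier transform of the power spectrum, the result is centro-symmetric, so its centroid sits exactly at the shifted zero-lag pixel (a sketch of that property only, not the authors' shear estimator):

```python
import numpy as np

def auto_convolve(img):
    """Auto-convolved image formed as the inverse FFT of the power
    spectrum (per the abstract); the result is centro-symmetric, so its
    centroid is pinned to the zero-lag pixel regardless of the galaxy."""
    F = np.fft.fft2(img)
    ac = np.fft.ifft2(np.abs(F) ** 2).real
    return np.fft.fftshift(ac)               # move the symmetry centre mid-image

rng = np.random.default_rng(2)
galaxy = rng.random((9, 9))                  # arbitrary non-negative stamp
ac = auto_convolve(galaxy)
```

Because the centroid is fixed by construction, quadrupole moments of `ac` can be measured without the centroid-finding step that ordinary moment methods require.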

  7. Juvenile Offenders: Characteristics and Reasons Why They Drop Out of Regular Education, in Valparaiso Region

    ERIC Educational Resources Information Center

    Muñoz-Salazar, Patricia; Acuña-Collado, Violeta

    2016-01-01

In Chile, adult education has been drastically transformed in recent decades, both in curriculum reform and in the age of its students. Today, the users of this education are no longer working adults who need to complete their studies in order to work; they are mostly young teenagers who dropped out of regular education. The problem is that because their…

  8. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors of the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet the quality requirement becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms solely using a regularization model of the spatial domain. Quantitative evaluations of the results also indicate that the proposed algorithm, applying the learning strategy, performs better than dual domain algorithms without a learned regularization model.

  9. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.

  10. Research of centroiding algorithms for extended and elongated spot of sodium laser guide star

    NASA Astrophysics Data System (ADS)

    Shao, Yayun; Zhang, Yudong; Wei, Kai

    2016-10-01

Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. But the spot array obtained by Shack-Hartmann wavefront sensors (WFSs) becomes extended and elongated, due to the thickness and size limitation of the sodium LGS, which affects the accuracy of the wavefront reconstruction algorithm. In this paper, we compare three different centroiding algorithms, the Center-of-Gravity (CoG), weighted CoG (WCoG) and Intensity Weighted Centroid (IWC), as well as their accuracies for various extended and elongated spots. In addition, we compare the reconstructed image data from these three algorithms with theoretical results, and show that WCoG and IWC are the best wavefront reconstruction algorithms for extended and elongated spots among all the algorithms considered.
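The three estimators compared in the abstract have standard textbook forms, sketched here on a noise-free elongated Gaussian spot; the spot parameters and the Gaussian weighting window are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

def cog(I, ys, xs):
    """Classical Center of Gravity."""
    s = I.sum()
    return (I * ys).sum() / s, (I * xs).sum() / s

def wcog(I, ys, xs, w):
    """Weighted CoG: apply a weighting window before the CoG."""
    return cog(I * w, ys, xs)

def iwc(I, ys, xs):
    """Intensity Weighted Centroid: each pixel weighted by its own intensity."""
    return cog(I * I, ys, xs)

n = 65
ys, xs = np.mgrid[:n, :n].astype(float)
cy0, cx0 = 30.0, 34.0
# elongated spot: sigma_x = 6 > sigma_y = 2
I = np.exp(-((ys - cy0) ** 2 / (2 * 2.0 ** 2) + (xs - cx0) ** 2 / (2 * 6.0 ** 2)))
# Gaussian weighting window centred on the subaperture centre (32, 32)
w = np.exp(-((ys - n // 2) ** 2 + (xs - n // 2) ** 2) / (2 * 8.0 ** 2))
est_cog = cog(I, ys, xs)
est_wcog = wcog(I, ys, xs, w)   # pulled toward the window centre
est_iwc = iwc(I, ys, xs)
print(est_cog, est_wcog, est_iwc)
```

Even without noise, the sketch shows the known trade-off: a mis-centred weighting window biases WCoG toward the window centre, while CoG and IWC stay on the true spot centre.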

  11. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

In this paper, we propose a trajectory data privacy protection scheme based on the differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user’s trajectory data; secondly, the algorithm forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and then calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the polygon centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. The experiments show that the running time of the proposed algorithms is fast, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
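A minimal sketch of the final two steps, polygon centroid plus Laplace-mechanism noise. The shoelace centroid formula is standard; the sensitivity value and ε below are illustrative assumptions, since the scheme's actual sensitivity analysis is not given in the abstract:

```python
import numpy as np

def polygon_centroid(pts):
    """Centroid of a simple polygon via the shoelace formula."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    a = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * a)
    cy = ((y + yn) * cross).sum() / (6.0 * a)
    return cx, cy

def perturb(point, sensitivity, eps, rng):
    """epsilon-DP release of a 2-D point via the Laplace mechanism."""
    return point + rng.laplace(0.0, sensitivity / eps, size=2)

rng = np.random.default_rng(1)
# polygon built from a protected point and nearby frequently accessed points
poly = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
cx, cy = polygon_centroid(poly)          # (2.0, 2.0) for this square
noisy = perturb(np.array([cx, cy]), sensitivity=1.0, eps=0.5, rng=rng)
print(cx, cy, noisy)
```

The noisy centroid, not the raw protected point, is what the published trajectory would contain.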

  12. 3-Phenyl-6-(2-pyridyl)-1,2,4,5-tetrazine.

    PubMed

    Chartrand, Daniel; Laverdière, François; Hanan, Garry

    2007-12-06

The title compound, C(13)H(9)N(5), is the first asymmetric diaryl-1,2,4,5-tetrazine to be crystallographically characterized. We have been interested in this motif for incorporation into supramolecular assemblies based on coordination chemistry. The solid state structure shows a centrosymmetric molecule, forcing a positional disorder of the terminal phenyl and pyridyl rings. The molecule is completely planar, unusual for aromatic rings with N atoms in adjacent ortho positions. The stacking observed is very common in diaryltetrazines and is dominated by π stacking [centroid-to-centroid distance between the tetrazine ring and the aromatic ring of an adjacent molecule is 3.6 Å, perpendicular (centroid-to-plane) distance of about 3.3 Å].

  13. The interpretation of crustal dynamics data in terms of plate motions and regional deformation near plate boundaries

    NASA Astrophysics Data System (ADS)

    Solomon, Sean C.

During our participation in the NASA Crustal Dynamics Project under NASA contract NAS-27339 and grant NAG5-814 for the period 1982-1991, we published or submitted for publication 30 research papers and 52 abstracts of presentations at scientific meetings. In addition, five M.I.T. Ph.D. students (Eric Bergman, Steven Bratt, Dan Davis, Jeanne Sauber, Anne Sheehan) were supported wholly or in part by this project during their thesis research. Highlights of our research progress during this period include the following: application of geodetic data to determine rates of strain in the Mojave block and in central California and to clarify the relation of such strain to the San Andreas fault and Pacific-North American plate motions; application of geodetic data to infer postseismic deformation associated with large earthquakes in the Imperial Valley, Hebgen Lake, Argentina, and Chile; determination of the state of stress in oceanic lithosphere from a systematic study of the centroid depths and source mechanisms of oceanic intraplate earthquakes; development of models for the state of stress in young oceanic regions arising from the differential cooling of the lithosphere; determination of the depth extent and rupture characteristics of oceanic transform earthquakes; improved determination of earthquake slip vectors in the Gulf of California, an important data set for the estimation of Pacific-North American plate motions; development of models for the state of stress and mechanics of fold-and-thrust belts and accretionary wedges; development of procedures to invert geoid height, residual bathymetry, and differential body wave travel time residuals for lateral variations in the characteristic temperature and bulk composition of the oceanic upper mantle; and initial GPS measurements of crustal deformation associated with the Imperial-Cerro Prieto fault system in southern California and northern Mexico. Full descriptions of the research conducted on these topics may be found in the Semi-Annual Status Reports submitted regularly to NASA over the course of this project and in the publications listed.

  14. The interpretation of crustal dynamics data in terms of plate motions and regional deformation near plate boundaries

    NASA Technical Reports Server (NTRS)

    Solomon, Sean C.

    1991-01-01

During our participation in the NASA Crustal Dynamics Project under NASA contract NAS-27339 and grant NAG5-814 for the period 1982-1991, we published or submitted for publication 30 research papers and 52 abstracts of presentations at scientific meetings. In addition, five M.I.T. Ph.D. students (Eric Bergman, Steven Bratt, Dan Davis, Jeanne Sauber, Anne Sheehan) were supported wholly or in part by this project during their thesis research. Highlights of our research progress during this period include the following: application of geodetic data to determine rates of strain in the Mojave block and in central California and to clarify the relation of such strain to the San Andreas fault and Pacific-North American plate motions; application of geodetic data to infer postseismic deformation associated with large earthquakes in the Imperial Valley, Hebgen Lake, Argentina, and Chile; determination of the state of stress in oceanic lithosphere from a systematic study of the centroid depths and source mechanisms of oceanic intraplate earthquakes; development of models for the state of stress in young oceanic regions arising from the differential cooling of the lithosphere; determination of the depth extent and rupture characteristics of oceanic transform earthquakes; improved determination of earthquake slip vectors in the Gulf of California, an important data set for the estimation of Pacific-North American plate motions; development of models for the state of stress and mechanics of fold-and-thrust belts and accretionary wedges; development of procedures to invert geoid height, residual bathymetry, and differential body wave travel time residuals for lateral variations in the characteristic temperature and bulk composition of the oceanic upper mantle; and initial GPS measurements of crustal deformation associated with the Imperial-Cerro Prieto fault system in southern California and northern Mexico. Full descriptions of the research conducted on these topics may be found in the Semi-Annual Status Reports submitted regularly to NASA over the course of this project and in the publications listed.

  15. Nonrigid mammogram registration using mutual information

    NASA Astrophysics Data System (ADS)

    Wirth, Michael A.; Narhan, Jay; Gray, Derek W. S.

    2002-05-01

    Of the papers dealing with the task of mammogram registration, the majority deal with the task by matching corresponding control-points derived from anatomical landmark points. One of the caveats encountered when using pure point-matching techniques is their reliance on accurately extracted anatomical features-points. This paper proposes an innovative approach to matching mammograms which combines the use of a similarity-measure and a point-based spatial transformation. Mutual information is a cost-function used to determine the degree of similarity between the two mammograms. An initial rigid registration is performed to remove global differences and bring the mammograms into approximate alignment. The mammograms are then subdivided into smaller regions and each of the corresponding subimages is matched independently using mutual information. The centroids of each of the matched subimages are then used as corresponding control-point pairs in association with the Thin-Plate Spline radial basis function. The resulting spatial transformation generates a nonrigid match of the mammograms. The technique is illustrated by matching mammograms from the MIAS mammogram database. An experimental comparison is made between mutual information incorporating purely rigid behavior, and that incorporating a more nonrigid behavior. The effectiveness of the registration process is evaluated using image differences.
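The similarity measure at the heart of the method, mutual information computed from a joint grey-level histogram, can be sketched as follows; the bin count and the synthetic test images are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI from the joint grey-level histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(2)
ref = rng.random((128, 128))
aligned = ref + 0.05 * rng.random((128, 128))   # nearly the same image
shifted = np.roll(ref, 20, axis=1)              # misaligned copy
m_aligned = mutual_information(ref, aligned)
m_shifted = mutual_information(ref, shifted)
print(m_aligned, m_shifted)
```

Registration maximises this quantity: MI is high when the images are in alignment and drops sharply for the misaligned pair, which is what drives the subimage matching step.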

  16. Enhanced K-means clustering with encryption on cloud

    NASA Astrophysics Data System (ADS)

    Singh, Iqjot; Dwivedi, Prerna; Gupta, Taru; Shynu, P. G.

    2017-11-01

This paper tries to solve the problem of storing and managing big files over the cloud by implementing hashing on Hadoop for big data, and to ensure security while uploading and downloading files. Cloud computing is a term that emphasises sharing data and facilitates sharing infrastructure and resources. [10] Hadoop is open source software that gives us access to store and manage big files according to our needs on the cloud. The K-means clustering algorithm is an algorithm used to calculate the distance between the centroid of the cluster and the data points. Hashing is an algorithm in which we store and retrieve data with hash keys. The hashing algorithm is called a hash function, which is used to portray the original data and later to fetch the data stored at the specific key. [17] Encryption is a process to transform electronic data into a non-readable form known as cipher text. Decryption is the opposite process of encryption; it transforms the cipher text into plain text that the end user can read and understand well. For encryption and decryption we use a symmetric key cryptographic algorithm. In symmetric key cryptography we use the DES algorithm for secure storage of the files. [3]
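A sketch of the two building blocks named in the abstract, the centroid-distance step of K-means and key-based hash storage. SHA-256 is used here as a stand-in hash function and the data are toy values; the abstract does not specify the hash function or the data:

```python
import hashlib
import numpy as np

def kmeans_step(X, C):
    """One Lloyd iteration: assign points to the nearest centroid, recompute."""
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)  # point-centroid distances
    labels = d.argmin(axis=1)
    newC = np.array([X[labels == k].mean(axis=0) for k in range(len(C))])
    return labels, newC

def storage_key(record: bytes) -> str:
    """Hash key used to store/retrieve a file chunk (SHA-256 here)."""
    return hashlib.sha256(record).hexdigest()

X = np.array([[0.0, 0.0], [0.5, 0.2], [5.0, 5.0], [5.2, 4.8]])
labels, C = kmeans_step(X, np.array([[0.0, 0.0], [5.0, 5.0]]))
key = storage_key(b"block-0001")
print(labels, C)
print(key[:16])
```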

  17. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or, say, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
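For contrast with the paper's singularity-free projector (whose exact form is not given in the abstract), the classical projection-based gradient flow it improves on can be sketched as:

```python
import numpy as np

# Classical projection-based gradient flow for min f(x) s.t. h(x) = 0.
# NOTE: this uses the standard projector P = I - J^+ J (via pinv), which
# relies on the regularity assumption (J of full row rank); the paper's
# contribution is a modified projector that avoids that assumption.
def constrained_flow(grad_f, jac_h, x, dt=0.01, steps=2000):
    for _ in range(steps):
        J = np.atleast_2d(jac_h(x))
        P = np.eye(len(x)) - np.linalg.pinv(J) @ J  # tangent-space projector
        x = x - dt * P @ grad_f(x)                  # Euler step along the flow
    return x

# example: minimise |x|^2 subject to x0 + x1 = 1 -> optimum (0.5, 0.5)
grad_f = lambda x: 2 * x
jac_h = lambda x: np.array([[1.0, 1.0]])
x = constrained_flow(grad_f, jac_h, np.array([1.0, 0.0]))
print(x)
```

Because the update is projected onto the tangent space of the constraint, x0 + x1 stays equal to 1 along the whole flow while the objective decreases.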

  18. On the gestalt concept.

    PubMed

    Breidbach, Olaf; Jost, Jürgen

    2006-08-01

    We define a gestalt as the invariants of a collection of patterns that can mutually be transformed into each other through a class of transformations encoded by, or conversely, determining that gestalt. The class of these transformations needs to satisfy structural regularities like the ones of the mathematical structure of a group. This makes an analysis of a gestalt possible in terms of relations between its representing patterns. While the gestalt concept has its origins in cognitive psychology, it has also important implications for morphology.

  19. Automated Slicing for a Multi-Axis Metal Deposition System (Preprint)

    DTIC Science & Technology

    2006-09-01

experimented with different materials like H13 tool steel to build the part. Following the same slicing and scanning toolpath result, there is a geometry reasoning and analysis tool, the centroidal axis. Similar to the medial axis, it contains geometry and topological information but is significantly computationally…

  20. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.

  1. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H; Barbee, D; Wang, W

Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pairs. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
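The probability-weighted HU assignment can be sketched with the standard fuzzy c-means membership formula; the five (CBCT, CT) centroid pairs below are hypothetical illustration values, not the study's fitted centroids:

```python
import numpy as np

def fcm_memberships(v, centroids, m=2.0):
    """Standard fuzzy c-means membership of value v w.r.t. 1-D class centroids."""
    d = np.abs(v - centroids) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

# hypothetical centroid pairs (CBCT HU, CT HU) for five tissue classes
cbct_cent = np.array([-950.0, -600.0, -80.0, 40.0, 700.0])
ct_cent = np.array([-1000.0, -700.0, -100.0, 50.0, 900.0])

def synthetic_ct_hu(cbct_hu):
    """Probability-weighted sum of the classes' CT centroids."""
    u = fcm_memberships(cbct_hu, cbct_cent)
    return float(u @ ct_cent)

hu_at_centroid = synthetic_ct_hu(40.0)    # voxel right at a centroid -> its CT value
hu_mix = synthetic_ct_hu(-300.0)          # mixture of neighbouring classes
print(hu_at_centroid, hu_mix)
```

The soft memberships are what makes the mapping robust to CBCT noise: a voxel between classes is assigned a blended HU rather than snapping to the nearest class.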

  2. State of Metropolitan America: On the Front Lines of Demographic Transformation

    ERIC Educational Resources Information Center

    Brookings Institution, 2010

    2010-01-01

    This report marks the inaugural edition of a regular summary report in Brookings' "State of Metropolitan America" series. It focuses on the major demographic forces transforming the nation and large metropolitan areas in the 2000s. In this sense, it previews what people will learn from the results of the 2010 census, as well as supplements those…

  3. Study of preparation of TiB{sub 2} by TiC in Al melts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding Haimin; Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061; Liu Xiangfa, E-mail: xfliu@sdu.edu.cn

    2012-01-15

TiB{sub 2} particles are prepared by TiC in Al melts and their characteristics are studied. It is found that TiC particles are unstable when boron exists in Al melts at high temperature and will transform to TiB{sub 2} and Al{sub 4}C{sub 3}. Most of the synthesized TiB{sub 2} particles are regular hexagonal prisms of submicron size. The diameter of the undersurfaces of these prisms ranges from 200 nm to 1 {mu}m and the height ranges from 100 nm to 300 nm. It is considered that controlling the transformation from TiC to TiB{sub 2} is an effective method to prepare small and uniform TiB{sub 2} particles. - Highlights: ► TiC can easily transform into TiB{sub 2} in Al melts. ► TiB{sub 2} formed by TiC will grow into regular hexagonal prisms with submicron size. ► Controlling the transformation from TiC to TiB{sub 2} is an effective method to prepare small and uniform TiB{sub 2} particles.

  4. Regular Mechanical Transformation of Rotations Into Translations: Part 1. Kinematic Analysis and Definition of the Basic Characteristics

    NASA Astrophysics Data System (ADS)

    Abadjieva, Emilia; Abadjiev, Valentin

    2017-06-01

The science that studies the processes of motion transformation, upon a preliminarily defined law, between non-coplanar axes of rotation (in the general case), or between an axis of rotation and a direction of rectilinear translation, by three-link mechanisms equipped with high kinematic joints, can be treated as an independent branch of Applied Mechanics. It deals with the mechanical behaviour of these multibody systems in relation to the kinematic and geometric characteristics of the elements of the high kinematic joints that form them. The object of study here is the process of regular transformation of rotation into translation. The developed mathematical model is applied to the defined task of studying the sliding velocity vector function at the contact point of the surface elements of arbitrary high kinematic joints. The main kinematic characteristics of the studied type of motion transformation (kinematic cylinders on level, kinematic relative helices (helical conoids) and kinematic pitch configurations) are defined on the basis of the presented analysis. These features expand the theoretical knowledge that is the objective of gearing theory. They also complement the system of kinematic and geometric primitives that forms the mathematical model for the synthesis of spatial rack mechanisms.

  5. THE SLOAN DIGITAL SKY SURVEY REVERBERATION MAPPING PROJECT: BIASES IN z  > 1.46 REDSHIFTS DUE TO QUASAR DIVERSITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denney, K. D.; Peterson, B. M.; Horne, Keith

We use the coadded spectra of 32 epochs of Sloan Digital Sky Survey (SDSS) Reverberation Mapping Project observations of 482 quasars with z  > 1.46 to highlight systematic biases in the SDSS- and Baryon Oscillation Spectroscopic Survey (BOSS)-pipeline redshifts due to the natural diversity of quasar properties. We investigate the characteristics of this bias by comparing the BOSS-pipeline redshifts to an estimate from the centroid of He ii λ 1640. He ii has a low equivalent width but is often well-defined in high-S/N spectra, does not suffer from self-absorption, and has a narrow component which, when present (the case for about half of our sources), produces a redshift estimate that, on average, is consistent with that determined from [O ii] to within the He ii and [O ii] centroid measurement uncertainties. The large redshift differences of ∼1000 km s{sup −1}, on average, between the BOSS-pipeline and He ii-centroid redshifts, suggest there are significant biases in a portion of BOSS quasar redshift measurements. Adopting the He ii-based redshifts shows that C iv does not exhibit a ubiquitous blueshift for all quasars, given the precision probed by our measurements. Instead, we find a distribution of C iv-centroid blueshifts across our sample, with a dynamic range that (i) is wider than that previously reported for this line, and (ii) spans C iv centroids from those consistent with the systemic redshift to those with significant blueshifts of thousands of kilometers per second. These results have significant implications for measurement and use of high-redshift quasar properties and redshifts, and studies based thereon.

  6. Spotting stellar activity cycles in Gaia astrometry

    NASA Astrophysics Data System (ADS)

    Morris, Brett M.; Agol, Eric; Davenport, James R. A.; Hawley, Suzanne L.

    2018-06-01

    Astrometry from Gaia will measure the positions of stellar photometric centroids to unprecedented precision. We show that the precision of Gaia astrometry is sufficient to detect starspot-induced centroid jitter for nearby stars in the Tycho-Gaia Astrometric Solution (TGAS) sample with magnetic activity similar to the young G-star KIC 7174505 or the active M4 dwarf GJ 1243, but is insufficient to measure centroid jitter for stars with Sun-like spot distributions. We simulate Gaia observations of stars with 10 year activity cycles to search for evidence of activity cycles, and find that Gaia astrometry alone likely cannot detect activity cycles for stars in the TGAS sample, even if they have spot distributions like KIC 7174505. We review the activity of the nearby low-mass stars in the TGAS sample for which we anticipate significant detections of spot-induced jitter.

  7. 3-Phenyl-6-(2-pyridyl)-1,2,4,5-tetrazine

    PubMed Central

    Chartrand, Daniel; Laverdière, François; Hanan, Garry

    2008-01-01

The title compound, C13H9N5, is the first asymmetric diaryl-1,2,4,5-tetrazine to be crystallographically characterized. We have been interested in this motif for incorporation into supramolecular assemblies based on coordination chemistry. The solid state structure shows a centrosymmetric molecule, forcing a positional disorder of the terminal phenyl and pyridyl rings. The molecule is completely planar, unusual for aromatic rings with N atoms in adjacent ortho positions. The stacking observed is very common in diaryltetrazines and is dominated by π stacking [centroid-to-centroid distance between the tetrazine ring and the aromatic ring of an adjacent molecule is 3.6 Å, perpendicular (centroid-to-plane) distance of about 3.3 Å]. PMID:21200916

  8. Use of incomplete energy recovery for the energy compression of large energy spread charged particle beams

    DOEpatents

    Douglas, David R [Newport News, VA; Benson, Stephen V [Yorktown, VA

    2007-01-23

A method of energy recovery for RF-based linear charged particle accelerators that allows energy recovery without large relative momentum spread of the particle beam, involving first accelerating a waveform particle beam having a crest and a centroid with an injection energy E.sub.o, with the centroid of the particle beam at a phase offset f.sub.o from the crest of the accelerating waveform, to an energy E.sub.full, and then recovering the beam energy centroid at a phase f.sub.o+Df relative to the crest of the waveform particle beam such that (E.sub.full-E.sub.o)(1+cos(f.sub.o+Df))>dE/2, wherein dE=the full energy spread, dE/2=the full energy half spread and Df=the waveform phase distance.
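The claimed recovery criterion can be checked numerically; the energies, phases, and spread below are hypothetical example values, not taken from the patent:

```python
import math

def recovery_condition(E_full, E0, phi0_deg, dphi_deg, dE):
    """Patent criterion: (E_full - E0)(1 + cos(phi0 + dphi)) > dE/2."""
    phase = math.radians(phi0_deg + dphi_deg)
    return (E_full - E0) * (1 + math.cos(phase)) > dE / 2

# hypothetical numbers: 10 MeV injection, 110 MeV full energy,
# 5 deg initial offset, 20 deg added recovery-phase offset, 5 MeV full spread
ok1 = recovery_condition(110.0, 10.0, 5.0, 20.0, 5.0)    # True
ok2 = recovery_condition(110.0, 10.0, 5.0, 175.0, 5.0)   # phase near 180 deg -> False
print(ok1, ok2)
```

Intuitively, as the recovery phase approaches the waveform trough (180°), the factor 1 + cos(…) goes to zero and the available energy headroom can no longer cover half the energy spread.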

  9. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. Derivation of the Statistical Distribution of the Mass Peak Centroids of Mass Spectrometers Employing Analog-to-Digital Converters and Electron Multipliers

    DOE PAGES

    Ipsen, Andreas

    2017-02-03

Here, the mass peak centroid is a quantity that is at the core of mass spectrometry (MS). However, despite its central status in the field, models of its statistical distribution are often chosen quite arbitrarily and without attempts at establishing a proper theoretical justification for their use. Recent work has demonstrated that for mass spectrometers employing analog-to-digital converters (ADCs) and electron multipliers, the statistical distribution of the mass peak intensity can be described via a relatively simple model derived essentially from first principles. Building on this result, the following article derives the corresponding statistical distribution for the mass peak centroids of such instruments. It is found that for increasing signal strength, the centroid distribution converges to a Gaussian distribution whose mean and variance are determined by physically meaningful parameters and which in turn determine bias and variability of the m/z measurements of the instrument. Through the introduction of the concept of “pulse-peak correlation”, the model also elucidates the complicated relationship between the shape of the voltage pulses produced by the preamplifier and the mean and variance of the centroid distribution. The predictions of the model are validated with empirical data and with Monte Carlo simulations.

  11. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Besides, two optimum threshold selection methods are introduced: TmCoG (using m % of the maximum intensity of the spot as the threshold), and TkCoG (using μn + κσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise). Firstly, their impact on the detection error under various SNR conditions is simulated to find how to decide the value of κ or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
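The two threshold rules can be sketched as below on a simulated spot; the spot position, noise level, and the choices m = 10 and κ = 3 are illustrative assumptions, not the paper's optimal values:

```python
import numpy as np

def cog(I):
    """Plain Center of Gravity of a 2-D intensity map."""
    ys, xs = np.mgrid[:I.shape[0], :I.shape[1]]
    s = I.sum()
    return (I * ys).sum() / s, (I * xs).sum() / s

def tm_cog(I, m=10.0):
    """TmCoG: subtract m% of the spot's maximum intensity, clip, then CoG."""
    return cog(np.clip(I - (m / 100.0) * I.max(), 0, None))

def tk_cog(I, noise, k=3.0):
    """TkCoG: threshold at mu_n + k*sigma_n estimated from a noise frame."""
    t = noise.mean() + k * noise.std()
    return cog(np.clip(I - t, 0, None))

rng = np.random.default_rng(3)
n = 64
ys, xs = np.mgrid[:n, :n].astype(float)
spot = 100.0 * np.exp(-((ys - 25.3) ** 2 + (xs - 40.6) ** 2) / (2 * 3.0 ** 2))
noise = 2.0 + rng.normal(0.0, 1.0, (n, n))   # background level + read noise
frame = spot + noise
raw = cog(frame)            # biased toward the grid centre by the background
tm = tm_cog(frame)          # thresholded estimates, close to (25.3, 40.6)
tk = tk_cog(frame, noise)
print(raw, tm, tk)
```

The raw CoG is pulled several pixels toward the frame centre by the uniform background, while either thresholding rule recovers the true spot position to a fraction of a pixel.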

  12. Structure and seasonal variations of the nocturnal mesospheric K layer at Arecibo

    NASA Astrophysics Data System (ADS)

    Yue, Xianchang; Friedman, Jonathan S.; Wu, Xiongbin; Zhou, Qihou H.

    2017-07-01

    We present the seasonal variations of the nocturnal mesospheric potassium (K) layer at Arecibo, Puerto Rico (18.35°N, 66.75°W) from 160 nights of K Doppler lidar observations between December 2003 and January 2010, during which solar activity was mostly low. The background temperature was also measured simultaneously by the lidar and shows a strong semiannual oscillation with maxima occurring during equinoxes at all altitudes. The annual mean K density profile is approximately Gaussian with a peak altitude of 91.7 km. The K column abundance and the centroid height have strong semiannual variations, with maxima at the solstices. Both parameters are negatively correlated with the mean background temperature, with a correlation coefficient < -0.5. The root-mean-square (RMS) width has a distinct annual oscillation with the largest width occurring in May. The seasonal variation of the centroid height is similar to that of the Fe layer at the same site. The seasonal temperature variation indicates significantly enhanced wave-induced downward transport for both species during spring and autumn. This explains the metal layer centroid height and column abundance variations at Arecibo and provides a general mechanism to account for the seasonal variations in the centroid height of all metal species measured at low-latitude and midlatitude sites.
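    The layer parameters discussed above (column abundance, centroid height, RMS width) are simply the first few moments of the density profile. A minimal sketch, using a synthetic Gaussian profile rather than actual lidar data:

```python
import math

def layer_moments(z, density):
    """Column abundance, centroid height and RMS width of a layer
    from a density profile sampled on a uniform altitude grid."""
    dz = z[1] - z[0]
    abundance = sum(density) * dz                      # zeroth moment
    centroid = sum(zi * ni for zi, ni in zip(z, density)) * dz / abundance
    var = sum((zi - centroid) ** 2 * ni
              for zi, ni in zip(z, density)) * dz / abundance
    return abundance, centroid, math.sqrt(var)

# Synthetic Gaussian K-layer profile peaking at 91.7 km (width assumed)
z = [80.0 + 0.1 * i for i in range(251)]              # 80-105 km grid
profile = [math.exp(-((zi - 91.7) ** 2) / (2 * 4.5 ** 2)) for zi in z]
col, zc, rms = layer_moments(z, profile)
```

    With real profiles the same moments are computed after background subtraction; truncating the profile at the grid edges slightly biases the centroid and RMS width, which is visible even in this noise-free example.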

  13. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    PubMed

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for a conserved residue, as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated, allowing millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances, and angles have also been included, permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.

  14. Statistical Properties of Line Centroid Velocity Increments in the rho Ophiuchi Cloud

    NASA Technical Reports Server (NTRS)

    Lis, D. C.; Keene, Jocelyn; Li, Y.; Phillips, T. G.; Pety, J.

    1998-01-01

    We present a comparison of histograms of CO (2-1) line centroid velocity increments in the rho Ophiuchi molecular cloud with those computed for spectra synthesized from a three-dimensional, compressible, but non-star-forming and non-gravitating hydrodynamic simulation. Histograms of centroid velocity increments in the rho Ophiuchi cloud show clearly non-Gaussian wings, similar to those found in histograms of velocity increments and derivatives in experimental studies of laboratory and atmospheric flows, as well as in numerical simulations of turbulence. The magnitude of these wings increases monotonically with decreasing separation, down to the angular resolution of the data. This behavior is consistent with that found in the phase of the simulation which has most of the properties of incompressible turbulence. The time evolution of the magnitude of the non-Gaussian wings in the histograms of centroid velocity increments in the simulation is consistent with the evolution of the vorticity in the flow. However, we cannot exclude the possibility that the wings are associated with the shock interaction regions. Moreover, in an active star-forming region like the rho Ophiuchi cloud, the effects of shocks may be more important than in the simulation. However, being able to identify shock interaction regions in the interstellar medium is also important, since numerical simulations show that vorticity is generated in shock interactions.
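    For readers unfamiliar with the quantity being histogrammed: the line centroid velocity is the intensity-weighted mean velocity of a spectrum, and the increments are differences of the resulting centroid map at a fixed spatial lag. A toy sketch with synthetic data (not the CO (2-1) observations themselves):

```python
def centroid_velocity(v, T):
    """Intensity-weighted mean velocity of one spectral line,
    C = sum(v_i * T_i) / sum(T_i)."""
    return sum(vi * Ti for vi, Ti in zip(v, T)) / sum(T)

def centroid_increments(cmap, lag):
    """Differences of a centroid-velocity map at a fixed spatial lag
    (here along one axis of a 2-D map); histograms of these values
    are the statistic examined in the abstract."""
    return [row[i + lag] - row[i]
            for row in cmap for i in range(len(row) - lag)]

# Toy 1-D spectrum: triangular line profile centred at 2 km/s
v = [0.5 * i for i in range(17)]                 # 0-8 km/s channels
T = [max(0.0, 1.0 - abs(vi - 2.0) / 1.5) for vi in v]
c = centroid_velocity(v, T)

# Toy centroid map (one row) and its lag-1 increments
incs = centroid_increments([[1.0, 2.0, 4.0]], 1)
```

    In practice the centroid is computed per line of sight over a noise-clipped velocity window, and the increment histograms are built over all map positions for each lag.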

  15. Derivation of the Statistical Distribution of the Mass Peak Centroids of Mass Spectrometers Employing Analog-to-Digital Converters and Electron Multipliers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ipsen, Andreas

    Here, the mass peak centroid is a quantity that is at the core of mass spectrometry (MS). However, despite its central status in the field, models of its statistical distribution are often chosen quite arbitrarily and without attempts at establishing a proper theoretical justification for their use. Recent work has demonstrated that for mass spectrometers employing analog-to-digital converters (ADCs) and electron multipliers, the statistical distribution of the mass peak intensity can be described via a relatively simple model derived essentially from first principles. Building on this result, the following article derives the corresponding statistical distribution for the mass peak centroids of such instruments. It is found that for increasing signal strength, the centroid distribution converges to a Gaussian distribution whose mean and variance are determined by physically meaningful parameters and which in turn determine bias and variability of the m/z measurements of the instrument. Through the introduction of the concept of “pulse-peak correlation”, the model also elucidates the complicated relationship between the shape of the voltage pulses produced by the preamplifier and the mean and variance of the centroid distribution. The predictions of the model are validated with empirical data and with Monte Carlo simulations.

  16. Precision targeting in guided munition using IR sensor and MmW radar

    NASA Astrophysics Data System (ADS)

    Sreeja, S.; Hablani, H. B.; Arya, H.

    2015-10-01

    Conventional munitions are not guided with sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a Precision Guided Munition (PGM) equipped with an infrared sensor and a millimeter wave radar [IR and MmW, for short]. Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight rate to intercept it, an Extended Kalman Filter is composed whose state vector consists of cascaded state vectors of missile dynamics and target dynamics. The line-of-sight angle measurement from the infrared seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz. At 10 Hz, centroids of four consecutive images are averaged, yielding a time-averaged centroid, implying some measurement delay. The miss distance achieved by including image processing delays is 1.45 m.

  17. Precision targeting in guided munition using infrared sensor and millimeter wave radar

    NASA Astrophysics Data System (ADS)

    Sulochana, Sreeja; Hablani, Hari B.; Arya, Hemendra

    2016-07-01

    Conventional munitions are not guided with sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a precision guided munition equipped with an infrared (IR) sensor and a millimeter wave radar (MmW). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight (LOS) rate to intercept it, an extended Kalman filter is composed whose state vector consists of cascaded state vectors of missile dynamics and target dynamics. The LOS angle measurement from the IR seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz. At 10 Hz, centroids of four consecutive images are averaged, yielding a time-averaged centroid, implying some measurement delay. The miss distance achieved by including image processing delays is 1.45 m.
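    The time-averaging step described in this abstract (and its conference version above) can be sketched as follows. The 40 Hz frame rate and block-of-four averaging are from the abstract; the numbers in the example are invented:

```python
def time_averaged_centroid(centroids):
    """Average of four consecutive image-plane centroids.

    The seeker produces centroids at 40 Hz; averaging blocks of four
    yields a 10 Hz measurement whose effective time stamp lags the
    newest frame, i.e. the averaging itself introduces a delay.
    """
    assert len(centroids) == 4
    xs = [c[0] for c in centroids]
    ys = [c[1] for c in centroids]
    return sum(xs) / 4.0, sum(ys) / 4.0

frames_40hz = [(1.0, 0.0), (1.1, 0.1), (1.2, 0.2), (1.3, 0.3)]
meas_10hz = time_averaged_centroid(frames_40hz)

# Effective delay: the average corresponds to the mid-point of the
# four frames, i.e. 1.5 frame intervals behind the newest frame.
delay_s = 1.5 / 40.0
```

    In the guidance filter this delay has to be accounted for, since the averaged centroid describes where the target was, not where it is at the measurement epoch.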

  18. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA: MCIR is expected to create less noise than RTA because in RTA the separately reconstructed images for each respiratory frame are severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but exhibit the characteristic salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median root prior incorporated in the ordered-subsets maximum a posteriori one-step-late algorithm. In this investigation we demonstrate that MCIR with proper regularization parameters reconstructs lesions with less bias and root mean square error, and with CNR and standard deviation similar to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm and 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation, as a proper level of regularization reduces both bias and mean square error.

  19. Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.

    PubMed

    Crăciun, Cora

    2014-08-01

    CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective. However, some grids with a similar degree of homogeneity generate simulated spectra of different quality. This paper evaluates the grids from an EPR perspective, by defining two metrics that depend on the spin system characteristics and the grid's Voronoi tessellation. The first metric determines whether the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies whether adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics bring more information than the homogeneity quantities and are better related to the grids' EPR behaviour for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, using the original or magnetic field-constrained variants of the Spherical Centroidal Voronoi Tessellation method. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Sparse Poisson noisy image deblurring.

    PubMed

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with very good resolution (several hundred nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter that weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators that takes advantage of confocal images. Building on these estimators, we then propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the anti-log likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.

  1. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model, with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.

  2. Enhanced transformation of incidentally learned knowledge into explicit memory by dopaminergic modulation.

    PubMed

    Clos, Mareike; Sommer, Tobias; Schneider, Signe L; Rose, Michael

    2018-01-01

    During incidental learning, statistical regularities are extracted from the environment without the intention to learn. Acquired implicit memory of these regularities can affect behavior in the absence of awareness. However, conscious insight into the underlying regularities can also develop during learning. Such emergence of explicit memory is an important learning mechanism that is assumed to involve prediction errors in the striatum and to be dopamine-dependent. Here we directly tested this hypothesis by manipulating dopamine levels during incidental learning in a modified serial reaction time task (SRTT) featuring a hidden regular sequence of motor responses, in a placebo-controlled between-group study. Awareness of the sequential regularity was subsequently assessed using cued generation and additionally verified using free recall. The results demonstrated that dopaminergic modulation nearly doubled the amount of explicit sequence knowledge that emerged during learning in comparison to the placebo group. This strong effect clearly argues for a causal role of dopamine-dependent processing in the development of awareness of sequential regularities during learning.

  3. PyCCF: Python Cross Correlation Function for reverberation mapping studies

    NASA Astrophysics Data System (ADS)

    Sun, Mouyuan; Grier, C. J.; Peterson, B. M.

    2018-05-01

    PyCCF emulates a Fortran program written by B. Peterson for use in reverberation mapping. The code cross-correlates two light curves that are unevenly sampled, using linear interpolation, and measures the peak and centroid of the cross-correlation function. In addition, it is possible to run Monte Carlo iterations using flux randomization and random subset selection (RSS) to produce cross-correlation centroid distributions that estimate the uncertainties in the cross-correlation results.
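    A pared-down sketch of the interpolated cross-correlation idea (PyCCF itself is the reference implementation and additionally supports flux randomization and RSS): light curve 2 is linearly interpolated at shifted times, a Pearson r is computed per trial lag, and the centroid is taken over lags where r exceeds a fraction (commonly 0.8, assumed here) of the peak value.

```python
import math

def interp(t, x, tq):
    """Linear interpolation of (t, x) at query time tq (t ascending);
    queries outside the range are clamped to the end values."""
    if tq <= t[0]:
        return x[0]
    if tq >= t[-1]:
        return x[-1]
    for i in range(1, len(t)):
        if tq <= t[i]:
            f = (tq - t[i - 1]) / (t[i] - t[i - 1])
            return x[i - 1] + f * (x[i] - x[i - 1])

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

def ccf_peak_centroid(t1, x1, t2, x2, lags, frac=0.8):
    """Interpolated CCF: correlate x1 with x2 evaluated at t1 + tau;
    return the peak lag and the centroid over lags with r >= frac*rmax."""
    r = [pearson(x1, [interp(t2, x2, ti + tau) for ti in t1]) for tau in lags]
    rmax = max(r)
    peak_lag = lags[r.index(rmax)]
    sel = [(tau, ri) for tau, ri in zip(lags, r) if ri >= frac * rmax]
    centroid = sum(tau * ri for tau, ri in sel) / sum(ri for _, ri in sel)
    return peak_lag, centroid

# Toy example: a Gaussian flare echoed 2 time units later
t = [float(i) for i in range(21)]
x1 = [math.exp(-((ti - 10.0) ** 2) / 8.0) for ti in t]
x2 = [math.exp(-((ti - 12.0) ** 2) / 8.0) for ti in t]
lags = [0.25 * i for i in range(17)]          # trial lags 0 to 4
peak, centroid = ccf_peak_centroid(t, x1, t, x2, lags)
```

    The centroid is generally preferred over the peak for quoting reverberation lags because it is less sensitive to the sampling of the lag grid.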

  4. Immune Centroids Over-Sampling Method for Multi-Class Classification

    DTIC Science & Technology

    2015-05-22

    ... recognize specific antigens. The response of a receptor to an antigen can activate its hosting B-cell. The activated B-cell then proliferates and ... modifying N.K. Jerne’s theory. The theory states that in a pre-existing group of lymphocytes (specifically B cells), a specific antigen only ... the clusters of each small class, which have high data density, called global immune centroids over-sampling (denoted as Global-IC). Specifically ...

  5. Deep neural network-based domain adaptation for classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Ma, Li; Song, Jiazhen

    2017-10-01

    We investigate the effectiveness of deep neural networks for cross-domain classification of remote sensing images. In the network, class centroid alignment is utilized as a domain adaptation strategy, enabling the network to transfer knowledge from the source domain to the target domain on a per-class basis. Since predicted labels of target data must be used to estimate the centroid of each class, we use overall centroid alignment as a coarse domain adaptation method to improve the estimation accuracy. In addition, the rectified linear unit is used as the activation function to produce sparse features, which may improve the separation capability. The proposed network can provide both aligned features and an adaptive classifier, as well as obtain label-free classification of target-domain data. Experimental results using Hyperion, NCALM, and WorldView-2 remote sensing images demonstrate the effectiveness of the proposed approach.

  6. Comparative Analysis of Document level Text Classification Algorithms using R

    NASA Astrophysics Data System (ADS)

    Syamala, Maganti; Nalini, N. J., Dr; Maguluri, Lakshamanaphaneendra; Ragupathy, R., Dr.

    2017-08-01

    Over the past few decades, tremendous volumes of data have become available on the Internet, in both structured and unstructured form. Given this exponential growth of information, there is an urgent need for text classifiers. Text mining is an interdisciplinary field that draws on information retrieval, data mining, machine learning, statistics and computational linguistics. To handle this situation, a wide range of supervised learning algorithms has been introduced. Among these, K-Nearest Neighbor (KNN) is the simplest and one of the most efficient classifiers in the text classification family. But KNN suffers from imbalanced class distributions and noisy term features. To cope with this challenge, we use document-based centroid dimensionality reduction (CentroidDR) implemented in R. By combining these two text classification techniques, KNN and centroid classifiers, we propose a scalable and effective flat classifier, called MCenKNN, which performs substantially better than CenKNN.
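    The centroid-classifier building block referred to above reduces each class to a single mean term-frequency vector and assigns documents by cosine similarity. A minimal sketch with toy vectors (the actual MCenKNN hybrid combines this reduction with KNN; the data below are invented):

```python
def train_centroids(docs, labels):
    """Per-class centroid of term-frequency vectors: the training set
    is reduced from 'all documents' to 'one vector per class'."""
    sums, counts = {}, {}
    for vec, lab in zip(docs, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for j, v in enumerate(vec):
            acc[j] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc]
            for lab, acc in sums.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def predict(vec, centroids):
    """Assign the class whose centroid is most cosine-similar."""
    return max(centroids, key=lambda lab: cosine(vec, centroids[lab]))

# Toy term-frequency vectors over a 3-term vocabulary
docs = [[2.0, 1.0, 0.0], [3.0, 0.0, 0.0], [0.0, 1.0, 2.0], [0.0, 0.0, 3.0]]
labels = ["sport", "sport", "tech", "tech"]
cents = train_centroids(docs, labels)
label = predict([1.0, 1.0, 0.0], cents)
```

    The appeal of this reduction for imbalanced, noisy text data is that classification cost no longer depends on the number of training documents, only on the number of classes.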

  7. A motion detection system for AXAF X-ray ground testing

    NASA Technical Reports Server (NTRS)

    Arenberg, Jonathan W.; Texter, Scott C.

    1993-01-01

    The concept, implementation, and performance of the motion detection system (MDS) designed as a diagnostic for X-ray ground testing for AXAF are described. The purpose of the MDS is to measure the magnitude of a relative rigid body motion among the AXAF test optic, the X-ray source, and X-ray focal plane detector. The MDS consists of a point source, lens, centroid detector, transimpedance amplifier, and computer system. Measurement of the centroid position of the image of the optical point source provides a direct measure of the motions of the X-ray optical system. The outputs from the detector and filter/amplifier are digitized and processed using the calibration with a 50 Hz bandwidth to give the centroid's location on the detector. Resolution of 0.008 arcsec has been achieved by this system. Data illustrating the performance of the motion detection system are also presented.

  8. Assessment of auditory impression of the coolness and warmness of automotive HVAC noise.

    PubMed

    Nakagawa, Seiji; Hotehama, Takuya; Kamiya, Masaru

    2017-07-01

    Noise induced by a heating, ventilation and air conditioning (HVAC) system in a vehicle is an important factor that affects the comfort of the interior of a car cabin. Much effort has been devoted to reducing noise levels; however, there is a need for a new sound design that addresses the noise problem from a different point of view. In this study, focusing on the auditory impression of automotive HVAC noise concerning coolness and warmness, psychoacoustical listening tests were performed using a paired comparison technique under various room temperature conditions. Five stimuli were synthesized by stretching the spectral envelopes of recorded automotive HVAC noise to assess the effect of the spectral centroid, and were presented to normal-hearing subjects. Results show that the spectral centroid significantly affects the auditory impression concerning coolness and warmness; a higher spectral centroid induces a cooler auditory impression regardless of the room temperature.
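    The spectral centroid used as the stimulus-design parameter here is simply the amplitude-weighted mean frequency of the magnitude spectrum. A small sketch with a synthetic two-tone signal (pure-Python DFT, fine for short illustrative frames but O(N^2)):

```python
import math

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of a real signal, computed
    from the magnitude of its one-sided DFT."""
    n = len(signal)
    freqs, mags = [], []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * sample_rate / n)
        mags.append(math.hypot(re, im))
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total

fs = 8000
n = 400
# Two equal tones at 500 Hz and 1500 Hz: centroid lands midway
x = [math.sin(2 * math.pi * 500 * t / fs) + math.sin(2 * math.pi * 1500 * t / fs)
     for t in range(n)]
c = spectral_centroid(x, fs)
```

    Stretching the spectral envelope toward higher frequencies, as done for the stimuli in this study, raises this single number; compressing it lowers it.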

  9. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling (fractal) distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are found to be as low as 22 km in the south-west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
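    The conventional centroid method underlying this work estimates the top depth Zt from the high-wavenumber slope of the log amplitude spectrum, the centroid depth Z0 from the low-wavenumber slope of ln(A(k)/k), and then Zb = 2·Z0 − Zt. The sketch below uses a synthetic slab spectrum and omits the fractal (scaling-distribution) correction that the paper adds; the wavenumber windows are illustrative choices:

```python
import math

def slope(x, y):
    """Least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

# Synthetic radially averaged amplitude spectrum for a slab of random,
# uncorrelated sources with top Zt = 2 km and bottom Zb = 40 km:
#   A(k) proportional to exp(-k*Zt) - exp(-k*Zb)
Zt_true, Zb_true = 2.0, 40.0

def A(k):
    return math.exp(-k * Zt_true) - math.exp(-k * Zb_true)

k_hi = [0.2 + 0.02 * i for i in range(20)]     # high-wavenumber window
k_lo = [0.004 + 0.001 * i for i in range(8)]   # low-wavenumber window
                                               # (needs k*Zb small)

# Top depth from the high-wavenumber slope of ln A(k)
Zt = -slope(k_hi, [math.log(A(k)) for k in k_hi])
# Centroid depth from the low-wavenumber slope of ln(A(k)/k)
Z0 = -slope(k_lo, [math.log(A(k) / k) for k in k_lo])
Zb = 2 * Z0 - Zt                               # depth to bottom
```

    Even in this noise-free example the recovered Zb is biased slightly shallow because the low-wavenumber linearization of ln(A/k) is only approximate; the window choice is the main practical difficulty of the method.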

  10. Text String Detection from Natural Scenes by Structure-based Partition and Grouping

    PubMed Central

    Yi, Chucai; Tian, YingLi

    2012-01-01

    Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) Image partition to find text character candidates based on local gradient features and color uniformity of character components. 2) Character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method, and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset which contains text only in horizontal orientation. 
Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in non-horizontal orientations. PMID:21411405
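    The Hough-transform step above fits a text line through character centroids; for near-horizontal strings, a least-squares fit through the centroids gives the same orientation estimate plus a residual for rejecting candidates that do not belong to the string. An illustrative sketch (not the authors' implementation):

```python
def fit_text_line(centroids):
    """Least-squares line y = a*x + b through character centroids;
    a simplified stand-in for Hough-based line fitting, usable when
    the text string is not near-vertical."""
    xs = [c[0] for c in centroids]
    ys = [c[1] for c in centroids]
    n = len(centroids)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def residual(centroid, a, b):
    """Perpendicular distance of a candidate centroid to the fitted
    line; large residuals flag characters outside the string."""
    x, y = centroid
    return abs(a * x - y + b) / (a * a + 1) ** 0.5

# Three collinear character centroids on a slanted text string
cents = [(10, 20), (30, 30), (50, 40)]
a, b = fit_text_line(cents)
on_line = residual((50, 40), a, b)     # ~0 for a member centroid
off_line = residual((50, 41), a, b)    # larger for an outlier
```

    A Hough transform handles arbitrary orientations (including vertical strings) and multiple lines at once, which is why the paper uses it instead of a single least-squares fit.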

  11. Intra- and Interspecific Interactions as Proximate Determinants of Sexual Dimorphism and Allometric Trajectories in the Bottlenose Dolphin Tursiops truncatus (Cetacea, Odontoceti, Delphinidae)

    PubMed Central

    2016-01-01

    Feeding adaptation, social behaviour, and interspecific interactions related to sexual dimorphism and allometric growth are particularly challenging to investigate in the highly sexually monomorphic Delphinidae. We used geometric morphometrics to extensively explore sexual dimorphism and ontogenetic allometry of different projections of the skull and the mandible of the bottlenose dolphin Tursiops truncatus. Two-dimensional landmarks were recorded on the dorsal, ventral, lateral, and occipital views of the skull, and on the lateral view of the left and the right mandible of 104 specimens from the Mediterranean and the North Seas, differing in environmental conditions and degree of interspecific association. Landmark configurations were transformed, standardized and superimposed through a Generalized Procrustes Analysis. Size and shape differences between adult males and females were evaluated through ANOVA on centroid size, Procrustes ANOVA on Procrustes distances, and MANOVA on Procrustes coordinates, respectively. Ontogenetic allometry was investigated by multivariate regression of shape coordinates on centroid size in the largest homogeneous sample, from the North Sea. Results evidenced sexually dimorphic asymmetric traits detected only in the adults of the North Sea bottlenose dolphins living in monospecific associations, with females bearing a marked incision of the cavity hosting the left tympanic bulla. These differences were related to a more refined echolocation system that likely enhances the exploitation of local resources by philopatric females. Distinct shapes in immature versus mature stages, and asymmetric changes in postnatal allometry of dorsal and occipital traits, suggest that differences between males and females are established early during growth. Allometric growth trajectories differed between males and females for the ventral view of the skull.
Allometric trajectories differed among projections of skull and mandible, and were related to dietary shifts experienced by subadults and adults. PMID:27764133

  12. Text string detection from natural scenes by structure-based partition and grouping.

    PubMed

    Yi, Chucai; Tian, YingLi

    2011-09-01

    Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) an adjacent character grouping method and 2) a text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text strings. The text line grouping method performs a Hough transform to fit text lines among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is represented by a rectangular region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out at multiple scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods in detecting text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves, containing text strings in nonhorizontal orientations.
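    The text line grouping step described above can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal Hough-style voting scheme over character centroids, with the bin resolutions and the three-character minimum chosen here as assumptions:

```python
import numpy as np

def hough_line_groups(centroids, n_theta=180, rho_res=2.0, min_votes=3):
    """Group 2-D character centroids into candidate text lines with a
    Hough transform: every centroid votes for all (theta, rho) line
    bins passing through it, and bins collecting at least `min_votes`
    centroids (three, per the paper's assumption) become text lines."""
    pts = np.asarray(centroids, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(pts).sum(axis=1).max() + 1.0  # bound on |rho|
    votes = {}
    for i, (x, y) in enumerate(pts):
        for t_idx, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            r_idx = int(round((rho + rho_max) / rho_res))
            votes.setdefault((t_idx, r_idx), set()).add(i)
    # each hit is (theta, rho, indices of the centroids on that line)
    return [(thetas[t_idx], r_idx * rho_res - rho_max, sorted(members))
            for (t_idx, r_idx), members in votes.items()
            if len(members) >= min_votes]
```

The bounding rectangle of each returned member set would then give the text string region described in the abstract.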

  13. Verbal Inflectional Morphology in L1 and L2 Spanish: A Frequency Effects Study Examining Storage versus Composition

    PubMed Central

    Bowden, Harriet Wood; Gelfand, Matthew P.; Sanz, Cristina; Ullman, Michael T.

    2009-01-01

    This study examines the storage vs. composition of Spanish inflected verbal forms in L1 and L2 speakers of Spanish. L2 participants were selected to have mid-to-advanced proficiency, high classroom experience, and low immersion experience, typical of medium-to-advanced foreign language learners. Participants were shown the infinitival forms of verbs from either Class I (the default class, which takes new verbs) or Classes II and III (non-default classes), and were asked to produce either first-person singular present-tense or imperfect forms, in separate tasks. In the present tense, the L1 speakers showed inflected-form frequency effects (i.e., higher-frequency forms were produced faster, which is taken as a reflection of storage) for stem-changing (irregular) verb-forms from both Class I (e.g., pensar-pienso) and Classes II and III (e.g., perder-pierdo), as well as for non-stem-changing (regular) forms in Classes II/III (e.g., vender-vendo), in which the regular transformation does not appear to constitute a default. In contrast, Class I regulars (e.g., pescar-pesco), whose non-stem-changing transformation constitutes a default (e.g., it is applied to new verbs), showed no frequency effects. L2 speakers showed frequency effects for all four conditions (Classes I and II/III, regulars and irregulars). In the imperfect tense, the L1 speakers showed frequency effects for Class II/III (-ía-suffixed) but not Class I (-aba-suffixed) forms, even though both involve non-stem-changing (regular) default transformations. The L2 speakers showed frequency effects for both types of forms. The pattern of results was not explained by a wide range of potentially confounding experimental and statistical factors, and does not appear to be compatible with single-mechanism models, which argue that all linguistic forms are learned and processed in associative memory. The findings are consistent with a dual-system view in which both verb class and regularity influence the storage vs. composition of inflected forms. Specifically, the data suggest that in L1, inflected verbal forms are stored (as evidenced by frequency effects) unless they are both from Class I and undergo non-stem-changing default transformations. In contrast, the findings suggest that at least these L2 participants may store all inflected verb-forms. Taken together, the results support dual-system models of L1 and L2 processing in which, at least at mid-to-advanced L2 proficiency and lower levels of immersion experience, the processing of rule-governed forms may depend not on L1 combinatorial processes, but instead on memorized representations. PMID:20419083

  14. TOUCHSTONE II: a new approach to ab initio protein structure prediction.

    PubMed

    Zhang, Yang; Kolinski, Andrzej; Skolnick, Jeffrey

    2003-08-01

    We have developed a new combined approach for ab initio protein structure prediction. The protein conformation is described as a lattice chain connecting Cα atoms, with attached Cβ atoms and side-chain centers of mass. The model force field includes various short-range and long-range knowledge-based potentials derived from a statistical analysis of the regularities of protein structures. The combination of these energy terms is optimized through the maximization of correlation for 30 × 60,000 decoys between the root mean square deviation (RMSD) to native and energies, as well as the energy gap between native and the decoy ensemble. To accelerate the conformational search, a newly developed parallel hyperbolic sampling algorithm with a composite movement set is used in the Monte Carlo simulation processes. We exploit this strategy to successfully fold 41/100 small proteins (36–120 residues) with predicted structures having an RMSD from native below 6.5 Å in the top five cluster centroids. To fold larger proteins as well as to improve the folding yield of small proteins, we incorporate into the basic force field side-chain contact predictions from our threading program PROSPECTOR, where homologous proteins were excluded from the database. With these threading-based restraints, the program can fold 83/125 test proteins (36–174 residues) with structures having an RMSD to native below 6.5 Å in the top five cluster centroids. This shows the significant improvement of folding by using predicted tertiary restraints, especially when the accuracy of side-chain contact prediction is >20%. For native fold selection, we introduce quantities dependent on the cluster density and the combination of energy and free energy, which show a higher discriminative power to select the native structure than the previously used cluster energy or cluster size, and which can be used in native structure identification in blind simulations. These procedures are readily automated and are being implemented on a genomic scale.
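    The RMSD-to-native criterion used throughout the record is a standard quantity: the root mean square deviation between corresponding Cα coordinates after optimal rigid superposition (the Kabsch algorithm). A generic numpy sketch, not the TOUCHSTONE II code:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two conformations after optimal superposition.
    P and Q are (N, 3) arrays of corresponding C-alpha coordinates."""
    P = P - P.mean(axis=0)            # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                       # 3x3 covariance of the two point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation Q ~ R P
    P_rot = P @ R.T
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))
```

A rotated and translated copy of a structure should give an RMSD of essentially zero, which is a convenient sanity check for any implementation.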

  15. A pipeline design of a fast prime factor DFT on a finite field

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun

    1988-01-01

    A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field, GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
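    The Fourier-like transform over a finite field and its use for cyclic convolution can be sketched in a few lines. The sketch below is a naive O(N²) number-theoretic transform over the prime field GF(31) with N = 30 (3 is a primitive 30th root of unity mod 31); the prime-factor decomposition, the pipeline structure, and the extension field GF(q^n) of the paper are not reproduced here:

```python
def ntt(a, omega, p):
    """Naive N-point DFT over GF(p); omega must be a primitive
    N-th root of unity mod p."""
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A, omega, p):
    """Inverse transform: powers of omega^{-1}, scaled by N^{-1} mod p."""
    n = len(A)
    inv_n = pow(n, p - 2, p)        # Fermat inverse of N
    inv_w = pow(omega, p - 2, p)    # Fermat inverse of omega
    return [(inv_n * sum(A[j] * pow(inv_w, i * j, p) for j in range(n))) % p
            for i in range(n)]
```

As in the complex case, pointwise multiplication of the transforms computes a length-30 cyclic convolution exactly, with no rounding error, provided the true coefficients stay below p.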

  16. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurements and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of refractivity.
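    The core idea of fitting the forward Abel transform through a regularized optimization, rather than applying the inverse transform directly, can be sketched numerically. This hypothetical minimal example discretizes the forward transform on a staggered grid and uses a plain Tikhonov penalty in place of the paper's error-covariance formulation and adjoint iteration:

```python
import numpy as np

def forward_abel_matrix(n, dr):
    """Discretized forward Abel transform:
    g(y_i) = 2 * sum_{r_j > y_i} f(r_j) * r_j / sqrt(r_j^2 - y_i^2) * dr,
    with r_j = (j + 0.5)*dr and y_i = i*dr, the half-cell stagger
    sidestepping the integrable singularity at r = y."""
    r = (np.arange(n) + 0.5) * dr
    y = np.arange(n) * dr
    A = np.zeros((n, n))
    for i in range(n):
        j = r > y[i]
        A[i, j] = 2.0 * r[j] * dr / np.sqrt(r[j] ** 2 - y[i] ** 2)
    return A, r, y

def regularized_inverse(A, g, lam):
    """Variational solution of A f = g: minimizes
    ||A f - g||^2 + lam * ||f||^2.  With noisy g, lam trades data fit
    against stability; here it is kept tiny for a clean roundtrip."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
```

A noiseless roundtrip (transform a known profile forward, then invert variationally) recovers the profile, which is the basic consistency check before adding noise models.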

  17. The Study Of Optometry Apparatus Of Laser Speckles

    NASA Astrophysics Data System (ADS)

    Bao-cheng, Wang; Kun, Yao; Xiu-qing, Wu; Chang-ying, Long; Jia-qi, Shi; Shi-zhong, Shi

    1988-01-01

    Based on the regularity of laser speckle movement, a method for examining uncorrected eyes is established. An apparatus combining a microcomputer with an optical transformation stage was built; its practical performance is excellent.

  18. Quantum nuclear effects in water using centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kondratyuk, N. D.; Norman, G. E.; Stegailov, V. V.

    2018-01-01

    The quantum nuclear effects in water are studied using the method of centroid molecular dynamics (CMD). The aim is the calibration of the CMD implementation in LAMMPS. The calculated intramolecular energy, atomic gyration radii and radial distribution functions are compared with previous works. This work is intended as a step toward resolving the discrepancy between simulation results and experimental data for liquid n-alkane properties observed in our previous works.

  19. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
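    The skewness target driving the genetic algorithm is inexpensive to evaluate: well-phased protein maps concentrate density in atoms and so have a positively skewed value distribution. A generic sketch of the statistic (our own, not the SISA code):

```python
import numpy as np

def map_skewness(rho):
    """Sample skewness of electron-density values; higher (more
    positive) skew indicates a more peaked, protein-like map."""
    rho = np.asarray(rho, dtype=float).ravel()
    d = rho - rho.mean()
    return float((d ** 3).mean() / (d ** 2).mean() ** 1.5)
```

In an optimization loop like the one described, candidate phase sets would be scored by computing the map and maximizing this quantity.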

  20. Improving experimental phases for strong reflections prior to density modification

    DOE PAGES

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...

    2013-09-20

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  1. An adaptive tracker for ShipIR/NTCS

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.

    2015-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank and select the best candidate target, and gate the selected target to further improve tracker performance. This paper will describe a new adaptive tracker algorithm added to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). The new adaptive tracking algorithm is an optional feature used with any of the existing internal NTCS or user-defined seeker algorithms (e.g., binary centroid, intensity centroid, and threshold intensity centroid). The algorithm segments the detected pixels into clusters, and the smallest set of clusters that meets the detection criterion is obtained by using a knapsack algorithm to identify the set of clusters that should not be used. The rectangular area containing the chosen clusters defines an inner boundary, from which a weighted centroid is calculated as the aim-point. A track-gate is then positioned around the clusters, taking into account the rate of change of the bounding area and compensating for any gimbal displacement. A sequence of scenarios is used to test the new tracking algorithm on a generic unclassified DDG ShipIR model, with and without flares, and demonstrate how some of the key seeker signals are impacted by both the ship and flare intrinsic signatures.
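    The aim-point step above reduces to an intensity-weighted centroid over the detected pixels. A simplified stand-in (the cluster segmentation, knapsack selection, and gating of the actual tracker are omitted):

```python
import numpy as np

def weighted_centroid(image, threshold=0.0):
    """Intensity-weighted centroid (x, y) of above-threshold pixels,
    a simplified sketch of a seeker aim-point calculation."""
    ys, xs = np.nonzero(image > threshold)   # rows, cols of hot pixels
    w = image[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

Binary, intensity, and threshold-intensity centroid seekers differ only in how the weights `w` and the threshold are chosen.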

  2. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMSs can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g. modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on fractal distribution has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region and selected 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are found to be as shallow as the middle crust in the southwest Deccan trap and probably deeper than the Moho in the Chhattisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places shallower than the Moho. The DBMS values indicate the complex nature of the Indian crust.
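    The conventional centroid method that the record modifies can be sketched from its two spectral slope fits: the top depth Zt from the high-wavenumber decay of the amplitude spectrum, the centroid depth Zo from the low-wavenumber decay of the spectrum divided by wavenumber, and Zb = 2*Zo - Zt. The band limits and the synthetic slab spectrum below are illustrative assumptions, and the fractal correction of the paper is not included:

```python
import numpy as np

def centroid_dbms(k, sqrt_power, low, high):
    """Conventional centroid estimate of the depth to the bottom of
    magnetic sources.  Zt is minus the slope of ln(sqrt(P)) over the
    high-wavenumber band, Zo minus the slope of ln(sqrt(P)/k) over the
    low-wavenumber band, and Zb = 2*Zo - Zt.  Returns (Zb, Zt)."""
    lo = (k >= low[0]) & (k <= low[1])
    hi = (k >= high[0]) & (k <= high[1])
    z_top = -np.polyfit(k[hi], np.log(sqrt_power[hi]), 1)[0]
    z_cen = -np.polyfit(k[lo], np.log(sqrt_power[lo] / k[lo]), 1)[0]
    return 2.0 * z_cen - z_top, z_top
```

On a spectrum synthesized from a uniform slab model (centroid 20 km, half-thickness 10 km, so top 10 km and bottom 30 km), the fits recover the depths to within the small-wavenumber approximation error.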

  3. Alignment error of mirror modules of advanced telescope for high-energy astrophysics due to wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Zocchi, Fabio E.

    2017-10-01

    One of the approaches that is being tested for the integration of the mirror modules of the advanced telescope for high-energy astrophysics x-ray mission of the European Space Agency consists in aligning each module on an optical bench operated at an ultraviolet wavelength. The mirror module is illuminated by a plane wave and, in order to overcome diffraction effects, the centroid of the image produced by the module is used as a reference to assess the accuracy of the optical alignment of the mirror module itself. Among other sources of uncertainty, the wave-front error of the plane wave also introduces an error in the position of the centroid, thus affecting the quality of the mirror module alignment. The power spectral density of the position of the point spread function centroid is here derived from the power spectral density of the wave-front error of the plane wave in the framework of the scalar theory of Fourier diffraction. This allows the defining of a specification on the collimator quality used for generating the plane wave starting from the contribution to the error budget allocated for the uncertainty of the centroid position. The theory generally applies whenever Fourier diffraction is a valid approximation, in which case the obtained result is identical to that derived by geometrical optics considerations.

  4. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avkshtol, V; Tanny, S; Reddy, K

    Purpose: Stereotactic radiation therapy (SRT) provides an excellent alternative to embolization and surgical excision for the management of appropriately selected cerebral arteriovenous malformations (AVMs). The currently accepted standard for delineating AVMs is planar digital subtraction angiography (DSA). DSA can be used to acquire a 3D data set that preserves osseous structures (3D-DA) at the time of the angiography for SRT planning. Magnetic resonance imaging (MRI) provides an alternative noninvasive method of visualizing the AVM nidus with comparable spatial resolution. We utilized 3D-DA and T1 post-contrast MRI data to evaluate the differences in SRT target volumes. Methods: Four patients underwent 3D-DA and high-resolution MRI. 3D T1 post-contrast images were obtained in all three reconstruction planes. A planning CT was fused with MRI and 3D-DA data sets. The AVMs were contoured utilizing one of the image sets at a time. Target volume, centroid, and maximum and minimum dimensions were analyzed for each patient. Results: Targets delineated using post-contrast MRI demonstrated a larger mean volume. AVMs >2 cc were found to have a larger difference between MRI and 3D-DA volumes. Larger AVMs also demonstrated a smaller relative uncertainty in contour centroid position (1 mm). AVM targets <2 cc had smaller absolute differences in volume, but larger differences in contour centroid position (2.5 mm). MRI targets demonstrated a more irregular shape compared to 3D-DA targets. Conclusions: Our preliminary data supports the use of MRI alone to delineate AVM targets >2 cc. The greater centroid stability for AVMs >2 cc ensures accurate target localization during image fusion. The larger MRI target volumes did not result in prohibitively greater volumes of normal brain tissue receiving the prescription dose. The larger centroid instability for AVMs <2 cc precludes the use of MRI alone for target delineation. We recommend incorporating a 3D-DA for these patients.

  6. CRISPR/Cas9-Assisted Transformation-Efficient Reaction (CRATER) for Near-Perfect Selective Transformation

    NASA Technical Reports Server (NTRS)

    Rothschild, Lynn J.; Greenberg, Daniel T.; Takahashi, Jack R.; Thompson, Kirsten A.; Maheshwari, Akshay J.; Kent, Ryan E.; McCutcheon, Griffin; Shih, Joseph D.; Calvet, Charles; Devlin, Tyler D.; hide

    2015-01-01

    The CRISPR (Clustered, Regularly Interspaced, Short Palindromic Repeats)/Cas9 system has revolutionized genome editing by providing unprecedented DNA-targeting specificity. Here we demonstrate that this system can also be applied in vitro to fundamental cloning steps to facilitate efficient plasmid selection for transformation and selective gene insertion into plasmid vectors by cleaving unwanted plasmid byproducts with a single-guide RNA (sgRNA)-Cas9 nuclease complex. Using fluorescent and chromogenic proteins as reporters, we demonstrate that CRISPR/Cas9 cleavage excludes multiple plasmids as well as unwanted ligation byproducts, resulting in an unprecedented increase in the transformation success rate from approximately 20% to nearly 100%. Thus, this CRISPR/Cas9-Assisted Transformation-Efficient Reaction (CRATER) protocol is a novel, inexpensive, and convenient addition to conventional molecular cloning to achieve near-perfect selective transformation.

  7. Incipient fault diagnosis of power transformers using optical spectro-photometric technique

    NASA Astrophysics Data System (ADS)

    Hussain, K.; Karmakar, Subrata

    2015-06-01

    Power transformers are vital equipment in the network of power generation, transmission and distribution. Mineral oil in oil-filled transformers plays a very important role in the electrical insulation of the windings and the cooling of the transformer. As transformers are always under the influence of electrical and thermal stresses, incipient faults like partial discharge, sparking and arcing take place. As a result, the mineral oil deteriorates, and thereby premature failure of the transformer occurs, causing huge losses in terms of revenue and assets. Therefore, the transformer health condition has to be monitored continuously. Dissolved Gas Analysis (DGA) is extensively used for this purpose, but it has some drawbacks: it needs a carrier gas, regular instrument calibration, etc. To overcome these drawbacks, Ultraviolet (UV)-Visible and Fourier Transform Infrared (FTIR) spectro-photometric techniques are used as diagnostic tools for investigating degraded transformer oil affected by electrical, mechanical and thermal stresses. The technique has several advantages over the conventional DGA technique.

  8. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction

    PubMed Central

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure. Using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu’s segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. On one side, the accuracy of the algorithm was tested on the three image sets independently. The final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. Finally, we compared the algorithm performance with manual markings done by the six ophthalmologists. The agreement was determined between the ophthalmologists as well as the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636

  9. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction.

    PubMed

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure. Using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu's segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. On one side, the accuracy of the algorithm was tested on the three image sets independently. The final cup detection accuracy in terms of area and centroid was calculated to be 78.2% of 441 images. Finally, we compared the algorithm performance with manual markings done by the six ophthalmologists. The agreement was determined between the ophthalmologists as well as the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images.

  10. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray scale image has been reported in the literature, along with the advantages and drawbacks. Here, we present two independent methods; (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue, (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) using prior anatomical knowledge). Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can be potentially applied to gray-scale images from other imaging modalities, in bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421
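    The three-region clustering step described above can be illustrated with a generic 1-D k-means on voxel intensities, where the initialization carries the prior anatomical knowledge (typical CSF/GM/WM intensity levels). This is a minimal sketch, not the paper's auto centroid selection rule:

```python
import numpy as np

def kmeans_1d(values, centroids, n_iter=50):
    """Plain 1-D k-means: assign each intensity to the nearest
    centroid, then move each centroid to the mean of its members.
    `centroids` is the prior-informed initialization."""
    v = np.asarray(values, dtype=float)
    c = np.asarray(centroids, dtype=float).copy()
    for _ in range(n_iter):
        labels = np.argmin(np.abs(v[:, None] - c[None, :]), axis=1)
        for k in range(len(c)):
            if np.any(labels == k):       # skip empty clusters
                c[k] = v[labels == k].mean()
    return labels, c
```

With three well-separated intensity populations, the centroids converge to the population means regardless of a coarse initialization.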

  11. Low-light-level image super-resolution reconstruction based on iterative projection photon localization algorithm

    NASA Astrophysics Data System (ADS)

    Ying, Changsheng; Zhao, Peng; Li, Ye

    2018-01-01

    The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.

  12. JASMINE project: instrument design and centroiding experiment

    NASA Astrophysics Data System (ADS)

    Yano, Taihei; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki

    JASMINE will study the fundamental structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μarcsec at z = 14 mag. In this paper the instrument design (optics, detectors, etc.) of JASMINE is presented. We also show a CCD centroiding experiment for estimating positions of star images. The experimental result shows that the accuracy of estimated distances has a variance of less than 0.01 pixel.
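    The quoted sub-pixel accuracy can be reproduced in miniature: synthesize a pixelized Gaussian star image and recover its position with a moment (center-of-light) centroid. The Gaussian PSF, grid size, and noiseless setup are illustrative assumptions, not the JASMINE experiment itself:

```python
import numpy as np

def gaussian_star(shape, x0, y0, sigma, flux=1000.0):
    """Synthetic pixelized Gaussian point-spread function."""
    ys, xs = np.indices(shape)
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return flux * g / g.sum()

def star_centroid(image):
    """Moment (center-of-light) centroid of a star image, in pixels."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (xs * image).sum() / total, (ys * image).sum() / total
```

In the noiseless case the centroid lands within a small fraction of a pixel of the true position; detector noise and undersampling are what push real experiments toward the ~0.01-pixel level reported.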

  13. Self-aligning biaxial load frame

    DOEpatents

    Ward, M.B.; Epstein, J.S.; Lloyd, W.R.

    1994-01-18

    A self-aligning biaxial loading apparatus for use in testing the strength of specimens while maintaining a constant specimen centroid during the loading operation. The self-aligning biaxial loading apparatus consists of a load frame and two load assemblies for imparting two independent perpendicular forces upon a test specimen. The constant test specimen centroid is maintained by providing elements for linear motion of the load frame relative to a fixed cross head, and by alignment and linear motion elements of one load assembly relative to the load frame. 3 figures.

  14. Self-aligning biaxial load frame

    DOEpatents

    Ward, Michael B.; Epstein, Jonathan S.; Lloyd, W. Randolph

    1994-01-01

    A self-aligning biaxial loading apparatus for use in testing the strength of specimens while maintaining a constant specimen centroid during the loading operation. The self-aligning biaxial loading apparatus consists of a load frame and two load assemblies for imparting two independent perpendicular forces upon a test specimen. The constant test specimen centroid is maintained by providing elements for linear motion of the load frame relative to a fixed crosshead, and by alignment and linear motion elements of one load assembly relative to the load frame.

  15. Centroid-moment tensor solutions for July-September 2000

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Ekström, G.; Maternovskaya, N. N.

    2001-06-01

    Centroid-moment tensor (CMT) solutions are presented for 308 earthquakes that occurred during the third quarter of 2000. The solutions are obtained using corrections for aspherical earth structure represented by a whole mantle shear velocity model SH8/U4L8 of Dziewonski and Woodward [Acoustical Imaging, Vol. 19, Plenum Press, New York, 1992, p. 785]. A model of anelastic attenuation of Durek and Ekström [Bull. Seism. Soc. Am. 86 (1996) 144] is used to predict the decay of the waveforms.

  16. Picric acid-2,4,6-trichloroaniline (1/1).

    PubMed

    Wang, Wan-Qiang

    2011-04-01

    In the title adduct, C(6)H(4)Cl(3)N·C(6)H(3)N(3)O(7), the two benzene rings are almost coplanar, with a dihedral angle of 1.19 (1)° and an inter-ring centroid-centroid separation of 4.816 (2) Å. The crystal structure is stabilized by intermolecular N-H⋯O(nitro) hydrogen bonds, giving a chain structure. In addition, there are phenol-nitro O-H⋯O interactions.

  17. 4-[(1E)-3-(2,6-Dichloro-3-fluorophenyl)-3-oxoprop-1-en-1-yl]benzonitrile.

    PubMed

    Praveen, Aletti S; Yathirajan, Hemmige S; Narayana, Badiadka; Gerber, Thomas; Hosten, Eric; Betz, Richard

    2012-05-01

    In the title molecule, C(16)H(8)Cl(2)FNO, the benzene rings form a dihedral angle of 78.69 (8)°. The F atom is disordered over two positions in a 0.530 (3):0.470 (3) ratio. The crystal packing exhibits π-π interactions between dichloro-substituted rings [centroid-centroid distance = 3.6671 (10) Å] and weak intermolecular C-H⋯F contacts.
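    Centroid-centroid separations and dihedral angles like those quoted in these crystallographic records reduce to simple vector operations on the ring coordinates; the idealized hexagons below are stand-ins for real refined atomic positions:

```python
import numpy as np

def hexagon(center, radius, tilt_deg=0.0):
    """Idealized six-membered ring: six vertices on a circle in the xy-plane,
    optionally tilted about the x-axis (a crude stand-in for real data)."""
    ang = np.deg2rad(np.arange(0, 360, 60))
    ring = np.column_stack([radius * np.cos(ang), radius * np.sin(ang), np.zeros(6)])
    t = np.deg2rad(tilt_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(t), -np.sin(t)],
                    [0, np.sin(t),  np.cos(t)]])
    return ring @ rot.T + np.asarray(center)

ring_a = hexagon((0.0, 0.0, 0.0), 1.39)                 # benzene radius ~1.39 Å
ring_b = hexagon((1.0, 0.5, 3.4), 1.39, tilt_deg=1.2)   # offset, slightly tilted

cent_a = ring_a.mean(axis=0)                 # ring centroid = mean of vertices
cent_b = ring_b.mean(axis=0)
separation = float(np.linalg.norm(cent_b - cent_a))

def normal(ring):
    """Plane normal = singular vector of the centered coordinates with the
    smallest singular value."""
    c = ring - ring.mean(axis=0)
    return np.linalg.svd(c)[2][-1]

cosang = abs(normal(ring_a) @ normal(ring_b))
dihedral = float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

    For these synthetic rings the recovered dihedral angle is the imposed 1.2° tilt, and the centroid separation is just the length of the offset vector.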

  18. Global regularity for a family of 3D models of the axi-symmetric Navier–Stokes equations

    NASA Astrophysics Data System (ADS)

    Hou, Thomas Y.; Liu, Pengfei; Wang, Fei

    2018-05-01

    We consider a family of three-dimensional models for the axi-symmetric incompressible Navier–Stokes equations. The models are derived by changing the strength of the convection terms in the axisymmetric Navier–Stokes equations written using a set of transformed variables. We prove the global regularity of the family of models in the case that the strength of convection is slightly stronger than that of the original Navier–Stokes equations, which demonstrates the potential stabilizing effect of convection.

  19. Array architectures for iterative algorithms

    NASA Technical Reports Server (NTRS)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  20. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.

  1. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it with that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal to noise ratio, reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.

  2. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR, reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  3. Helicity is the only integral invariant of volume-preserving transformations

    PubMed Central

    Enciso, Alberto; Peralta-Salas, Daniel; de Lizaur, Francisco Torres

    2016-01-01

    We prove that any regular integral invariant of volume-preserving transformations is equivalent to the helicity. Specifically, given a functional ℐ defined on exact divergence-free vector fields of class C1 on a compact 3-manifold that is associated with a well-behaved integral kernel, we prove that ℐ is invariant under arbitrary volume-preserving diffeomorphisms if and only if it is a function of the helicity. PMID:26864201

  4. Darboux partners of pseudoscalar Dirac potentials associated with exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary, IN 46408; Roy, Barnana, E-mail: barnana@isical.ac.in

    2014-10-15

    We introduce a method for constructing Darboux (or supersymmetric) pairs of pseudoscalar and scalar Dirac potentials that are associated with exceptional orthogonal polynomials. Properties of the transformed potentials and regularity conditions are discussed. As an application, we consider a pseudoscalar Dirac potential related to the Schrödinger model for the rationally extended radial oscillator. The pseudoscalar partner potentials are constructed under the first- and second-order Darboux transformations.

  5. Micro-XANES Determination of Fe Speciation in Natural Basalts at Mantle-Relevant fO2

    NASA Astrophysics Data System (ADS)

    Fischer, R.; Cottrell, E.; Lanzirotti, A.; Kelley, K. A.

    2007-12-01

    We demonstrate that the oxidation state of iron (Fe3+/ΣFe) can be determined with a precision of ±0.02 (10% relative) on natural basalt glasses at mantle-relevant fO2 using Fe K-edge X-ray absorption near edge structure (XANES) spectroscopy. This is equivalent to ±0.25 log unit resolution relative to the QFM buffer. Precise determination of the oxidation state over this narrow range (Fe3+/ΣFe=0.06-0.30) and at low fO2 (down to QFM-2) relies on appropriate standards, high spectral resolution, and highly reproducible methods for extracting the pre-edge centroid position. We equilibrated natural tholeiite powder in a CO/CO2 gas mixing furnace at 1350°C from QFM-3 to QFM+2 to create six glasses of known Fe3+/ΣFe, independently determined by Mössbauer spectroscopy. XANES spectra were collected at station X26A at NSLS, Brookhaven Natl. Lab, in fluorescence mode (9 element Ge array detector) using both Si(111) and Si(311) monochromators. Generally, the energy position of the 1s→3d (pre-edge) transition centroid is the most sensitive monitor of Fe oxidation state using XANES. For the mixture of Fe oxidation states in these glasses and the resulting coordination geometries, the pre-edge spectra are best described by two peaks, each comprising multiple 3d crystal field transitions. The Si(311) monochromator, with higher energy resolution, substantially improved spectral resolution for the 1s→3d transition. Dwell times of 5s at 0.1eV intervals across the pre-edge region yielded spectra with the 1s→3d transition peaks clearly resolved. The pre-edge centroid position is highly sensitive to the background subtraction and peak fitting procedures. Differences in fitting models result in small but significant differences in the calculated peak area of each pre-edge multiplet, and the relative contribution of each peak to the calculated centroid. 
We assessed several schemes and obtained robust centroid positions by simultaneously fitting the background with a damped harmonic oscillator (DHO) function and pre-edge features with two Gaussians over a sub-sample of the pre-edge region (7110-7120 eV). We found that the relation between Fe3+/ΣFe and the centroid energy is non-linear over this fO2 range, which is expected if the coordination environment changes with oxidation state. ΔQFM is linearly related (R2=0.99) to the centroid position. This new calibration allows the oxidation states of natural mantle melts to be discriminated with high spatial resolution (9μm). We apply the new calibration to determination of Fe3+/ΣFe in natural basaltic glasses and olivine-hosted glass inclusions (Cottrell et al. & Kelley et al., this meeting).
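    Once the two pre-edge peaks are fitted, the centroid is the area-weighted mean of the peak positions; the Gaussian parameters below are hypothetical, chosen only to illustrate the arithmetic:

```python
import numpy as np

# Hypothetical fitted pre-edge peaks: (amplitude, center [eV], sigma [eV])
peaks = [(0.8, 7112.1, 0.6), (0.5, 7113.6, 0.7)]

# Area of a Gaussian a*exp(-(E-c)^2 / (2 s^2)) is a * s * sqrt(2*pi)
areas = np.array([a * s * np.sqrt(2 * np.pi) for a, c, s in peaks])
centers = np.array([c for a, c, s in peaks])

# Pre-edge centroid = area-weighted mean of the fitted peak centers
centroid = float((areas * centers).sum() / areas.sum())
```

    The sensitivity to the fitting model noted in the abstract enters exactly here: any change in the fitted areas shifts the relative weights and hence the centroid energy.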

  6. On regularizing the MCTDH equations of motion

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Dieter; Wang, Haobin

    2018-03-01

    The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.

  7. Hartmann Testing of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.
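    The retrieval of alignment errors from slit-scan centroids amounts to fitting low-order circumferential harmonics to the centroid-versus-azimuth data; a least-squares sketch under assumed tilt values (not derived from OSAC):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.deg2rad(np.arange(0, 360, 15))      # slit azimuths around the mirror

# Hypothetical errors (arbitrary units): mean radial term plus a one-cycle
# variation whose cosine/sine amplitudes encode the decenter/tilt.
mean_r, tilt_x, tilt_y = 5.0, 0.8, -0.3
centroids = mean_r + tilt_x * np.cos(theta) + tilt_y * np.sin(theta)
centroids += 0.01 * rng.standard_normal(theta.size)   # measurement noise

# Least-squares fit of [constant, cos, sin] recovers the three terms
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, centroids, rcond=None)
fit_r, fit_tx, fit_ty = coef
```

    Higher circumferential orders (radius variation, cone-angle variation) extend the design matrix with cos(n*theta) and sin(n*theta) columns in the same way.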

  8. Centroid-moment tensor inversions using high-rate GPS waveforms

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2012-10-01

    Displacement time-series recorded by Global Positioning System (GPS) receivers are a new type of near-field waveform observation of the seismic source. We have developed an inversion method which enables the recovery of an earthquake's mechanism and centroid coordinates from such data. Our approach is identical to that of the 'classical' Centroid-Moment Tensor (CMT) algorithm, except that we forward model the seismic wavefield using a method that is amenable to the efficient computation of synthetic GPS seismograms and their partial derivatives. We demonstrate the validity of our approach by calculating CMT solutions using 1 Hz GPS data for two recent earthquakes in Japan. These results are in good agreement with independently determined source models of these events. With wider availability of data, we envisage the CMT algorithm providing a tool for the systematic inversion of GPS waveforms, as is already the case for teleseismic data. Furthermore, this general inversion method could equally be applied to other near-field earthquake observations such as those made using accelerometers.

  9. Spatial pattern recognition of seismic events in South West Colombia

    NASA Astrophysics Data System (ADS)

    Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber

    2013-09-01

    Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative and subjective analysis of data. Spatial pattern recognition provides a well founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data of the South West Colombian seismic database. In this research, clustering tendency analysis validates whether seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to centroid initial positions, we proposed a methodology to initialize centroids that generates stable partitions with respect to centroid initialization. As a result of this work, a public software tool provides the user with the routines developed for clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretations of seismic activities in South West Colombia.
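    The paper's centroid-initialization methodology is not specified in the abstract; one deterministic scheme that yields partitions stable with respect to initialization is farthest-point seeding, sketched here on a toy set of event epicenters:

```python
import numpy as np

def seed_centroids(points, k):
    """Deterministic farthest-point seeding: start from the point closest to
    the global mean, then greedily add the point farthest from all chosen
    seeds. Removes the run-to-run variability of random initialization."""
    pts = np.asarray(points, dtype=float)
    first = np.argmin(np.linalg.norm(pts - pts.mean(axis=0), axis=1))
    seeds = [pts[first]]
    while len(seeds) < k:
        d = np.min([np.linalg.norm(pts - s, axis=1) for s in seeds], axis=0)
        seeds.append(pts[np.argmax(d)])
    return np.array(seeds)

# Toy epicenters (two tight clusters plus an outlying event)
events = np.array([[0.0, 0.0], [0.1, -0.1], [5.0, 5.0], [5.1, 4.9], [-4.0, 6.0]])
init = seed_centroids(events, 3)
```

    Because the seeds are a deterministic function of the data, repeated fuzzy-clustering runs started from them converge to the same partition, which is the stability property the abstract describes.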

  10. Application of the multiple PRF technique to resolve Doppler centroid estimation ambiguity for spaceborne SAR

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1992-01-01

    Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded, both in the system impulse response function and in the geometric fidelity. Two techniques for resolution of DCE ambiguity for spaceborne SAR are presented: a brief review of the range cross-correlation technique and a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRF, an algorithm is devised to resolve the ambiguity using PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
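    The multiple-PRF idea can be illustrated with a small search over integer ambiguities: each PRF yields the centroid only modulo that PRF, and the pair of ambiguity numbers whose implied absolute centroids agree pins down the true value. The PRFs and centroid below are hypothetical, not SIR-C values:

```python
# The absolute Doppler centroid is only observed modulo each PRF. With two
# non-commensurate PRFs, search the integer ambiguities for the pair whose
# implied absolute centroids agree best.
prf1, prf2 = 1400.0, 1700.0      # Hz, hypothetical PRFs
true_fdc = 3950.0                # Hz, unknown to the estimator
frac1 = true_fdc % prf1          # baseband centroid estimate from PRF 1
frac2 = true_fdc % prf2          # baseband centroid estimate from PRF 2

best = None
for n1 in range(-5, 6):
    for n2 in range(-5, 6):
        f1 = frac1 + n1 * prf1   # candidate absolute centroid from PRF 1
        f2 = frac2 + n2 * prf2   # candidate absolute centroid from PRF 2
        err = abs(f1 - f2)
        if best is None or err < best[0]:
            best = (err, 0.5 * (f1 + f2))
resolved_fdc = best[1]
```

    In this noise-free sketch the search recovers 3950 Hz exactly; the statistical error model in the paper quantifies how measurement noise on the baseband estimates limits this in practice.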

  11. Optimization of soy isoflavone extraction with different solvents using the simplex-centroid mixture design.

    PubMed

    Yoshiara, Luciane Yuri; Madeira, Tiago Bervelieri; Delaroza, Fernanda; da Silva, Josemeyre Bonifácio; Ida, Elza Iouko

    2012-12-01

    The objective of this study was to optimize the extraction of different isoflavone forms (glycosidic, malonyl-glycosidic, aglycone and total) from defatted soy cotyledon flour using a simplex-centroid experimental design with four solvents of varying polarity (water, acetone, ethanol and acetonitrile). The obtained extracts were then analysed by high-performance liquid chromatography. The profile of the different soy isoflavone forms varied with the extraction solvent. By varying the solvent or solvent mixture used, the extraction of the different isoflavones was optimized with the simplex-centroid mixture design. The special cubic model gave the best fit for the four solvents and their combinations. Glycosidic isoflavones were best extracted by the polar ternary mixture of water, acetone and acetonitrile; malonyl-glycosidic forms were better extracted with mixtures of water, acetone and ethanol; aglycone isoflavones were best extracted with a mixture of water and acetone; and for total isoflavones the best solvent was the ternary mixture of water, acetone and ethanol.
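    A simplex-centroid design over q components consists of all 2^q - 1 equal-proportion subset blends (the centroids of every sub-simplex); for the four solvents used here that gives 15 runs:

```python
from itertools import combinations

def simplex_centroid(components):
    """All 2^q - 1 runs of a simplex-centroid design: for every non-empty
    subset of components, an equal-proportion mixture of that subset."""
    runs = []
    for r in range(1, len(components) + 1):
        for subset in combinations(components, r):
            share = 1.0 / len(subset)
            runs.append({c: (share if c in subset else 0.0) for c in components})
    return runs

design = simplex_centroid(["water", "acetone", "ethanol", "acetonitrile"])
# 4 pure solvents, 6 binary 50:50 blends, 4 ternary blends, 1 quaternary blend
```

    Each run's proportions sum to one, which is the mixture constraint the special cubic model is fitted under.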

  12. Novel method of detecting movement of the interference fringes using one-dimensional PSD.

    PubMed

    Wang, Qi; Xia, Ji; Liu, Xu; Zhao, Yong

    2015-06-02

    In this paper, a method of measuring the movement of interference fringes with a one-dimensional position-sensitive detector (PSD) in place of a charge-coupled device (CCD) is presented, and its feasibility is demonstrated through an experimental setup based on the principle of centroid detection. Firstly, the centroid position of the interference fringes in a fiber Mach-Zehnder (M-Z) interferometer is solved in theory, showing that it offers higher resolution and sensitivity. According to the physical characteristics and principles of the PSD, a simulation of the interference fringes' phase difference in fiber M-Z interferometers and the PSD output is carried out. Comparing the simulation results with the relationship between phase differences and centroid positions in fiber M-Z interferometers leads to the conclusion that the output of the interference fringes measured by the PSD is still the centroid position. Based on extensive measurements, the best resolution of the system is achieved with 5.15, 625 μm. Finally, the detection system is evaluated through setup error analysis and an ultra-narrow-band filter structure. The filter structure is configured with a one-dimensional photonic crystal containing positive and negative refraction material, which can eliminate background light in the PSD detection experiment. This detection system has a simple structure, good stability and high precision, and easily performs remote measurements, which makes it potentially useful in tests of small material deformations, refractivity measurements of optical media and optical wavefront detection.

  13. Human attention filters for single colors.

    PubMed

    Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George

    2016-10-25

    The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA). FBA is best described by attention filters that specify precisely the extent to which items containing attended features are selectively processed and the extent to which items that do not contain the attended features are attenuated. The centroid-judgment paradigm enables quick, precise measurements of such human perceptual attention filters, analogous to transmission measurements of photographic color filters. Subjects use a mouse to locate the centroid, the center of gravity, of a briefly displayed cloud of dots and receive precise feedback. A subset of dots is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset (e.g., dots of a particular color). The analysis efficiently determines the precise weight in the judged centroid of dots of every color in the display (i.e., the attention filter for the particular attended color in that context). We report 32 attention filters for single colors. Attention filters that discriminate one saturated hue from among seven other equiluminant distractor hues are extraordinarily selective, achieving attended/unattended weight ratios >20:1. Attention filters for selecting a color that differs in saturation or lightness from distractors are much less selective than attention filters for hue (given equal discriminability of the colors), and their filter selectivities are proportional to the discriminability distance of neighboring colors, whereas in the same range hue attention-filter selectivity is virtually independent of discriminability.
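    Recovering an attention filter from centroid judgments is, in essence, a linear regression of the judged centroid on the per-color subset centroids; this toy simulation (weights, trial count and noise level all assumed) shows the filter weights being read back out:

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.9, 0.05, 0.05])    # attended color dominates the judgment
n_trials = 400

# One coordinate (x) of the per-color subset centroids on each trial
X = rng.uniform(-1, 1, size=(n_trials, 3))

# Judged centroid = weighted combination of subset centroids + motor noise
y = X @ true_w + 0.02 * rng.standard_normal(n_trials)

# Least squares recovers the attention-filter weights
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    With hundreds of trials the recovered weights match the generating ones to a few percent, which is why the paradigm yields precise filters quickly.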

  14. The Generalized Centroid Difference method for lifetime measurements via γ-γ coincidences using large fast-timing arrays

    NASA Astrophysics Data System (ADS)

    Régis, J.-M.; Jolie, J.; Mach, H.; Simpson, G. S.; Blazhev, A.; Pascovici, G.; Pfeiffer, M.; Rudigier, M.; Saed-Samii, N.; Warr, N.; Blanc, A.; de France, G.; Jentschel, M.; Köster, U.; Mutti, P.; Soldner, T.; Ur, C. A.; Urban, W.; Bruce, A. M.; Drouet, F.; Fraile, L. M.; Ilieva, S.; Korten, W.; Kröll, T.; Lalkovski, S.; Mărginean, S.; Paziy, V.; Podolyák, Zs.; Regan, P. H.; Stezowski, O.; Vancraeyenest, A.

    2015-05-01

    A novel method for direct electronic "fast-timing" lifetime measurements of nuclear excited states via γ-γ coincidences using an array equipped with N very fast high-resolution LaBr3(Ce) scintillator detectors is presented. The generalized centroid difference method provides two independent "start" and "stop" time spectra obtained without any correction by a superposition of the N(N - 1)/2 calibrated γ-γ time difference spectra of the N-detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ-γ cascade, and the centroid difference, as the time shift between the centroids of the two time spectra, provides a picosecond-sensitive mirror-symmetric observable of the set-up. The energy-dependent mean prompt response difference between the start and stop events is calibrated and used as a single correction for lifetime determination. These combined fast-timing array mean γ-γ zero-time responses can be determined for 40 keV < Eγ < 1.4 MeV with a precision better than 10 ps using a 152Eu γ-ray source. The new method is described with examples of (n,γ) and (n,f,γ) experiments performed at the intense cold-neutron beam facility PF1B of the Institut Laue-Langevin in Grenoble, France, using 16 LaBr3(Ce) detectors within the EXILL&FATIMA campaign in 2013. The results are discussed with respect to possible systematic errors induced by background contributions.
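    The core of any centroid-shift timing method is that convolving the prompt response with an exponential decay shifts the centroid of the time spectrum by exactly the mean lifetime τ, even when τ is far below the detector time resolution; a simulation with an assumed lifetime and resolution:

```python
import numpy as np

rng = np.random.default_rng(3)
tau = 80.0      # ps, assumed mean lifetime of the intermediate state
sigma = 120.0   # ps, assumed detector time resolution (note tau < sigma)
n = 200_000

prompt = rng.normal(0.0, sigma, n)           # prompt (zero-lifetime) response
delayed = prompt + rng.exponential(tau, n)   # decay delays each event by Exp(tau)

# Centroid-shift: the delayed spectrum's centroid lies tau after the prompt one
tau_est = float(delayed.mean() - prompt.mean())
```

    The generalized centroid difference method works with the difference of the forward- and reverse-gated spectra instead, which doubles the sensitivity and cancels symmetric systematic shifts; the calibrated prompt response difference then plays the role of the zero-lifetime reference.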

  15. Empirical Model of Precipitating Ion Oval

    NASA Astrophysics Data System (ADS)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
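    The model structure described, a Gaussian in latitude whose parameters are Fourier series in local time, can be sketched directly; all coefficients below are hypothetical placeholders, not the fitted values of the report:

```python
import numpy as np

def ion_flux(lat_deg, mlt_hours, amp, cen_coef, width):
    """Gaussian-in-latitude oval whose centroid latitude varies with magnetic
    local time through a truncated Fourier series. Coefficients here are
    illustrative; the report fits them (and their Kp dependence) to ion maps."""
    phi = 2 * np.pi * mlt_hours / 24.0
    a0, a1, b1 = cen_coef
    centroid = a0 + a1 * np.cos(phi) + b1 * np.sin(phi)  # centroid latitude
    return amp * np.exp(-0.5 * ((lat_deg - centroid) / width) ** 2)

# Hypothetical parameters: centroid oscillates +/-1.5 deg about 67 deg with
# local time; 3 deg Gaussian width; unit peak flux.
flux_at_65p5 = ion_flux(65.5, 0.0, amp=1.0, cen_coef=(67.0, 1.5, 0.0), width=3.0)
```

    In the full model the amplitude and width get their own Fourier expansions, and every coefficient is a linear function of Kp.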

  16. Evidence against global attention filters selective for absolute bar-orientation in human vision.

    PubMed

    Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George

    2016-01-01

    The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.

  17. An Investigation on the Use of Different Centroiding Algorithms and Star Catalogs in Astro-Geodetic Observations

    NASA Astrophysics Data System (ADS)

    Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim

    2017-04-01

    In the last decade, the importance of high-precision geoid determination at the local or national level has been pointed out by the Turkish National Geodesy Commission. The Commission has also put the modernization of Turkey's national height system on the agenda. Meanwhile, several projects have been realized in recent years. In Istanbul, a GNSS/levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5 cm. In order to achieve a better accuracy in this area, the project "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016. The project is funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, modernization studies of the Digital Zenith Camera System are being carried out in terms of hardware components and software development. Particular attention is given to the star catalogues and the centroiding algorithm used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013 and 2016, final results were calculated using the PSF method for star centroiding and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the position accuracy of the star images by comparing different centroiding algorithms and available star catalogues used in astro-geodetic observations conducted with the digital zenith camera system.

  18. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    PubMed

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying by 24 DNA's protein coding potential. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions, detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
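    The exchange rules are easy to state in code: a symmetric rule swaps two nucleotides, an asymmetric rule cycles three, and a chimeric RNA switches from regular to swinger polymerization mid-sequence (the short sequence below is illustrative, not a real transcript):

```python
def swinger(seq, rule):
    """Apply a bijective nucleotide-exchange rule position by position;
    nucleotides absent from the rule are copied unchanged."""
    return "".join(rule.get(b, b) for b in seq)

sym_AC = {"A": "C", "C": "A"}              # symmetric exchange X <-> Y
asym_ACG = {"A": "C", "C": "G", "G": "A"}  # asymmetric exchange X -> Y -> Z -> X

regular = "AUGCCA"
swapped = swinger(regular, sym_AC)

# A chimeric RNA: regular polymerization for the first codon, then an
# abrupt switch to swinger polymerization for the rest of the sequence.
chimeric = regular[:3] + swinger(regular[3:], sym_AC)
```

    The 9 symmetric and 14 asymmetric rules together are the 23 non-identity bijections of the four nucleotides that the abstract refers to.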

  19. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm for detecting glass bottle mouth defects, based on the continuous wavelet transform combined with prior knowledge, is proposed. First, under ball-integral-light-source illumination, an image of a defect-free glass bottle mouth is acquired with a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain a binary image of the bottle mouth. To suppress noise efficiently, a moving-average filter smooths the histogram of the original bottle mouth image, and the continuous wavelet transform is then applied to determine the segmentation threshold accurately. Mathematical morphology operations yield a binary mask of a normal bottle mouth. A bottle to be inspected is moved into the detection zone by a conveyor belt, and its mouth image and binary image are obtained by the same procedure. The binary image is multiplied by the normal-bottle mask to obtain a region of interest, from which four parameters are computed: the number of connected regions, the centroid position, the inner-circle diameter, and the area of the annular region. Detection rules built from these four parameters accurately detect and identify defect conditions of the glass bottle mouth. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm; the experimental results show that it accurately detects defect conditions with 98% detection accuracy.
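    Two of the four decision parameters named above, the centroid position and the annular-region area, fall straight out of the binary region of interest. A numpy-only sketch on a synthetic annular mask (connected-component counting and the inner-circle diameter would need an additional labeling step, omitted here; all names are illustrative):

```python
import numpy as np

def mouth_parameters(mask):
    """Centroid position and pixel area of a binary bottle-mouth region.

    Illustrative sketch of two of the abstract's four parameters.
    """
    ys, xs = np.nonzero(mask)
    area = xs.size
    return xs.mean(), ys.mean(), area

# Synthetic annular (ring) mask centered at (32, 32), radii 10..15 pixels
ys, xs = np.indices((64, 64))
r = np.hypot(xs - 32, ys - 32)
ring = (r >= 10) & (r <= 15)
cx, cy, area = mouth_parameters(ring)
```

    On a defect-free mouth the centroid sits at the ring center and the area matches the expected annulus; chips or cracks shift the centroid and change the area, which is what the detection rules exploit.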

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorbachev, D V; Ivanov, V I

    Gauss and Markov quadrature formulae with nodes at zeros of eigenfunctions of a Sturm-Liouville problem, which are exact for entire functions of exponential type, are established. They generalize quadrature formulae involving zeros of Bessel functions, which were first designed by Frappier and Olivier. Bessel quadratures correspond to the Fourier-Hankel integral transform. Some other examples, connected with the Jacobi integral transform, Fourier series in Jacobi orthogonal polynomials and the general Sturm-Liouville problem with regular weight are also given. Bibliography: 39 titles.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakach, G. P.; Dudarev, E. F., E-mail: dudarev@spti.tsu.ru; Skosyrskii, A. B.

    The results of an experimental investigation into the regularities and mechanisms of thermoelastic martensitic transformation in the submicrocrystalline alloy Ti{sub 49.4}Ni{sub 50.6} under different thermomechanical actions are presented, obtained using in situ optical microscopy and X-ray diffraction. The peculiarities of localization of the martensitic transformation at the meso- and macroscale levels in this alloy with submicrocrystalline structure are considered, and experimental data on the relay mechanism of propagation of the martensitic transformation are presented. The interrelation between the localization of the martensitic transformation at the meso- and macroscale levels and the deformation behavior under isothermal loading of the alloy Ti{sub 49.4}Ni{sub 50.6} in the submicrocrystalline condition is shown and discussed.

  2. Assessment of Survivability against Laser Threats. The ASALT-I Computer Program

    DTIC Science & Technology

    1981-09-01

    (The scanned abstract is largely illegible.) Recoverable fragments describe the four coordinate systems used in the ASALT-I Model, depicted in Figure 2-1, where the subscripts on each axis identify the system; quantities include the z-coordinate of the component centroid in the Encounter Coordinate System and the width of the component.

  3. X-Ray Properties of Lensing-Selected Clusters

    NASA Astrophysics Data System (ADS)

    Paterno-Mahler, Rachel; Sharon, Keren; Bayliss, Matthew; McDonald, Michael; Gladders, Michael; Johnson, Traci; Dahle, Hakon; Rigby, Jane R.; Whitaker, Katherine E.; Florian, Michael; Wuyts, Eva

    2017-08-01

    I will present preliminary results from the Michigan Swift X-ray observations of clusters from the Sloan Giant Arcs Survey (SGAS). These clusters were lensing-selected based on the presence of a giant arc visible from SDSS. I will characterize the morphology of the intracluster medium (ICM) of the clusters in the sample, and discuss the offset between the X-ray centroid, the mass centroid as determined by strong lensing analysis, and the BCG position. I will also present early-stage work on the scaling relation between the lensing mass and the X-ray luminosity.

  4. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable fall into the following categories. For beam production: the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code. For beam transport and acceleration: the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code. For coasting-beam transport to target: the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  5. High-resolution seismic data regularization and wavefield separation

    NASA Astrophysics Data System (ADS)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.

  6. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.

    1981-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.

  7. A VLSI pipeline design of a fast prime factor DFT on a finite field

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.

    1986-01-01

    A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform over the finite field GF(q^n). A pipeline structure is used to implement this prime factor DFT over GF(q^n). The algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.

  8. The effect of time-variant acoustical properties on orchestral instrument timbres

    NASA Astrophysics Data System (ADS)

    Hajda, John Michael

    1999-06-01

    The goal of this study was to investigate the timbre of orchestral instrument tones. Kendall (1986) showed that time-variant features are important to instrument categorization. But the relative salience of specific time-variant features to each other and to other acoustical parameters is not known. As part of a convergence strategy, a battery of experiments was conducted to assess the importance of global amplitude envelope, spectral frequencies, and spectral amplitudes. An omnibus identification experiment investigated the salience of global envelope partitions (attack, steady state, and decay). Valid partitioning models should identify important boundary conditions in the evolution of a signal; therefore, these models should be based on signal characteristics. With the use of such a model for sustained continuant tones, the steady-state segment was more salient than the attack. These findings contradicted previous research, which used questionable operational definitions for signal partitioning. For the next set of experiments, instrument tones were analyzed by phase vocoder, and stimuli were created by additive synthesis. Edits and combinations of edits controlled global amplitude envelope, spectral frequencies, and relative spectral amplitudes. Perceptual measurements were made with distance estimation, Verbal Attribute Magnitude Estimation, and similarity scaling. Results indicated that the primary acoustical attribute was the long-time-average spectral centroid. Spectral centroid is a measure of the center of energy distribution for spectral frequency components. Instruments with high values of spectral centroid (bowed strings) sound nasal while instruments with low spectral centroid (flute, clarinet) sound not nasal. The secondary acoustical attribute was spectral amplitude time variance. Predictably, time variance correlated highly with subject ratings of vibrato. 
The control of relative spectral amplitudes was more salient than the control of global envelope and spectral frequencies. Both amplitude phase relationships and time-variant spectral centroid were affected by the control of relative spectral amplitudes. Further experimentation is required to determine the salience of these features. The finding that instrumental vibrato is a manifestation of spectral amplitude time variance contradicts the common belief that vibrato is due to frequency (pitch) and intensity (loudness) modulation. This study suggests that vibrato is due to a periodic modulation in timbre. Future research should employ musical contexts.
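    The long-time-average spectral centroid named above as the primary attribute is simply the amplitude-weighted mean frequency of the magnitude spectrum. A small numpy sketch on a synthetic two-partial tone (not the study's stimuli):

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum, in Hz."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return (freqs * spec).sum() / spec.sum()

fs = 8000
t = np.arange(fs) / fs  # one second of samples
# Two partials at 440 and 880 Hz with amplitudes 1.0 and 0.5:
# the centroid is their amplitude-weighted mean, (440*1 + 880*0.5)/1.5 ≈ 586.7 Hz
tone = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
c = spectral_centroid(tone, fs)
```

    Boosting the upper partial raises the centroid, which is exactly the "more energy concentrated higher in the spectrum sounds more nasal" relationship the listening results describe.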

  9. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  10. Mars global digital dune database and initial science results

    USGS Publications Warehouse

    Hayward, R.K.; Mullins, K.F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, A.; Christensen, P.R.

    2007-01-01

    A new Mars Global Digital Dune Database (MGD3) constructed using Thermal Emission Imaging System (THEMIS) infrared (IR) images provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields (area >1 km2) that will help researchers to understand global climatic and sedimentary processes that have shaped the surface of Mars. MGD3 extends from 65°N to 65°S latitude and includes ~550 dune fields, covering ~70,000 km2, with an estimated total volume of ~3,600 km3. This area, when combined with polar dune estimates, suggests moderate- to large-size dune field coverage on Mars may total ~800,000 km2, ~6 times less than the total areal estimate of ~5,000,000 km2 for terrestrial dunes. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow-angle (MOC NA) images allow, we classify dunes and include dune slipface measurements, which are derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid (referred to as dune centroid azimuth) is calculated and can provide an accurate method for tracking dune migration within smooth-floored craters. These indicators of wind direction are compared to output from a general circulation model (GCM). Dune centroid azimuth values generally correlate to regional wind patterns. Slipface orientations are less well correlated, suggesting that local topographic effects may play a larger role in dune orientation than regional winds. Copyright 2007 by the American Geophysical Union.
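    The dune centroid azimuth is just the bearing from the crater centroid to the dune-field centroid. A flat-plane sketch (x = east, y = north, degrees clockwise from north); this local-plane approximation is my assumption for illustration, as the database presumably works in map-projected or spherical coordinates:

```python
import math

def centroid_azimuth(crater_c, dune_c):
    """Azimuth in degrees clockwise from north, from the crater centroid
    to the dune-field centroid, on a local flat (x=east, y=north) plane."""
    dx = dune_c[0] - crater_c[0]  # eastward offset
    dy = dune_c[1] - crater_c[1]  # northward offset
    return math.degrees(math.atan2(dx, dy)) % 360.0

# A dune field northeast of the crater center gives an azimuth of 45 degrees,
# i.e. winds that pushed sand toward the northeast wall of the crater.
az = centroid_azimuth((0.0, 0.0), (1.0, 1.0))
```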

  11. A Centroid Model of Species Distribution to Analyze the Multi-directional Climate Change Fingerprint in Avian Distributions in North America

    NASA Astrophysics Data System (ADS)

    Huang, Q.; Sauer, J.; Dubayah, R.

    2015-12-01

    Species distribution shift (the "fingerprint of climate change") as a primary mechanism for adapting to climate change has been of great interest to ecologists and conservation practitioners. Recent meta-analyses have concluded that a wide range of animal and plant species are already shifting their distributions. However, the majority of the literature has focused on recent poleward and elevationally upward shifts; if measured only as poleward shifts, the fingerprint of climate change will be significantly underestimated. In this study, we demonstrate a centroid model for range-wide analysis of distribution shifts using the North American Breeding Bird Survey. The centroid model is based on a hierarchical Bayesian framework that models population change within physiographic strata while accounting for several factors affecting species detectability. We used the centroid approach to examine a large number of permanent-resident species in North America and evaluated the direction and magnitude of their distribution shifts. To examine the inferential ability of mean temperature and precipitation, we test a hypothesis, based on climate velocity theory, that species are more likely to shift their distributions, or shift with greater magnitude, in regions with high climate change velocity. For species with significant distribution shifts, we establish a precipitation model and a temperature model to explain their change in abundance at the strata level. Two further models, composed of mean and extreme climate indices respectively, are established to test the influences of gradual and extreme climate trends.
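    Once strata-level abundances are estimated, the range centroid is an abundance-weighted average of the strata positions, and a multi-directional shift is just the movement of that point between periods. A flat-coordinate numpy sketch of only this final averaging step (the hierarchical Bayesian estimation itself is far beyond this snippet; coordinates and weights are invented):

```python
import numpy as np

def range_centroid(lons, lats, abundance):
    """Abundance-weighted centroid of a species' range (flat-earth sketch)."""
    w = np.asarray(abundance, dtype=float)
    return float(np.dot(lons, w) / w.sum()), float(np.dot(lats, w) / w.sum())

# Three hypothetical strata centers; abundance then shifts toward the
# northeastern stratum, dragging the centroid north and east.
lons, lats = [-100.0, -95.0, -90.0], [35.0, 40.0, 45.0]
lon0, lat0 = range_centroid(lons, lats, [1, 1, 1])  # period 1: uniform
lon1, lat1 = range_centroid(lons, lats, [1, 1, 4])  # period 2: NE growth
```

    Because the centroid responds to abundance change anywhere in the range, it detects eastward, westward, or interior shifts that a poleward-only metric would miss.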

  12. Linking biochemical perturbations in tissues of the African catfish to the presence of polycyclic aromatic hydrocarbons in Ovia River, Niger Delta region.

    PubMed

    Obinaju, Blessing E; Graf, Carola; Halsall, Crispin; Martin, Francis L

    2015-06-01

    Petroleum hydrocarbons including polycyclic aromatic hydrocarbons (PAHs) are a pollution issue in the Niger Delta region due to oil industry activities. PAHs were measured in the water column of the Ovia River at concentrations ranging from 0.1 to 1055.6 ng L(-1). Attenuated total reflection Fourier-transform infrared (ATR-FTIR) spectroscopy of tissues of the African catfish (Heterobranchus bidorsalis) from the region revealed varying degrees of statistically significant (P<0.0001, P<0.001, P<0.05) changes in absorption band areas and shifts in the centroid positions of peaks. The alteration patterns were similar to those induced by benzo[a]pyrene in MCF-7 cells. These findings have potential health implications for resident local communities, as H. bidorsalis constitutes a key nutritional source. The study provides supporting evidence for the sensitivity of infrared spectroscopy in environmental studies and supports its potential application in biomonitoring. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Modeling and design of a two-axis elliptical notch flexure hinge

    NASA Astrophysics Data System (ADS)

    Wu, Jianwei; Zhang, Yin; Lu, Yunfeng; Wen, Zhongpu; Bin, Deer; Tan, Jiubin

    2018-04-01

    As an important part of the joule balance system, the two-axis elliptical notch flexure hinge (TENFH) which typically consists of two single-axis elliptical notch flexure hinges was studied. First, a 6 degrees of freedom (6-DOF) compliance model was established based on the coordinate transformation method. In addition, the maximum stress of the TENFH was derived. The compliance and maximum stress model was verified using finite element analysis simulation. To decouple the attitude of the suspended coil system and reduce the offset between the centroid of the suspended coil mechanism and the mass comparator in the joule balance system, a new mechanical structure of TENFH was designed based on the compliance model and stress model proposed in this paper. The maximum rotation range is up to 10°, and the axial load is more than 5 kg, which meets the requirements of the system. The compliance model was also verified by deformation experimentation with the designed TENFH.

  14. The Use Of Videography For Three-Dimensional Motion Analysis

    NASA Astrophysics Data System (ADS)

    Hawkins, D. A.; Hawthorne, D. L.; DeLozier, G. S.; Campbell, K. R.; Grabiner, M. D.

    1988-02-01

    Special video path editing capabilities with custom hardware and software have been developed for use in conjunction with existing video acquisition hardware and firmware. This system has simplified the task of quantifying the kinematics of human movement. A set of retro-reflective markers is secured to a subject performing a given task (e.g., walking, throwing, or swinging a golf club). Multiple cameras, a video processor, and a computer workstation collect video data while the task is performed. Software has been developed to edit video files, create centroid data, and identify marker paths. Multi-camera path files are combined to form a 3D path file using the DLT method of cinematography. A separate program converts the 3D path file into kinematic data by creating a set of local coordinate axes and performing a series of coordinate transformations from one local system to the next. The kinematic data are then displayed for review and/or comparison.

  15. Towards automated human gait disease classification using phase space representation of intrinsic mode functions

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Patra, Sayantani; Pratiher, Souvik

    2017-06-01

    A novel analytical methodology for segregating healthy and neurological-disorder gait patterns is proposed, employing a set of oscillating components called intrinsic mode functions (IMFs). These IMFs are generated by empirical mode decomposition of the gait time series, and the Hilbert-transformed analytic signal representation forms the complex-plane trace of the elliptically shaped analytic IMFs. The area measure and the relative change in the centroid position of the polygon formed by the convex hull of these analytic IMFs are taken as the discriminative features. A classification accuracy of 79.31% with an ensemble-learning-based AdaBoost classifier validates the adequacy of the proposed methodology for a computer-aided diagnostic (CAD) system for gait pattern identification. The efficacy of several potential biomarkers, such as the bandwidth of the amplitude-modulation and frequency-modulation IMFs and the mean frequency of the Fourier-Bessel expansion of each analytic IMF, is also discussed for gait pattern identification and classification.
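    The analytic-signal trace of a single oscillatory component, and a centroid feature derived from it, can be sketched with numpy alone. Note two simplifications that are mine, not the paper's: the IMF is a synthetic cosine rather than an EMD output, and the centroid is taken over the raw complex-plane trace instead of its convex-hull polygon.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the one-sided FFT construction
    (the same result scipy.signal.hilbert would give)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def trace_centroid(z):
    """Centroid of the complex-plane trace (simplified stand-in for the
    paper's convex-hull polygon centroid)."""
    return float(np.mean(z.real)), float(np.mean(z.imag))

t = np.linspace(0.0, 1.0, 512, endpoint=False)
imf = np.cos(2 * np.pi * 8 * t)   # one clean oscillatory mode
z = analytic_signal(imf)          # traces the unit circle 8 times
cx, cy = trace_centroid(z)
```

    For a pure oscillation the trace is a centered circle, so the centroid sits at the origin; amplitude or frequency irregularities in pathological gait distort the ellipse and move the centroid, which is what makes it a discriminative feature.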

  16. Research on Robot Pose Control Technology Based on Kinematics Analysis Model

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    To improve the attitude stability of a robot, an attitude control method based on a kinematics analysis model is proposed, addressing posture transformation during walking and grasping and the motion planning problem of robot kinematics. In a Cartesian-space analytical model, a three-axis accelerometer, a magnetometer, and a three-axis gyroscope are combined for attitude measurement; the gyroscope data are filtered with a Kalman filter, and the quaternion method is used to obtain the robot attitude angles. Stability inertia parameters are obtained from the centroids of the corresponding moving parts of the robot, and the sampling-based RRT motion planning method is used to control the robot accurately to any position in space, ensuring that the end effector follows a prescribed trajectory under attitude control. Accurate-positioning experiments were carried out with the MT-R robot as the test platform. The simulation results show that the proposed method has better robustness and higher positioning accuracy, improving the reliability and safety of robot operation.

  17. Analysis of a closed-kinematic chain robot manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1988-01-01

    Presented are the research results from the research grant entitled Active Control of Robot Manipulators, sponsored by the Goddard Space Flight Center (NASA) under grant number NAG-780. This report considers a class of robot manipulators based on the closed-kinematic chain mechanism (CKCM). This type of manipulator mainly consists of two platforms, one stationary and the other moving, coupled together through a number of in-parallel actuators. Using spatial geometry and homogeneous transformations, a closed-form solution is derived for the inverse kinematic problem of the six-degree-of-freedom manipulator built to study robotic assembly in space. The iterative Newton-Raphson method is employed to solve the forward kinematic problem. Finally, the equations of motion of the above manipulators are obtained by employing the Lagrangian method. Study of the manipulator dynamics is performed using computer simulation, whose results show that the robot actuating forces are strongly dependent on the mass and centroid locations of the robot links.

  18. An improved algorithm of laser spot center detection in strong noise background

    NASA Astrophysics Data System (ADS)

    Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong

    2018-01-01

    Laser spot center detection is required in many applications. Common algorithms such as the centroid method and the Hough transform have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering is first used to remove noise while preserving the edge details of the image. Second, the laser facula image is binarized to separate the target from the background. Morphological filtering is then performed to eliminate noise points inside and outside the spot. Finally, the edge of the preprocessed facula image is extracted and the laser spot center is obtained by circle fitting. On the foundation of the circle-fitting algorithm, the improved algorithm adds median filtering, morphological filtering, and other processing steps. Theoretical analysis and experimental verification show that the method effectively filters background noise, enhancing the anti-interference ability of laser spot center detection and improving detection accuracy.
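    The final circle-fitting step can be sketched with the algebraic (Kasa) least-squares fit, which finds the circle through extracted edge points in one linear solve. This is one common choice of circle fit, assumed here for illustration; the median and morphological filtering stages are omitted, and the noisy edge points are synthetic.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = c0*x + c1*y + c2 for c, then recover center and radius."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    c, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noisy edge points on a circle centered at (50, 40) with radius 20
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
xs = 50 + 20 * np.cos(theta) + rng.normal(0, 0.3, theta.size)
ys = 40 + 20 * np.sin(theta) + rng.normal(0, 0.3, theta.size)
cx, cy, r = fit_circle(xs, ys)
```

    Because the fit averages over the whole edge, a few residual noise pixels perturb the center far less than a plain intensity centroid would, which is the robustness the paper's pipeline is after.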

  19. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique for obtaining models of the subsurface conductivity distribution from electric or electromagnetic measurements, by minimizing U(m) = ||F(m) - d||^2 + λ P(m). The second term is the stabilizing functional, with P(m) = ||∇m||^2 the usual choice, and λ the regularization parameter. Because of this roughness penalizer, the models developed by Tikhonov's algorithm tend to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts; as is well known, Total Variation (TV) is now the standard approach to meeting this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, as well as on hybrid TV, second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the split Bregman method, and the geosounding method is low-induction-number magnetic dipoles. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
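    In its discrete linear form, the Tikhonov functional U(m) is minimized in closed form by the normal equations (F^T F + λ D^T D) m = F^T d, with D a finite-difference approximation of ∇. A numpy sketch with F = I (a pure smoothing problem) and a first-difference D; these are illustrative stand-ins, not the paper's geosounding forward operator:

```python
import numpy as np

def tikhonov(F, d, lam):
    """Minimize ||F m - d||^2 + lam ||D m||^2 via the normal equations,
    with D the first-difference matrix discretizing the |grad m|^2 penalty."""
    n = F.shape[1]
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n first-difference operator
    A = F.T @ F + lam * (D.T @ D)
    return np.linalg.solve(A, F.T @ d)

rng = np.random.default_rng(1)
n = 50
F = np.eye(n)                             # trivial forward operator for the demo
m_true = np.sin(np.linspace(0, np.pi, n)) # smooth "model"
d = m_true + rng.normal(0, 0.1, n)        # noisy "data"
m_hat = tikhonov(F, d, lam=5.0)
```

    The recovered model is guaranteed to be no rougher than the data (that is the point of the penalty), which is also why a sharp conductivity step would be smeared, motivating the TV-type regularizers studied in the abstract.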

  20. Fem Simulation of Triple Diffusive Natural Convection Along Inclined Plate in Porous Medium: Prescribed Surface Heat, Solute and Nanoparticles Flux

    NASA Astrophysics Data System (ADS)

    Goyal, M.; Goyal, R.; Bhargava, R.

    2017-12-01

    In this paper, triple diffusive natural convection under Darcy flow over an inclined plate embedded in a porous medium saturated with a binary base fluid containing nanoparticles and two salts is studied. The model used for the nanofluid is the one which incorporates the effects of Brownian motion and thermophoresis. In addition, the thermal energy equations include regular diffusion and cross-diffusion terms. The vertical surface has the heat, mass and nanoparticle fluxes each prescribed as a power law function of the distance along the wall. The boundary layer equations are transformed into a set of ordinary differential equations with the help of group theory transformations. A wide range of parameter values are chosen to bring out the effect of buoyancy ratio, regular Lewis number and modified Dufour parameters of both salts and nanofluid parameters with varying angle of inclinations. The effects of parameters on the velocity, temperature, solutal and nanoparticles volume fraction profiles, as well as on the important parameters of heat and mass transfer, i.e., the reduced Nusselt, regular and nanofluid Sherwood numbers, are discussed. Such problems find application in extrusion of metals, polymers and ceramics, production of plastic films, insulation of wires and liquid packaging.

  1. Least squares reconstruction of non-linear RF phase encoded MR data.

    PubMed

    Salajeghe, Somaie; Babyn, Paul; Sharp, Jonathan C; Sarty, Gordon E

    2016-09-01

    The numerical feasibility of reconstructing MRI signals generated by RF coils that produce B1 fields with a non-linearly varying spatial phase is explored. A global linear spatial phase variation of B1 is difficult to produce from current confined to RF coils. Here we use regularized least squares inversion, in place of the usual Fourier transform, to reconstruct signals generated in B1 fields with non-linear phase variation. RF encoded signals were simulated for three RF coil configurations: ideal linear, parallel conductors, and circular coil pairs. The simulated signals were reconstructed by Fourier transform and by regularized least squares. The Fourier reconstruction of simulated RF encoded signals from the parallel conductor coil set showed minor distortions over the reconstruction of signals from the ideal linear coil set, but the Fourier reconstruction of signals from the circular coil set produced severe geometric distortion. Least squares inversion in all cases produced reconstruction errors comparable to the Fourier reconstruction of the simulated signal from the ideal linear coil set. MRI signals encoded in B1 fields with non-linearly varying spatial phase may be accurately reconstructed using regularized least squares, thus pointing the way to the use of simple RF coil designs for RF encoded MRI. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
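    When the encoding is expressed as a matrix E acting on the image, regularized least squares solves (E^H E + λI) x = E^H s instead of applying an inverse FFT. A 1-D numpy sketch with a hypothetical encoding model, Fourier encoding modulated by an extra quadratic spatial phase; this toy E stands in for the paper's coil-derived B1 fields and is not their simulated geometry:

```python
import numpy as np

def reg_lstsq_recon(E, s, lam=1e-6):
    """Regularized least-squares reconstruction: solve
    (E^H E + lam I) x = E^H s, replacing the usual inverse Fourier transform."""
    n = E.shape[1]
    A = E.conj().T @ E + lam * np.eye(n)
    return np.linalg.solve(A, E.conj().T @ s)

# Hypothetical 1-D encoding: DFT rows times a non-linear (quadratic) phase.
n = 64
x = np.arange(n)
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, x) / n)
E = F * np.exp(1j * np.pi * (x / n) ** 2)[None, :]

img = np.zeros(n)
img[20:28] = 1.0       # simple 1-D "object"
s = E @ img            # simulated encoded signal
rec = reg_lstsq_recon(E, s)
```

    A direct inverse FFT of s would be distorted by the non-linear phase term, whereas the least-squares inversion models E explicitly and recovers the object; the regularization term λI controls noise amplification when E is poorly conditioned.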

  2. Application of multiattribute decision-making methods for the determination of relative significance factor of impact categories.

    PubMed

    Noh, Jaesung; Lee, Kun Mo

    2003-05-01

    A relative significance factor (f(i)) of an impact category is the external weight of that impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i); thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although compared to the fuzzy method it is inherently weaker at expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical choice for the determination of f(i) because it is easier and simpler to use than the AHP and the fuzzy method.
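    The rank-order centroid method turns a pure ranking into weights via the standard formula w_i = (1/n) Σ_{k=i}^{n} 1/k, which requires no pairwise comparisons at all. A minimal sketch (the three-category example is invented for illustration):

```python
def rank_order_centroid(n):
    """Rank-order centroid weights for n ranked items:
    w_i = (1/n) * sum_{k=i}^{n} 1/k, for i = 1..n (rank 1 = most important)."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# e.g. three impact categories ranked 1..3 by importance:
# weights are 11/18 ≈ 0.611, 5/18 ≈ 0.278, 1/9 ≈ 0.111 and sum to 1
w = rank_order_centroid(3)
```

    The weights decrease with rank and always sum to one, so the only judgment the practitioner must supply is the ordering of the impact categories, which is why the study calls the method practical.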

  3. Kepler Fine Guidance Sensor Data

    NASA Technical Reports Server (NTRS)

    Van Cleve, Jeffrey; Campbell, Jennifer Roseanna

    2017-01-01

    The Kepler and K2 missions collected Fine Guidance Sensor (FGS) data in addition to the science data, as discussed in the Kepler Instrument Handbook (KIH, Van Cleve and Caldwell 2016). The FGS CCDs are frame transfer devices (KIH Table 7) located in the corners of the Kepler focal plane (KIH Figure 24), which are read out 10 times every second. The FGS data are being made available to the user community for scientific analysis as flux and centroid time series, along with a limited number of FGS full frame images which may be useful for constructing a World Coordinate System (WCS) or otherwise putting the time series data in context. This document will describe the data content and file format, and give example MATLAB scripts to read the time series. There are three file types delivered as the FGS data: (1) Flux and Centroid (FLC) data, time series of star signal and centroid data; (2) Ancillary FGS Reference (AFR) data, a catalog of information about the observed stars in the FLC data; and (3) FGS Full-Frame Image (FGI) data, full-frame image snapshots of the FGS CCDs.

  4. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.

  5. Reducing Earth Topography Resolution for SMAP Mission Ground Tracks Using K-Means Clustering

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2013-01-01

    The K-means clustering algorithm is used to reduce Earth topography resolution for the SMAP mission ground tracks. As SMAP propagates in orbit, knowledge of the radar antenna footprints on Earth is required for the antenna misalignment calibration. Each antenna footprint contains a latitude and longitude location pair on the Earth surface. There are 400 pairs in one data set for the calibration model. It is computationally expensive to calculate the corresponding Earth elevation for each of these data pairs, so the antenna footprint resolution is reduced. Similar topographical data pairs are grouped together with the K-means clustering algorithm, and the resolution is reduced to the mean of each topographical cluster, called the cluster centroid. The corresponding Earth elevation for each cluster centroid is assigned to the entire group. Results show that 400 data points are reduced to 60 while still maintaining algorithm performance and computational efficiency. In this work, sensitivity analysis is also performed to show the trade-off between algorithm performance and computational efficiency as the number of cluster centroids and algorithm iterations is increased.
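The reduction described above can be sketched with a plain Lloyd-style k-means loop. The (lat, lon) points below are synthetic stand-ins for the mission's footprint pairs; the 400-to-60 sizes follow the abstract, everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic (lat, lon) footprint pairs, not SMAP data
footprints = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(400, 2))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd iterations: assign points to nearest centroid, then re-average."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):               # keep old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(footprints, k=60)   # 400 pairs -> 60 representatives
```

Each of the 60 centroids then needs only one elevation lookup, which is assigned to every footprint in its cluster.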

  6. Radio structure effects on the optical and radio representations of the ICRF

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva Neto, D. N.; Assafin, M.; Vieira Martins, R.

    Silva Neto et al. (2002) show that, comparing the standard radio positions of the ICRF Ext.1 sources (Ma et al. 1998) against their optical counterpart positions (Zacharias et al. 1999, Monet et al. 1998), a systematic pattern appears which depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the separation between the optical and radio centroids is found to be 7.9±1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the remaining explanation is a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.

  7. JASMINE data analysis

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Gouda, Naoteru; Yoshioka, Satoshi

    2015-08-01

    We are planning JASMINE (Japan Astrometric Satellite Mission for INfrared Exploration) as a series of missions: Nano-JASMINE, Small-JASMINE, and JASMINE. Nano-JASMINE data analysis will be performed in collaboration with the Gaia data analysis team. We apply the Gaia core processing software, AGIS, as the Nano-JASMINE core solution; its applicability has been confirmed by D. Michalik and the Gaia DPAC team. Converting telemetry data to AGIS input is the JASMINE team's task, and includes centroid calculation of the stellar images. The accuracy of Gaia is two orders of magnitude better than that of Nano-JASMINE, but these are the only two astrometric satellite missions performing global astrometry with CCD detectors, so Nano-JASMINE will have a role in calibrating Gaia data; bright-star centroiding is the most important science target. Small-JASMINE has a completely different observation strategy: it will carry out step-stair observations with about a million observations of each individual star. Sub-milliarcsecond centroid errors of individual stellar images will be reduced by two orders of magnitude, reaching 10 microarcsecond astrometric accuracy by applying the square-root-N law to a million observations. Various systematic noise sources must be estimated, modelled, and subtracted. Some statistical studies will be shown in this poster.

  8. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-04-01

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
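The Lloyd step at the heart of the CVT construction above can be sketched in two dimensions: each generator moves to the centroid of its Voronoi region (estimated here by Monte Carlo sampling of the unit square), and the CVT energy is non-increasing across iterations. All sizes and the sampling density are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.uniform(size=(20000, 2))   # Monte Carlo proxy for the domain
generators = rng.uniform(size=(8, 2))    # initial partition generators

def lloyd_step(generators, samples):
    """One Lloyd iteration: Voronoi assignment, CVT energy, centroid update."""
    d = np.linalg.norm(samples[:, None, :] - generators[None, :, :], axis=2)
    owner = d.argmin(axis=1)
    energy = (d[np.arange(len(samples)), owner] ** 2).mean()
    new_gen = generators.copy()
    for j in range(len(generators)):
        members = samples[owner == j]
        if len(members):                 # keep old generator if a region empties
            new_gen[j] = members.mean(axis=0)
    return new_gen, energy

energies = []
for _ in range(10):
    generators, e = lloyd_step(generators, samples)
    energies.append(e)
# on the fixed sample set, energies is monotonically non-increasing
```

Monotone decrease of the energy is the property the abstract relies on: both the assignment step and the centroid update can only lower (or preserve) the objective, so the generators converge toward compact, convex-region partitions.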

  9. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations, one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical, centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply, PI-FEP) method along with bisection sampling was summarized, which provides an accurate and fast convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications, to highlight the computational precision and accuracy, the rule of geometrical mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and protein dynamics on temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.

  10. Effect of the 1997 El Niño on the distribution of upper tropospheric cirrus

    NASA Astrophysics Data System (ADS)

    Massie, Steven; Lowe, Paul; Tie, Xuexi; Hervig, Mark; Thomas, Gary; Russell, James

    2000-09-01

    Geographical distributions of Halogen Occultation Experiment (HALOE) aerosol extinction data for 1993-1998 are analyzed in the troposphere and stratosphere at pressures between 121 and 46 hPa. The El Niño conditions of 1997 increased upper tropospheric cirrus over the mid-Pacific and decreased cirrus over Indonesia. Longitudinal centroids of cirrus in the Pacific and over Indonesia shifted eastward by 25° in the troposphere in 1997. Longitudinal centroids of aerosol in the lower stratosphere do not exhibit longitudinal shifts in 1997, indicating that the effects of El Niño upon equatorial particle distributions are confined to the troposphere. The correlation of the longitudinal centroids of outgoing longwave radiation and HALOE extinction confirms the spatial relationship between deep convective clouds and upper tropospheric cirrus. The number of cirrus events observed each year in 1993-1998 in the upper troposphere is quite similar for the region from the Indian Ocean to the mid-Pacific (30°S to 30°N, 50° to 240°E).
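A longitudinal centroid of this kind is simply an intensity-weighted mean longitude. The sketch below uses synthetic Gaussian occurrence profiles to illustrate the ~25° eastward shift the abstract reports; the profiles and widths are assumptions, not HALOE data.

```python
import numpy as np

lon = np.arange(0.0, 360.0, 5.0)                  # longitude grid, degrees east

# Synthetic cirrus-occurrence weights (illustrative, not HALOE extinction):
w_1996 = np.exp(-((lon - 120.0) / 30.0) ** 2)     # peak over Indonesia
w_1997 = np.exp(-((lon - 145.0) / 30.0) ** 2)     # peak shifted east in El Niño

# Weighted-mean (centroid) longitudes and their difference
c_1996 = (lon * w_1996).sum() / w_1996.sum()
c_1997 = (lon * w_1997).sum() / w_1997.sum()
shift = c_1997 - c_1996                           # ~25 degrees eastward
```

For distributions that straddle the 0°/360° seam a circular (vector-mean) centroid would be needed instead; the profiles here are chosen to stay well inside the grid.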

  11. THE REGULAR FOURIER MATRICES AND NONUNIFORM FAST-FOURIER TRANSFORMS. (R825225)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  12. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations proposed in the literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation. 4 tables. (RWR)

  13. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  14. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.

  15. Recognizing ovarian cancer from co-registered ultrasound and photoacoustic images

    NASA Astrophysics Data System (ADS)

    Alqasemi, Umar; Kumavor, Patrick; Aguirre, Andres; Zhu, Quing

    2013-03-01

    Unique features in co-registered ultrasound and photoacoustic images of ex vivo ovarian tissue are introduced, along with the hypotheses of how these features may relate to the physiology of tumors. The images are compressed with wavelet transform, after which the mean Radon transform of the photoacoustic image is computed and fitted with a Gaussian function to find the centroid of the suspicious area for shift-invariant recognition process. In the next step, 24 features are extracted from a training set of images by several methods; including features from the Fourier domain, image statistics, and the outputs of different composite filters constructed from the joint frequency response of different cancerous images. The features were chosen from more than 400 training images obtained from 33 ex vivo ovaries of 24 patients, and used to train a support vector machine (SVM) structure. The SVM classifier was able to exclusively separate the cancerous from the non-cancerous cases with 100% sensitivity and specificity. At the end, the classifier was used to test 95 new images, obtained from 37 ovaries of 20 additional patients. The SVM classifier achieved 76.92% sensitivity and 95.12% specificity. Furthermore, if we assume that recognizing one image as a cancerous case is sufficient to consider the ovary as malignant, then the SVM classifier achieves 100% sensitivity and 87.88% specificity.

  16. An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS.

    PubMed

    Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu

    2017-01-23

    Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying keystone transform over the whole received echo, and then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover, are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but also the nonlinear spatial-variances of the Doppler parameters can be balanced. At last, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.

  17. New mechanism of structuring associated with the quasi-merohedral twinning by an example of Ca{sub 1–x}La{sub x}F{sub 2+x} ordered solid solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maksimov, S. K., E-mail: maksimov-sk@comtv.ru; Maksimov, K. S., E-mail: kuros@rambler.ru; Sukhov, N. D.

    Merohedry is considered an inseparable property of atomic structures and is used in the refinement of structural data for the correct determination of compound structures. Transformation of faulty structures, driven by a decrease of the system's cumulative energy, leads to the generation of merohedral-type twinning. Ordering is accompanied by the origin of antiphase domains. If the ordering belongs to the CuAu type, it is accompanied by tetragonal distortions along different (100) directions. If a crystal consists of a mosaic of nanodimensional antiphase domains, the conjugation of antiphase domains with different tetragonality leads to monoclinic distortions, such that conjugated domains are distorted in mirror fashion. Such a system undergoes further transformation by means of quasi-merohedral twinning. As a result of quasi-merohedry, straight lines of lattices with different monoclinic distortions are transformed into coherent lattice broken lines, providing minimization of the cumulative energy. The structuring is controlled by the regularities of self-organization. However, the stochasticity of ordering predetermines the origin of areas where several domains with different tetragonality are in contact, which leads to faulty fields that impede the regular progress of structuring. The resulting crystal is structurally non-uniform; furthermore, the structural non-uniformity permits identifying the elements and stages of the process. However, there is no precondition preventing the origin of homogeneous states. The effect has been revealed in Ca{sub 1–x}La{sub x}F{sub 2+x} solid solution, but it can be expected that distortions of the regular alternation of ions, similar to antiphase domains, can be obtained under non-equilibrium conditions in other compounds, and a similar quasi-merohedry effect can falsify the results of structural analysis.

  18. 2-(1,2,3,4-Tetrahydro-1-naphthyl)imidazolium chloride monohydrate.

    PubMed

    Bruni, Bruno; Bartolucci, Gianluca; Ciattini, Samuele; Coran, Silvia

    2010-08-18

    In the title compound, C(13)H(15)N(2)(+)·Cl(-)·H(2)O, the ions and water molecules are connected by N-H⋯Cl, O-H⋯Cl, NH⋯Cl⋯HO, NH⋯Cl⋯HN and OH⋯Cl⋯HO interactions, forming discrete D(2) and D(2)(1)(3) chains, C(2)(1)(6) chains and R(4)(2)(8) rings, leading to a neutral two-dimensional network. The crystal structure is further stabilized by π-π stacking interactions [centroid-centroid distance = 3.652 (11) Å].

  19. A new efficient mixture screening design for optimization of media.

    PubMed

    Rispoli, Fred; Shah, Vishal

    2009-01-01

    Screening ingredients for the optimization of media is an important first step to reduce the many potential ingredients down to the vital few components. In this study, we propose a new method of screening for mixture experiments called the centroid screening design. Comparison of the proposed design with the Plackett-Burman, fractional factorial, simplex lattice, and modified mixture designs shows that the centroid screening design is the most efficient of all the designs in terms of the small number of experimental runs needed and its ability to detect high-order interactions among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
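For context, a centroid-style mixture design is built on the classical simplex-centroid idea: every non-empty subset of the q ingredients is blended in equal proportions. The sketch below generates those 2^q − 1 candidate blends; it illustrates the classical simplex-centroid construction, not necessarily the authors' exact screening procedure.

```python
from itertools import combinations

def simplex_centroid_points(q):
    """All 2**q - 1 equal-proportion blends of non-empty ingredient subsets."""
    pts = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            p = [0.0] * q
            for i in subset:
                p[i] = 1.0 / k          # equal share for each chosen ingredient
            pts.append(tuple(p))
    return pts

design = simplex_centroid_points(3)
# 7 blends: three pure components, three 50/50 binary mixes, one overall centroid
```

Each row is a valid mixture (proportions sum to 1), which is what distinguishes mixture screening designs from factorial-style screens such as Plackett-Burman.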

  20. An Element-Based Concurrent Partitioner for Unstructured Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Ding, Hong Q.; Ferraro, Robert D.

    1996-01-01

    A concurrent partitioner for partitioning unstructured finite element meshes on distributed memory architectures is developed. The partitioner uses an element-based partitioning strategy. Its main advantage over the more conventional node-based partitioning strategy is its modular programming approach to the development of parallel applications. The partitioner first partitions element centroids using a recursive inertial bisection algorithm. Elements and nodes then migrate according to the partitioned centroids, using a data request communication template for unpredictable incoming messages. Our scalable implementation is contrasted to a non-scalable implementation which is a straightforward parallelization of a sequential partitioner.
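The recursive inertial bisection step above can be sketched as follows: project the element centroids onto their principal axis of spread and split at the median projection, recursing on each half. The eigendecomposition and median split below are our illustrative choices, not the partitioner's exact implementation.

```python
import numpy as np

def inertial_bisection(centroids, depth):
    """Recursively bisect points at the median projection onto the principal axis."""
    if depth == 0:
        return [centroids]
    centered = centroids - centroids.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    axis = vecs[:, -1]                       # direction of largest spread
    proj = centered @ axis
    med = np.median(proj)
    return (inertial_bisection(centroids[proj <= med], depth - 1)
            + inertial_bisection(centroids[proj > med], depth - 1))

rng = np.random.default_rng(3)
element_centroids = rng.normal(size=(1024, 2))   # synthetic element centroids
parts = inertial_bisection(element_centroids, depth=3)   # 2**3 = 8 subdomains
```

The median split keeps the subdomains load-balanced, after which elements and nodes migrate to whichever partition owns their centroid.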

  1. (Carbonato-κO,O')bis(1,10-phenanthroline-κN,N')cobalt(III) nitrate monohydrate.

    PubMed

    Andaç, Omer; Yolcu, Zuhal; Büyükgüngör, Orhan

    2009-12-12

    The crystal structure of the title compound, [Co(CO(3))(C(12)H(8)N(2))(2)]NO(3)·H(2)O, consists of Co(III) complex cations, nitrate anions and uncoordinated water molecules. The Co(III) cation is chelated by a carbonate anion and two phenanthroline ligands in a distorted octahedral coordination geometry. A three-dimensional supramolecular structure is formed by O-H⋯O and C-H⋯O hydrogen bonding, C-H⋯π and aromatic π-π stacking [centroid-centroid distance = 3.995 (1) Å] interactions.

  2. Shallow conduit system at Kilauea Volcano, Hawaii, revealed by seismic signals associated with degassing bursts

    USGS Publications Warehouse

    Chouet, Bernard; Dawson, Phillip

    2011-01-01

    Eruptive activity at the summit of Kilauea Volcano, Hawaii, beginning in March, 2008 and continuing to the present time is characterized by episodic explosive bursts of gas and ash from a vent within Halemaumau Pit Crater. These bursts are accompanied by seismic signals that are well recorded by a broadband network deployed in the summit caldera. We investigate in detail the dimensions and oscillation modes of the source of a representative burst in the 1−10 s band. An extended source is realized by a set of point sources distributed on a grid surrounding the source centroid, where the centroid position and source geometry are fixed from previous modeling of very-long-period (VLP) data in the 10–50 s band. The source time histories of all point sources are obtained simultaneously through waveform inversion carried out in the frequency domain. Short-scale noisy fluctuations of the source time histories between adjacent sources are suppressed with a smoothing constraint, whose strength is determined through a minimization of the Akaike Bayesian Information Criterion (ABIC). Waveform inversions carried out for homogeneous and heterogeneous velocity structures both image a dominant source component in the form of an east trending dike with dimensions of 2.9 × 2.9 km. The dike extends ∼2 km west and ∼0.9 km east of the VLP centroid and spans the depth range 0.2–3.1 km. The source model for a homogeneous velocity structure suggests the dike is hinged at the source centroid where it bends from a strike E 27°N with northern dip of 85° west of the centroid, to a strike E 7°N with northern dip of 80° east of the centroid. The oscillating behavior of the dike is dominated by simple harmonic modes with frequencies ∼0.2 Hz and ∼0.5 Hz, representing the fundamental mode ν11 and first degenerate mode ν12 = ν21 of the dike. 
Although not strongly supported by data in the 1–10 s band, a north striking dike segment is required for enhanced compatibility with the model elaborated in the 10–50 s band. This dike provides connectivity between the east trending dike and the new vent within Halemaumau Pit Crater. Waveform inversions with a dual-dike model suggest dimensions of 0.7 × 0.7 km to 2.6 × 2.6 km for this segment. Further elaboration of the complex dike system under Halemaumau does not appear to be feasible with presently available data.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesi, Andrew; Lee, Byeongdu

    Herein, a general method to calculate the scattering functions of polyhedra, including both regular and semi-regular polyhedra, is presented. These calculations may be achieved by breaking a polyhedron into sets of congruent pieces, thereby reducing computation time by taking advantage of Fourier transforms and inversion symmetry. Each piece belonging to a set or subunit can be generated by either rotation or translation. Further, general strategies to compute truncated, concave and stellated polyhedra are provided. Using this method, the asymptotic behaviors of the polyhedral scattering functions are compared with that of a sphere. It is shown that, for a regular polyhedron, the form factor oscillation at high q is correlated with the face-to-face distance. In addition, polydispersity affects the Porod constant. The ideas presented herein will be important for the characterization of nanomaterials using small-angle scattering.

  4. Audio-Visual Biofeedback Does Not Improve the Reliability of Target Delineation Using Maximum Intensity Projection in 4-Dimensional Computed Tomography Radiation Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Wei, E-mail: wlu@umm.edu; Neuner, Geoffrey A.; George, Rohini

    2014-01-01

    Purpose: To investigate whether coaching patients' breathing would improve the match between ITV{sub MIP} (internal target volume generated by contouring in the maximum intensity projection scan) and ITV{sub 10} (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV{sub 10} and ITV{sub MIP}. The match between ITV{sub MIP} and ITV{sub 10} was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV{sub MIP} improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV{sub MIP} and ITV{sub 10} over FB. On average, ITV{sub MIP} underestimated ITV{sub 10} by 19%, 19%, and 21%, with centroid distance of 1.9, 2.3, and 1.7 mm and Dice coefficient of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV{sub MIP} did not correct for the mismatch between ITV{sub MIP} and ITV{sub 10}. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV{sub MIP} and ITV{sub 10}. In general, ITV{sub MIP} should be limited to lung cancers, and modification of ITV{sub MIP} in each phase of the 4DCT data set is recommended.

  5. MOOCs and Democratic Education

    ERIC Educational Resources Information Center

    Carver, Leland; Harrison, Laura M.

    2013-01-01

    Massive Open Online Courses (MOOCs) have entered the world of online education with a splash, and their potential to transform higher education is being widely hailed. Indeed, many involved in the creation, implementation, and facilitation of this new format regularly speak in terms of "revolution" and massive "disruption." If…

  6. Schwinger-variational-principle theory of collisions in the presence of multiple potentials

    NASA Astrophysics Data System (ADS)

    Robicheaux, F.; Giannakeas, P.; Greene, Chris H.

    2015-08-01

    A theoretical method for treating collisions in the presence of multiple potentials is developed by employing the Schwinger variational principle. The current treatment agrees with the local (regularized) frame transformation theory and extends its capabilities. Specifically, the Schwinger variational approach gives results without the divergences that need to be regularized in other methods. Furthermore, it provides a framework to identify the origin of these singularities and possibly improve the local frame transformation. We have used the method to obtain the scattering parameters for different confining potentials symmetric in x, y. The method is also used to treat photodetachment processes in the presence of various confining potentials, thereby highlighting effects of the infinitely many closed channels. Two general features predicted are the vanishing of the total photoabsorption probability at every channel threshold and the occurrence of resonances below the channel thresholds for negative scattering lengths. In addition, the case of negative-ion photodetachment in the presence of uniform magnetic fields is also considered where unique features emerge at large scattering lengths.

  7. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.

    1984-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295

  8. A cascade method for TFT-LCD defect detection

    NASA Astrophysics Data System (ADS)

    Yi, Songsong; Wu, Xiaojun; Yu, Zhiyang; Mo, Zhuoya

    2017-07-01

    In this paper, we propose a novel cascade detection algorithm which focuses on point and line defects on TFT-LCD. At the first step of the algorithm, we use the gray-level difference of the sub-image to segment the abnormal area. The second step is based on the phase-only transform (POT), which corresponds to the Discrete Fourier Transform (DFT) normalized by its magnitude. It can remove regularities such as texture and noise. After that, we improve the method of setting regions of interest (ROI) using edge segmentation and polar transformation. The algorithm has outstanding performance in both computation speed and accuracy. It can handle most defect-detection cases, including dark point, light point, dark line, etc.
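The phase-only transform step can be sketched as follows: dividing the DFT by its own magnitude whitens the spectrum, so energy concentrated in a few coefficients by regular texture is suppressed and an isolated defect dominates the inverse transform. This is a minimal illustration of the POT idea, not the paper's full cascade:

```python
import numpy as np

def phase_only_transform(img, eps=1e-8):
    """Phase-only transform: inverse DFT of the magnitude-normalized
    spectrum; regular texture is flattened, defects stand out."""
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F / (np.abs(F) + eps)))

# A regular stripe texture with one point defect
x = np.arange(64)
texture = np.sin(2 * np.pi * x / 8)[None, :] * np.ones((64, 1))
img = texture.copy()
img[30, 30] += 5.0            # point defect
resp = phase_only_transform(img)
```

The response peaks sharply at the defect position while the stripe pattern is almost completely removed.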

  9. Directionality fields generated by a local Hilbert transform

    NASA Astrophysics Data System (ADS)

    Ahmed, W. W.; Herrero, R.; Botey, M.; Hayran, Z.; Kurt, H.; Staliunas, K.

    2018-03-01

    We propose an approach based on a local Hilbert transform to design non-Hermitian potentials generating arbitrary vector fields of directionality, p⃗(r⃗), with desired shapes and topologies. We derive a local Hilbert transform to systematically build such potentials by modifying background potentials (being either regular or random, extended or localized). We explore particular directionality fields, for instance in the form of a focus to create sinks for probe fields (which could help to increase absorption at the sink), or to generate vortices in the probe fields. Physically, the proposed directionality fields provide a flexible mechanism for dynamical shaping and precise control over probe fields leading to novel effects in wave dynamics.

  10. Imaging ultrasonic dispersive guided wave energy in long bones using linear radon transform.

    PubMed

    Tran, Tho N H T; Nguyen, Kim-Cuong T; Sacchi, Mauricio D; Le, Lawrence H

    2014-11-01

    Multichannel analysis of dispersive ultrasonic energy requires a reliable mapping of the data from the time-distance (t-x) domain to the frequency-wavenumber (f-k) or frequency-phase velocity (f-c) domain. The mapping is usually performed with the classic 2-D Fourier transform (FT) with a subsequent substitution and interpolation via c = 2πf/k. The extracted dispersion trajectories of the guided modes lack the resolution in the transformed plane to discriminate wave modes. The resolving power associated with the FT is closely linked to the aperture of the recorded data. Here, we present a linear Radon transform (RT) to image the dispersive energies of the recorded ultrasound wave fields. The RT is posed as an inverse problem, which allows implementation of a regularization strategy to enhance the focusing power. We choose a Cauchy regularization for the high-resolution RT. Three forms of the Radon transform (adjoint, damped least-squares, and high-resolution) are described and compared with respect to robustness using simulated and cervine bone data. The RT also depends on the data aperture, but not as severely as does the FT. With the RT, the resolution of the dispersion panel could be improved up to around 300% over that of the FT. Among the Radon solutions, the high-resolution RT delineated the guided wave energy with much better imaging resolution (at least 110%) than the other two forms. The Radon operator can also accommodate unevenly spaced records. The results of the study suggest that the high-resolution RT is a valuable imaging tool to extract dispersive guided wave energies under limited aperture. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
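The damped least-squares form can be sketched on a toy grid with integer slopes, so that the moveouts stay on the time grid and the Radon operator can be written as an explicit matrix; the high-resolution variant described in the record replaces the simple damping term with a Cauchy-norm regularizer:

```python
import numpy as np

# Minimal damped least-squares linear Radon (tau-p) sketch.
nt, nx = 16, 8
slopes = [0, 1, 2]            # integer slopes keep cyclic shifts exact
npv = len(slopes)

def radon_matrix():
    """Explicit matrix of the cyclic linear Radon operator:
    d(t, x) = sum_p m(t - p*x, p)."""
    L = np.zeros((nt * nx, nt * npv))
    for ix in range(nx):
        for ip, p in enumerate(slopes):
            for tau in range(nt):
                L[((tau + p * ix) % nt) * nx + ix, tau * npv + ip] = 1.0
    return L

L = radon_matrix()
m_true = np.zeros(nt * npv)
m_true[3 * npv + 1] = 1.0     # one dipping event: slope 1, intercept tau = 3
d = L @ m_true
mu = 0.1                      # damping weight
# Damped least-squares solution: (L^T L + mu I) m = L^T d
m_hat = np.linalg.solve(L.T @ L + mu * np.eye(nt * npv), L.T @ d)
```

The event focuses back onto its true (tau, p) cell, illustrating how the inverse formulation concentrates dispersive energy.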

  11. Exposure Time Optimization for Highly Dynamic Star Trackers

    PubMed Central

    Wei, Xinguo; Tan, Wei; Li, Jian; Zhang, Guangjun

    2014-01-01

    Under highly dynamic conditions, the star-spots on the image sensor of a star tracker move across many pixels during the exposure time, which will reduce star detection sensitivity and increase star location errors. However, this kind of effect can be compensated well by setting an appropriate exposure time. This paper focuses on how exposure time affects the star tracker under highly dynamic conditions and how to determine the most appropriate exposure time for this case. Firstly, the effect of exposure time on star detection sensitivity is analyzed by establishing the dynamic star-spot imaging model. Then the star location error is deduced based on the error analysis of the sub-pixel centroiding algorithm. Combining these analyses, the effect of exposure time on attitude accuracy is finally determined. Some simulations are carried out to validate these effects, and the results show that there are different optimal exposure times for different angular velocities of a star tracker with a given configuration. In addition, the results of night sky experiments using a real star tracker agree with the simulation results. The summarized regularities in this paper should prove helpful in the system design and dynamic performance evaluation of the highly dynamic star trackers. PMID:24618776
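The sub-pixel centroiding step underlying the error analysis above can be sketched as an intensity-weighted centroid over a star-spot window. A simple stationary Gaussian spot is assumed here for illustration; the paper's dynamic star-spot imaging model is more elaborate:

```python
import numpy as np

def subpixel_centroid(window):
    """Intensity-weighted centroid of a star-spot window (sub-pixel)."""
    total = window.sum()
    ys, xs = np.indices(window.shape)
    return (ys * window).sum() / total, (xs * window).sum() / total

# Gaussian spot centred at (4.3, 4.6) on a 9x9 window (hypothetical spot)
ys, xs = np.indices((9, 9))
spot = np.exp(-(((ys - 4.3) ** 2 + (xs - 4.6) ** 2) / (2 * 1.0 ** 2)))
cy, cx = subpixel_centroid(spot)
```

For a well-sampled, noise-free spot the centroid recovers the true position to well under a tenth of a pixel; motion smear and shot noise, as analyzed in the record, degrade this.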

  12. Factors related to the joint probability of flooding on paired streams

    USGS Publications Warehouse

    Koltun, G.F.; Sherwood, J.M.

    1998-01-01

    The factors related to the joint probability of flooding on paired streams were investigated and quantified to provide information to aid in the design of hydraulic structures where the joint probability of flooding is an element of the design criteria. Stream pairs were considered to have flooded jointly at the design-year flood threshold (corresponding to the 2-, 10-, 25-, or 50-year instantaneous peak streamflow) if peak streamflows at both streams in the pair were observed or predicted to have equaled or exceeded the threshold on a given calendar day. Daily mean streamflow data were used as a substitute for instantaneous peak streamflow data to determine which flood thresholds were equaled or exceeded on any given day. Instantaneous peak streamflow data, when available, were used preferentially to assess flood-threshold exceedance. Daily mean streamflow data for each stream were paired with concurrent daily mean streamflow data at the other streams. Observed probabilities of joint flooding, determined for the 2-, 10-, 25-, and 50-year flood thresholds, were computed as the ratios of the total number of days when streamflows at both streams concurrently equaled or exceeded their flood thresholds (events) to the total number of days when streamflows at either stream equaled or exceeded its flood threshold (trials). A combination of correlation analyses, graphical analyses, and logistic-regression analyses was used to identify and quantify factors associated with the observed probabilities of joint flooding (event-trial ratios). The analyses indicated that the distance between drainage area centroids, the ratio of the smaller to larger drainage area, the mean drainage area, and the centroid angle adjusted 30 degrees were the basin characteristics most closely associated with the joint probability of flooding on paired streams in Ohio.
In general, the analyses indicated that the joint probability of flooding decreases with an increase in centroid distance and increases with increases in drainage area ratio, mean drainage area, and centroid angle adjusted 30 degrees. Logistic-regression equations were developed, which can be used to estimate the probability that streamflows at two streams jointly equal or exceed the 2-year flood threshold given that the streamflow at one of the two streams equals or exceeds the 2-year flood threshold. The logistic-regression equations are applicable to stream pairs in Ohio (and border areas of adjacent states) that are unregulated, free of significant urban influences, and have characteristics similar to those of the 304 gaged stream pairs used in the logistic-regression analyses. Contingency tables were constructed and analyzed to provide information about the bivariate distribution of floods on paired streams. The contingency tables showed that the percentage of trials in which both streams in the pair concurrently flood at identical recurrence-interval ranges generally increased as centroid distances decreased and was greatest for stream pairs with adjusted centroid angles greater than or equal to 60 degrees and drainage area ratios greater than or equal to 0.01. Also, as centroid distance increased, streamflow at one stream in the pair was more likely to be in a less than 2-year recurrence-interval range when streamflow at the second stream was in a 2-year or greater recurrence-interval range.
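The logistic-regression step can be illustrated with synthetic event/trial data in which joint flooding becomes less likely as centroid distance grows. The numbers below are hypothetical, not the Ohio data set, and the plain gradient-descent fit stands in for the study's formal logistic-regression analyses:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Gradient-descent logistic regression (illustrative sketch)."""
    X1 = np.hstack([np.ones((len(X), 1)), X])   # prepend intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))       # predicted event probability
        w -= lr * X1.T @ (p - y) / len(y)       # mean log-loss gradient
    return w

# Synthetic trials: probability of a joint flood decays with centroid distance
rng = np.random.default_rng(0)
dist = rng.uniform(0.0, 100.0, 400)             # hypothetical distances (km)
p_true = 1.0 / (1.0 + np.exp(-(2.0 - 0.05 * dist)))
y = (rng.uniform(size=400) < p_true).astype(float)
w = fit_logistic(dist[:, None] / 100.0, y)      # scale distance to [0, 1]
```

The fitted slope on distance is negative, mirroring the reported finding that joint-flood probability decreases with centroid distance.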

  13. Geophysical Evidence for Magma Intrusion across the Non-Transform Offset between the Famous and North Famous segments of The Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Giusti, M.; Dziak, R. P.; Maia, M.; Perrot, J.; Sukhovich, A.

    2017-12-01

    In August of 2010 an unusually large earthquake sequence of >700 events occurred at the Famous and North Famous segments (36.5-37°N) of the Mid-Atlantic Ridge (MAR), recorded by an array of five hydrophones moored on the MAR flanks. The swarm extended spatially >70 km across the two segments. The non-transform offset (NTO) separating the two segments, which is thought to act as a structural barrier, did not appear to impede or block the earthquakes' spatial distribution. Broadband acoustic energy (1-30 Hz) was also observed and accompanied the onset of the swarm, lasting >20 hours. A total of 18 earthquakes from the swarm were detected teleseismically, and Centroid-Moment Tensor (CMT) solutions were derived for four. The CMT solutions indicated three normal faulting events, and one non-double couple (explosion) event. The spatio-temporal distribution of the seismicity and broadband energy show evidence of two magma dike intrusions at the North Famous segment, with one intrusion crossing the NTO. This is the first evidence for an intrusion event detected on the MAR south of the Azores since the 2001 Lucky Strike intrusion. Gravimetric data were required to identify whether or not the Famous area is indeed comprised of two segments down to the level of the upper mantle. A high resolution gravity anomaly map of the two segments has been realized, based on a two-dimensional polygon model (Chapman, 1979), and will be compared to gravimetric data originated from the SUDACORES experiment (1998, Atalante ship, IFREMER research team). Combined with the earthquake observations, this gravity anomaly map should provide a better understanding of the geodynamic processes of this non-transform offset and of the deep magmatic system driving the August 2010 swarm.

  14. More Zernike modes' open-loop measurement in the sub-aperture of the Shack-Hartmann wavefront sensor.

    PubMed

    Zhu, Zhaoyi; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Cao, Zhaoliang; Hu, Lifa; Xuan, Li

    2016-10-17

    The centroid-based Shack-Hartmann wavefront sensor (SHWFS) treats the sampled wavefronts in the sub-apertures as planes, and the slopes of the sub-wavefronts are used to reconstruct the whole pupil wavefront. The problem is that the centroid method may fail to sense the high-order modes for strong turbulences, decreasing the precision of the whole pupil wavefront reconstruction. To solve this problem, we propose a sub-wavefront estimation method for SHWFS based on the focal plane sensing technique, by which more Zernike modes than the two slopes can be sensed in each sub-aperture. In this paper, the effects of the related parameters, such as the spot size, the phase offset and its set amplitude, and the number of pixels in each sub-aperture, on the sub-wavefront estimation method are analyzed, and these parameters are optimized to achieve high efficiency. After the optimization, open-loop measurement is realized. For the sub-wavefront sensing, we achieve a large linearity range of 3.0 rad RMS for Zernike modes Z2 and Z3, and 2.0 rad RMS for Zernike modes Z4 to Z6 when the number of pixels does not exceed 8 × 8 in each sub-aperture. The whole pupil wavefront reconstruction with the modified SHWFS is realized to analyze the improvements brought by the optimized sub-wavefront estimation method. Sixty-five Zernike modes can be reconstructed with a modified SHWFS containing only 7 × 7 sub-apertures, compared with only 35 modes by the centroid method, and the mean RMS errors of the residual phases are less than 0.2 rad2, which is lower than the 0.35 rad2 by the centroid method.

  15. Macro-level safety analysis of pedestrian crashes in Shanghai, China.

    PubMed

    Wang, Xuesong; Yang, Junguang; Lee, Chris; Ji, Zhuoran; You, Shikai

    2016-11-01

    Pedestrian safety has become one of the most important issues in the field of traffic safety. This study aims at investigating the association between pedestrian crash frequency and various predictor variables including roadway, socio-economic, and land-use features. The relationships were modeled using the data from 263 Traffic Analysis Zones (TAZs) within the urban area of Shanghai - the largest city in China. Since spatial correlation exists among the zonal-level data, Bayesian Conditional Autoregressive (CAR) models with seven different spatial weight features (i.e. (a) 0-1 first order, adjacency-based, (b) common boundary-length-based, (c) geometric centroid-distance-based, (d) crash-weighted centroid-distance-based, (e) land use type, adjacency-based, (f) land use intensity, adjacency-based, and (g) geometric centroid-distance-order) were developed to characterize the spatial correlations among TAZs. Model results indicated that the geometric centroid-distance-order spatial weight feature, which was introduced in macro-level safety analysis for the first time, outperformed all the other spatial weight features. Population was used as the surrogate for pedestrian exposure, and had a positive effect on pedestrian crashes. Other significant factors included length of major arterials, length of minor arterials, road density, average intersection spacing, percentage of 3-legged intersections, and area of TAZ. Pedestrian crashes were higher in TAZs with medium land use intensity than in TAZs with low and high land use intensity. Thus, higher priority should be given to TAZs with medium land use intensity to improve pedestrian safety. Overall, these findings can help transportation planners and managers understand the characteristics of pedestrian crashes and improve pedestrian safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
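A centroid-distance-based spatial weight matrix of the kind compared in the record can be sketched as an inverse-distance weighting between zone centroids. This is an illustration only; the study's CAR models and the seven weight definitions are more elaborate:

```python
import numpy as np

def centroid_distance_weights(centroids, row_normalize=True):
    """Inverse-distance spatial weight matrix from zone centroids,
    with zeros on the diagonal (a zone is not its own neighbour)."""
    c = np.asarray(centroids, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    with np.errstate(divide="ignore"):
        W = np.where(d > 0, 1.0 / d, 0.0)
    if row_normalize:
        W = W / W.sum(axis=1, keepdims=True)
    return W

# Three hypothetical TAZ centroids
W = centroid_distance_weights([(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)])
```

Nearer zones receive larger weights, which is the property the distance-based schemes exploit when modeling spatial correlation among TAZs.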

  16. Discrimination of different sub-basins on Tajo River based on water influence factor

    NASA Astrophysics Data System (ADS)

    Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.

    2009-04-01

    Numeric taxonomy has been applied to classify the waters of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations, each measuring 15 water variables, were used in this study. The groups were obtained by applying either a Euclidean distance among stations (distance classification) or a Euclidean distance between each station and the centroid estimated from them (centroid classification), varying the number of parameters and with or without variable typification. To compare the classifications, a log-log relation was established between the number of groups created and the distances, in order to select the best one. Centroid classification was observed to be more appropriate, following the natural constraints more logically than the minimum distance among stations. Variable typification does not improve the classification except when the centroid method is applied. Taking the ions and their sum as variables improved the classification. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups shows a certain variation in ion concentrations and ion ratios; however, the variation of each ion among groups differs from case to case. For the last group, regardless of the classification, the increase in all ions is general. Comparing the dendrograms, and the groups they originated, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on their water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher ombrogenic and lithogenic influence added.
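The two distance measures compared in the record can be illustrated in a few lines, with hypothetical station vectors standing in for the Tajo water-variable data:

```python
import numpy as np

# Hypothetical station vectors (two water variables per station)
stations = np.array([[1.0, 0.5],
                     [1.1, 0.6],
                     [5.0, 4.0]])

# Distance classification: Euclidean distances among all stations
pairwise = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=2)

# Centroid classification: distance of each station to the overall centroid
centroid = stations.mean(axis=0)
to_centroid = np.linalg.norm(stations - centroid, axis=1)
```

In this toy example the outlying third station is both far from the other two in the pairwise matrix and farthest from the centroid, the kind of structure either classification would pick up.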

  17. Cervicothoracic Lordosis Can Influence Outcome After Posterior Cervical Spine Surgery.

    PubMed

    Brasil, Albert Vincent Berthier; Fruett da Costa, Pablo Ramon; Vial, Antonio Delacy Martini; Barcellos, Gabriel da Costa; Zauk, Eduardo Balverdu; Worm, Paulo Valdeci; Ferreira, Marcelo Paglioli; Ferreira, Nelson Pires

    2018-01-01

    Previous studies on the correlation of cervical sagittal balance with improvement in quality of life showed significant results only for parameters of the anterior translation of the cervical spine (such as C2-C7 SVA). We test whether a new parameter, cervicothoracic lordosis, can predict clinical success in this type of surgery. The focus group comprised patients who underwent surgical treatment of cervical degenerative disk disease by the posterior approach, due to myelopathy, radiculopathy or a combination of both. Neurologic deficit was measured before and after surgery with the Nurick Scale, and postoperative quality of life with the physical and mental components of the SF-36 and with the NDI. Cervicothoracic lordosis and various sagittal balance parameters were also measured. Cervicothoracic lordosis was defined as the angle between: a) the line between the centroid of C2 and the centroid of C7; b) the line between the centroid of C7 and the centroid of T6. Correlations between postoperative quality of life and sagittal parameters were calculated. Twenty-nine patients between 27 and 78 years old were evaluated. Surgery types were simple decompression (laminectomy or laminoforaminotomy) (3 patients), laminoplasty (4 patients) and laminectomy with fusion (22 patients). Significant correlations were found for C2-C7 SVA and cervicothoracic lordosis. C2-C7 SVA correlated negatively with MCS (r=-0.445, p=0.026) and PCS (r=-0.405, p=0.045). Cervicothoracic lordosis correlated positively with MCS (r=0.554, p=0.004) and PCS (r=0.462, p=0.020) and negatively with NDI (r=-0.416, p=0.031). The parameter cervicothoracic lordosis correlates with improvement of quality of life after surgery for cervical degenerative disk disease by the posterior approach.
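The cervicothoracic lordosis parameter, as defined in the record, reduces to the angle between the C2-C7 and C7-T6 centroid lines; a sketch with hypothetical centroid coordinates:

```python
import numpy as np

def cervicothoracic_lordosis(c2, c7, t6):
    """Angle (degrees) at C7 between the C2->C7 and C7->T6 centroid
    lines, following the definition in the record."""
    u = np.asarray(c2, dtype=float) - np.asarray(c7, dtype=float)
    v = np.asarray(t6, dtype=float) - np.asarray(c7, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical sagittal-plane centroids (not patient data)
angle = cervicothoracic_lordosis(c2=(1.0, 10.0), c7=(0.0, 0.0), t6=(2.0, -12.0))
```

A nearly straight cervicothoracic alignment gives an angle close to 180°; greater kyphotic break at C7 lowers it.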

  18. Connecting optical and X-ray tracers of galaxy cluster relaxation

    NASA Astrophysics Data System (ADS)

    Roberts, Ian D.; Parker, Laura C.; Hlavacek-Larrondo, Julie

    2018-04-01

    Substantial effort has been devoted in determining the ideal proxy for quantifying the morphology of the hot intracluster medium in clusters of galaxies. These proxies, based on X-ray emission, typically require expensive, high-quality X-ray observations making them difficult to apply to large surveys of groups and clusters. Here, we compare optical relaxation proxies with X-ray asymmetries and centroid shifts for a sample of Sloan Digital Sky Survey clusters with high-quality, archival X-ray data from Chandra and XMM-Newton. The three optical relaxation measures considered are the shape of the member-galaxy projected velocity distribution - measured by the Anderson-Darling (AD) statistic, the stellar mass gap between the most-massive and second-most-massive cluster galaxy, and the offset between the most-massive galaxy (MMG) position and the luminosity-weighted cluster centre. The AD statistic and stellar mass gap correlate significantly with X-ray relaxation proxies, with the AD statistic being the stronger correlator. Conversely, we find no evidence for a correlation between X-ray asymmetry or centroid shift and the MMG offset. High-mass clusters (Mhalo > 1014.5 M⊙) in this sample have X-ray asymmetries, centroid shifts, and Anderson-Darling statistics which are systematically larger than for low-mass systems. Finally, considering the dichotomy of Gaussian and non-Gaussian clusters (measured by the AD test), we show that the probability of being a non-Gaussian cluster correlates significantly with X-ray asymmetry but only shows a marginal correlation with centroid shift. These results confirm the shape of the radial velocity distribution as a useful proxy for cluster relaxation, which can then be applied to large redshift surveys lacking extensive X-ray coverage.
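The Anderson-Darling statistic used above to classify velocity distributions as Gaussian or non-Gaussian can be sketched against a normal model with sample-estimated parameters. This is an illustration with synthetic velocities, not the paper's cluster data or its membership selection:

```python
import numpy as np
from math import erf, sqrt

def anderson_darling_normal(x):
    """Anderson-Darling statistic against a normal distribution with
    mean and variance estimated from the sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1])))

rng = np.random.default_rng(1)
relaxed = rng.normal(0.0, 500.0, 200)                      # Gaussian-like velocities
merging = np.concatenate([rng.normal(-800.0, 200.0, 100),
                          rng.normal(800.0, 200.0, 100)])  # bimodal velocities
ad_relaxed = anderson_darling_normal(relaxed)
ad_merging = anderson_darling_normal(merging)
```

The statistic stays small for the Gaussian-like sample and grows sharply for the bimodal one, which is why it serves as a relaxation proxy.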

  19. Crystal structure of a looped-chain CoII coordination polymer: catena-poly[[bis(nitrato-κO)cobalt(II)]bis[μ-bis(pyridin-3-ylmethyl)sulfane-κ2N:N'

    PubMed

    Moon, Suk-Hee; Seo, Joobeom; Park, Ki-Min

    2017-11-01

    The asymmetric unit of the title compound, [Co(NO3)2(C12H12N2S)2]n, contains a bis(pyridin-3-ylmethyl)sulfane (L) ligand, an NO3- anion and half a CoII cation, which lies on an inversion centre. The CoII cation is six-coordinated, being bound to four pyridine N atoms from four symmetry-related L ligands. The remaining coordination sites are occupied by two O atoms from two symmetry-related nitrate anions in a monodentate manner. Thus, the CoII centre adopts a distorted octahedral geometry. Two symmetry-related L ligands are connected by two symmetry-related CoII cations, forming a 20-membered cyclic dimer, in which the CoII atoms are separated by 10.2922 (7) Å. The cyclic dimers are connected to each other by sharing CoII atoms, giving rise to the formation of an infinite looped chain propagating along the [101] direction. Intermolecular C-H⋯π (H⋯ring centroid = 2.89 Å) interactions between one pair of corresponding L ligands and C-H⋯O hydrogen bonds between the L ligands and the nitrate anions occur in the looped chain. In the crystal, adjacent looped chains are connected by intermolecular π-π stacking interactions [centroid-to-centroid distance = 3.8859 (14) Å] and C-H⋯π hydrogen bonds (H⋯ring centroid = 2.65 Å), leading to the formation of layers parallel to (101). These layers are further connected through C-H⋯O hydrogen bonds between the layers, resulting in the formation of a three-dimensional supramolecular architecture.

  20. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm consists of two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
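The linear-time shrinkage operation mentioned in the per-iteration cost is the familiar soft-thresholding step used by ADAL/ADMM-type TV solvers; a minimal sketch:

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (shrinkage): shrinks each entry toward zero
    by tau, zeroing anything smaller than tau in magnitude."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

v = shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 1.0)
# -> [-1.0, 0.0, 0.0, 0.0, 2.0]
```

Applied to image gradients inside the augmented Lagrangian iterations, this is what suppresses small (noisy) variations while preserving large jumps, i.e. edges.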

  1. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  2. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  3. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
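The Green's-function convolution by FFT described in these records amounts to zero-padded linear convolution on the regular grid; a 2-D sketch, with an arbitrary small kernel standing in for the sampled Green's function:

```python
import numpy as np

def fft_convolve2d(field, kernel):
    """Linear (zero-padded) 2-D convolution via real FFTs: pad both
    arrays to the full output size so cyclic wraparound cannot mix
    opposite edges of the grid."""
    s0 = field.shape[0] + kernel.shape[0] - 1
    s1 = field.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(field, (s0, s1))
    K = np.fft.rfft2(kernel, (s0, s1))
    return np.fft.irfft2(F * K, (s0, s1))

# A unit source at (1, 2) convolved with a stand-in 3x3 kernel
field = np.zeros((4, 4)); field[1, 2] = 1.0
kernel = np.arange(9.0).reshape(3, 3)
out = fft_convolve2d(field, kernel)
```

For a single unit source the output simply reproduces the kernel at the source location, which is the behaviour exploited when evaluating neighboring interactions over many finest-level boxes at once.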

  4. Dancing with Black Holes

    NASA Astrophysics Data System (ADS)

    Aarseth, S. J.

    2008-05-01

    We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.

  5. Biofilm-Growing Bacteria Involved in the Corrosion of Concrete Wastewater Pipes: Protocols for Comparative Metagenomic Analyses

    EPA Science Inventory

    Advances in high-throughput next-generation sequencing (NGS) technology for direct sequencing of environmental DNA (i.e. shotgun metagenomics) is transforming the field of microbiology. NGS technologies are now regularly being applied in comparative metagenomic studies, which pr...

  6. On the domain of the Nelson Hamiltonian

    NASA Astrophysics Data System (ADS)

    Griesemer, M.; Wünsch, A.

    2018-04-01

    The Nelson Hamiltonian is unitarily equivalent to a Hamiltonian defined through a closed, semibounded quadratic form, the unitary transformation being explicitly known and due to Gross. In this paper, we study the mapping properties of the Gross-transform in order to characterize the regularity properties of vectors in the form domain of the Nelson Hamiltonian. Since the operator domain is a subset of the form domain, our results apply to vectors in the domain of the Hamiltonian as well. This work is a continuation of our previous work on the Fröhlich Hamiltonian.

  7. A parallel VLSI architecture for a digital filter using a number theoretic transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1983-01-01

    The advantages of a very large scale integration (VLSI) architecture for implementing a digital filter using Fermat number transforms (FNTs) are the following: It requires no multiplication; only additions and bit rotations are needed. It alleviates the usual dynamic-range limitation for long-sequence FNTs. It utilizes the FNT and inverse FNT circuits 100% of the time. The lengths of the input data and filter sequences can be arbitrary and different. It is regular, simple, and expandable, and as a consequence suitable for VLSI implementation.
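A Fermat number transform can be sketched in software to show why hardware needs only additions and bit rotations: the root of unity is 2, so every twiddle multiplication is a bit shift modulo the Fermat number. The sketch below uses F_3 = 257 with plain modular arithmetic (and a slow O(N²) transform) purely for clarity:

```python
# Fermat number transform sketch: cyclic convolution of integer
# sequences modulo F_3 = 2**8 + 1 = 257, where 2 is a 16th root of unity.
M = 257
ROOT = 2          # 2**16 = 1 (mod 257), 2**8 = -1 (mod 257)
N = 16

def fnt(x, root):
    """Length-16 number-theoretic transform modulo 257."""
    return [sum(x[n] * pow(root, k * n, M) for n in range(N)) % M
            for k in range(N)]

def cyclic_convolve_fnt(a, b):
    """Exact integer cyclic convolution via forward/inverse FNT."""
    A, B = fnt(a, ROOT), fnt(b, ROOT)
    C = [(x * y) % M for x, y in zip(A, B)]
    inv_root = pow(ROOT, M - 2, M)     # modular inverse of the root
    inv_n = pow(N, M - 2, M)           # modular inverse of the length
    return [(v * inv_n) % M for v in fnt(C, inv_root)]

a = [1, 2, 3] + [0] * 13
b = [4, 5] + [0] * 14
c = cyclic_convolve_fnt(a, b)
# -> [4, 13, 22, 15, 0, 0, ...], the exact convolution of [1,2,3] and [4,5]
```

Because all arithmetic is exact modular integer arithmetic, there is no round-off error, which is the property that makes FNT filters attractive alongside the multiplication-free hardware.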

  8. Finding False Positives Planet Candidates Due To Background Eclipsing Binaries in K2

    NASA Astrophysics Data System (ADS)

    Mullally, Fergal; Thompson, Susan E.; Coughlin, Jeffrey; DAVE Team

    2016-06-01

    We adapt the difference image centroid approach, used for finding background eclipsing binaries, to vet K2 planet candidates. Difference image centroids were used with great success to vet planet candidates in the original Kepler mission, where the source of a transit could be identified by subtracting images of out-of-transit cadences from in-transit cadences. To account for K2's roll pattern, we reconstruct out-of-transit images from cadences that are nearby in both time and spacecraft roll angle. We describe the method and discuss some K2 planet candidates which this method suggests are false positives.
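The difference-image centroid idea can be sketched as follows: subtracting the in-transit image from the out-of-transit image leaves only the flux lost during transit, and the centroid of that difference marks the true source of the dimming. The frames below are toy data, not K2 pixels:

```python
import numpy as np

def difference_image_centroid(in_transit, out_of_transit):
    """Centroid of the difference image (out-of-transit minus
    in-transit); the flux lost in transit localizes the source."""
    diff = out_of_transit - in_transit
    ys, xs = np.indices(diff.shape)
    return (ys * diff).sum() / diff.sum(), (xs * diff).sum() / diff.sum()

# Two stars on one channel; only the fainter star at (2, 7) dims in transit
out_img = np.zeros((10, 10))
out_img[5, 4] = 100.0          # target star
out_img[2, 7] = 50.0           # background star
in_img = out_img.copy()
in_img[2, 7] = 45.0            # 10% transit depth on the background star
cy, cx = difference_image_centroid(in_img, out_img)
```

The difference centroid lands on the background star rather than the brighter target, exposing the false positive; the K2-specific step in the record is reconstructing comparable out-of-transit frames across the spacecraft's roll pattern.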

  9. 4,4'-Bipyridine-pyroglutamic acid (1/2).

    PubMed

    Arman, Hadi D; Kaulgud, Trupta; Tiekink, Edward R T

    2009-10-31

    In the title co-crystal, C(10)H(8)N(2)·2C(5)H(7)NO(3), the 4,4'-bipyridine mol-ecule [dihedral angle between the pyridine rings = 36.33 (11)°] accepts O-H⋯N hydrogen bonds from the two pyroglutamic (pga) acid mol-ecules. The pga mol-ecules at each end of the trimeric aggregate self-associate via centrosymmetric eight-membered amide {⋯HNCO}(2) synthons, so that the crystal structure comprises one-dimensional supra-molecular chains propagating in [13]. C-H⋯O and π-π stacking inter-actions [centroid-centroid separation = 3.590 (2) Å] consolidate the structure.

  10. A centroid molecular dynamics study of liquid para-hydrogen and ortho-deuterium.

    PubMed

    Hone, Tyler D; Voth, Gregory A

    2004-10-01

    Centroid molecular dynamics (CMD) is applied to the study of collective and single-particle dynamics in liquid para-hydrogen at two state points and liquid ortho-deuterium at one state point. The CMD results are compared with the results of classical molecular dynamics, quantum mode coupling theory, a maximum entropy analytic continuation approach, pair-product forward-backward semiclassical dynamics, and available experimental results. The self-diffusion constants are in excellent agreement with the experimental measurements for all systems studied. Furthermore, it is shown that the method is able to adequately describe both the single-particle and collective dynamics of quantum liquids. (c) 2004 American Institute of Physics.

  11. Centroid-moment tensor solutions for October-December 2000

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Ekström, G.; Maternovskaya, N. N.

    2003-04-01

    Centroid-moment tensor solutions are presented for 263 earthquakes that occurred during the fourth quarter of 2000. The solutions are obtained using corrections for aspherical earth structure represented by the whole mantle shear velocity model SH8/U4L8 of Dziewonski and Woodward [A.M. Dziewonski, R.L. Woodward, Acoustic imaging at the planetary scale, in: H. Emert, H.-P. Harjes (Eds.), Acoustical Imaging, Plenum Press, New York, vol. 19, 1992, pp. 785-797]. The model of anelastic attenuation of Durek and Ekström [Bull. Seism. Soc. Am. 86 (1996) 144] is used to predict the decay of the waveforms.

  12. Correlation Techniques as Applied to Pose Estimation in Space Station Docking

    NASA Technical Reports Server (NTRS)

    Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-01-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed to each connecting module at carefully surveyed positions. The appearance of a subset of spots essentially must form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th of a pixel in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow, obscuration and lighting irregularity compensation are discussed.
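
    As a hedged illustration of the correlation approach (a 1-D sketch under simplifying assumptions, not the flight code), one can cross-correlate a measured spot profile with a synthetic model of known centration and refine the discrete correlation peak to sub-pixel precision by quadratic interpolation:

```python
import numpy as np

def subpixel_peak(c, i):
    """Quadratic (three-point) interpolation around discrete peak index i."""
    denom = c[i - 1] - 2.0 * c[i] + c[i + 1]
    return i + 0.5 * (c[i - 1] - c[i + 1]) / denom

def correlate_centroid(profile, template):
    """Cross-correlate a measured spot profile with a synthetic template
    centred in its own window; return the sub-pixel correlation peak."""
    c = np.correlate(profile, template, mode="same")
    i = int(np.argmax(c))
    return float(subpixel_peak(c, i))

# Gaussian spot centred at 20.3 px; synthetic template centred at 20.0 px.
x = np.arange(41, dtype=float)
spot = np.exp(-0.5 * ((x - 20.3) / 2.0) ** 2)
template = np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2)
est = correlate_centroid(spot, template)
print(est)  # close to 20.3
```

    Because the template is centred in its window, the correlation peak index directly estimates the spot centre.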

  13. Instability analysis of charges trapped in the oxide of metal-ultra thin oxide-semiconductor structures

    NASA Astrophysics Data System (ADS)

    Aziz, A.; Kassmi, K.; Maimouni, R.; Olivié, F.; Sarrabayrouse, G.; Martinez, A.

    2005-09-01

    In this paper, we present theoretical and experimental results on the influence of a charge trapped in the ultra-thin oxide of metal/ultra-thin oxide/semiconductor (MOS) structures on the I(Vg) current-voltage characteristics when the conduction is of the Fowler-Nordheim (FN) tunneling type. The charge, which is negative, is trapped near the cathode (metal/oxide interface) after constant-current injection from the metal (Vg<0). Of particular interest is the influence on the ΔVg(Vg) shift over the whole I(Vg) characteristic at high field (greater than the injection field, >12.5 MV/cm). It is shown that the charge centroid varies linearly with the voltage Vg. The behavior at low field (<12.5 MV/cm) is analyzed in the reference [A. Aziz, K. Kassmi, Ka. Kassmi, F. Olivié, Semicond. Sci. Technol. 19, 877 (2004)], which considers that the trapped-charge centroid is fixed. The results obtained make it possible to analyze the influence of the injected charge and the applied field on the centroid position of the trapped charge, and to highlight the charge instability in the ultra-thin oxide of MOS structures.

  14. Evidence of Non-Coincidence between Radio and Optical Positions of ICRF Sources.

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva, D. N.; Assafin, M.; Vieira Martins, R.

    2003-11-01

    Silva Neto et al. (SNAAVM: 2002) show that when the standard radio positions of the ICRF Ext1 sources (Ma et al., 1998) are compared against their optical counterpart positions (ZZHJVW: Zacharias et al., 1999; USNO A2.0: Monet et al., 1998), a systematic pattern appears that depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the offset between the optical and radio centroids is found to be 7.9 +/- 1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the only remaining explanation is a chance result. This paper summarizes the several statistical tests used to rule out the chance explanation.

  15. Centroid stabilization in alignment of FOA corner cube: designing of a matched filter

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul; Wilhelmsen, Karl; Roberts, Randy; Leach, Richard; Miller Kamm, Victoria; Ngo, Tony; Lowe-Webb, Roger

    2015-02-01

    The current automation of image-based alignment of NIF high-energy laser beams provides the capability of executing multiple target shots per day. An important aspect of performing multiple shots in a day is reducing the additional time spent aligning specific beams due to perturbations in those beam images. One such alignment is beam centration through the second- and third-harmonic generating crystals in the final optics assembly (FOA), which employs two retro-reflecting corner cubes to represent the beam center. The FOA houses the frequency conversion crystals for third-harmonic generation as the beams enter the target chamber. Beam-to-beam variations and systematic beam changes over time in the FOA corner-cube images can lead to a reduction in accuracy as well as increased convergence durations for the template-based centroid detector. This work presents a systematic approach to maintaining FOA corner-cube centroid templates so that stable position estimation is achieved, leading to fast convergence of the alignment control loops. In the matched-filtering approach, a template is designed from the most recent images taken in the last 60 days. The results show that the new filter reduces the divergence of the position estimates for FOA images.

  16. Ellipsoids for anomaly detection in remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Grosklos, Guenchik; Theiler, James

    2015-05-01

    For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we will consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
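
    A minimal sketch of the traditional baseline the paper compares against: estimate the centroid and covariance from the data, score each sample by its squared Mahalanobis distance (the ellipsoidal contour it lies on), and flag the largest scores as anomalies. The synthetic data below are illustrative stand-ins for spectral pixels.

```python
import numpy as np

def mahalanobis_sq(X, mu, cov):
    """Squared Mahalanobis distance of each row of X from the ellipsoid
    with centroid mu and shape cov (the classic RX anomaly statistic)."""
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)

rng = np.random.default_rng(0)
background = rng.normal(size=(2000, 3))       # Gaussian clutter stand-in
anomaly = np.array([[6.0, 6.0, 6.0]])         # far outside the ellipsoid
X = np.vstack([background, anomaly])

mu = X.mean(axis=0)                           # sample centroid
cov = np.cov(X, rowvar=False)                 # sample covariance
scores = mahalanobis_sq(X, mu, cov)
print(int(np.argmax(scores)))  # → 2000 (the appended anomaly)
```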

  17. An X-ray method for detecting substructure in galaxy clusters - Application to Perseus, A2256, Centaurus, Coma, and Sersic 40/6

    NASA Technical Reports Server (NTRS)

    Mohr, Joseph J.; Fabricant, Daniel G.; Geller, Margaret J.

    1993-01-01

    We use the moments of the X-ray surface brightness distribution to constrain the dynamical state of a galaxy cluster. Using X-ray observations from the Einstein Observatory IPC, we measure the first moment FM, the ellipsoidal orientation angle, and the axial ratio at a sequence of radii in the cluster. We argue that a significant variation in the image centroid FM as a function of radius is evidence for a nonequilibrium feature in the intracluster medium (ICM) density distribution. In simple terms, centroid shifts indicate that the center of mass of the ICM varies with radius. This variation is a tracer of continuing dynamical evolution. For each cluster, we evaluate the significance of variations in the centroid of the IPC image by computing the same statistics on an ensemble of simulated cluster images. In producing these simulated images we include X-ray point source emission, telescope vignetting, Poisson noise, and characteristics of the IPC. Application of this new method to five Abell clusters reveals that the core of each one has significant substructure. In addition, we find significant variations in the orientation angle and the axial ratio for several of the clusters.
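
    The centroid-shift idea can be sketched as follows (a simplified illustration with made-up brightness values, not the authors' IPC analysis): compute the flux-weighted first moment inside apertures of increasing radius and look for a drift, as an off-center sub-clump would produce.

```python
import numpy as np

def centroid_vs_radius(img, center, radii):
    """Flux-weighted first moment of the surface brightness inside
    circular apertures of increasing radius."""
    rr, cc = np.indices(img.shape)
    r = np.hypot(rr - center[0], cc - center[1])
    cents = []
    for R in radii:
        m = r <= R
        w = img[m]
        cents.append(((rr[m] * w).sum() / w.sum(),
                      (cc[m] * w).sum() / w.sum()))
    return cents

# Main "cluster" at (32, 32) plus a fainter sub-clump at (32, 48).
img = np.zeros((64, 64))
rr, cc = np.indices(img.shape)
img += 100.0 * np.exp(-0.5 * ((rr - 32) ** 2 + (cc - 32) ** 2) / 9.0)
img += 30.0 * np.exp(-0.5 * ((rr - 32) ** 2 + (cc - 48) ** 2) / 9.0)
cents = centroid_vs_radius(img, (32, 32), [5, 10, 20, 30])
print([round(c[1], 2) for c in cents])   # column centroid drifts outward
```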

  18. Transition sum rules in the shell model

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Johnson, Calvin W.

    2018-03-01

    Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, and we compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
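
    For a discrete strength function, the relation stated above, centroid = EWSR / NEWSR, reduces to a weighted average of transition energies; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def strength_centroid(energies, strengths):
    """Centroid (average transition energy) = EWSR / NEWSR for a
    discrete strength distribution."""
    newsr = np.sum(strengths)              # non-energy-weighted sum rule
    ewsr = np.sum(energies * strengths)    # energy-weighted sum rule
    return float(ewsr / newsr)

# Hypothetical toy strength distribution (arbitrary units).
E = np.array([1.0, 2.0, 3.0])
B = np.array([0.5, 1.0, 0.5])
c = strength_centroid(E, B)
print(c)  # → 2.0
```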

  19. Subverting the Hegemony of Risk: Vulnerability and Transformation among Australian Show Children

    ERIC Educational Resources Information Center

    Danaher, P. A.; Danaher, Geoff; Moriarty, Beverley

    2007-01-01

    Background: Australian show people traverse extensive coastal and inland circuits in eastern and northern Australia, bringing the delights of "sideshow alley" to annual agricultural shows. The show people's mobility for most of the school year makes it difficult for their school-age children to attend "regular" schools…

  20. Geometric quadratic stochastic operator on countable infinite set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganikhodjaev, Nasir; Hamzah, Nur Zatul Akmar

    2015-02-03

    In this paper we construct the family of Geometric quadratic stochastic operators defined on the countable sample space of nonnegative integers and investigate their trajectory behavior. Such operators can be reinterpreted in terms of an evolutionary operator of a free population. We show that Geometric quadratic stochastic operators are regular transformations.

  1. The Influence of Surface Morphology and Diffraction Resolution of Canavalin Crystals

    NASA Technical Reports Server (NTRS)

    Plomp, M.; Thomas, B. R.; Day, J. S.; McPherson, A.; Chernov, A. A.; Malkin, A.

    2003-01-01

    Canavalin crystals grown from material purified and not purified by high-performance liquid chromatography were studied by atomic force microscopy and X-ray diffraction. After purification, resolution improved from 2.55 Å to 2.22 Å, and jagged isotropic spiral steps transformed into regular, well-polygonized steps.

  2. Pituitary tumor-transforming gene 1 regulates the patterning of retinal mosaics

    PubMed Central

    Keeley, Patrick W.; Zhou, Cuiqi; Lu, Lu; Williams, Robert W.; Melmed, Shlomo; Reese, Benjamin E.

    2014-01-01

    Neurons are commonly organized as regular arrays within a structure, and their patterning is achieved by minimizing the proximity between like-type cells, but molecular mechanisms regulating this process have, until recently, been unexplored. We performed a forward genetic screen using recombinant inbred (RI) strains derived from two parental A/J and C57BL/6J mouse strains to identify genomic loci controlling spacing of cholinergic amacrine cells, a subclass of retinal interneurons. We found conspicuous variation in mosaic regularity across these strains and mapped a sizeable proportion of that variation to a locus on chromosome 11 that was subsequently validated with a chromosome substitution strain. Using a bioinformatics approach to narrow the list of potential candidate genes, we identified pituitary tumor-transforming gene 1 (Pttg1) as the most promising. Expression of Pttg1 was significantly different between the two parental strains and correlated with mosaic regularity across the RI strains. We identified a seven-nucleotide deletion in the Pttg1 promoter in the C57BL/6J mouse strain and confirmed a direct role for this motif in modulating Pttg1 expression. Analysis of Pttg1 KO mice revealed a reduction in the mosaic regularity of cholinergic amacrine cells, as well as horizontal cells, but not in two other retinal cell types. Together, these results implicate Pttg1 in the regulation of homotypic spacing between specific types of retinal neurons. The genetic variant identified creates a binding motif for the transcriptional activator protein 1 complex, which may be instrumental in driving differential expression of downstream processes that participate in neuronal spacing. PMID:24927528

  3. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of determining the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize the integral equation by the trapezoidal rule, obtaining a severely ill-conditioned linear system that is sensitive to disturbances in the data: even a tiny error in the right-hand side causes large oscillations in the solution, so good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
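
    A hedged sketch of the approach: discretizing a smoothing (Fredholm-type) kernel gives an ill-conditioned matrix, and Tikhonov regularization replaces the unstable solve with the damped normal equations (A^T A + αI)x = A^T b. The kernel, noise level, and α below are illustrative choices, not the paper's.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha ||x||^2 via the damped
    normal equations (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Discretized smoothing kernel: a severely ill-conditioned matrix.
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-20.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(np.pi * t)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.normal(size=n)     # slightly noisy data

x_naive = np.linalg.solve(A + 1e-12 * np.eye(n), b)  # noise blows up
x_reg = tikhonov_solve(A, b, alpha=1e-6)
print(np.linalg.norm(x_naive - x_true) > np.linalg.norm(x_reg - x_true))
```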

  4. Advanced morphological analysis of patterns of thin anodic porous alumina

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toccafondi, C.; Istituto Italiano di Tecnologia, Department of Nanostructures, Via Morego 30, Genova I 16163; Stępniowski, W.J.

    2014-08-15

    Different conditions of fabrication of thin anodic porous alumina on glass substrates have been explored, obtaining two sets of samples with varying pore density and porosity, respectively. The patterns of pores have been imaged by high resolution scanning electron microscopy and analyzed by innovative methods. The regularity ratio has been extracted from radial profiles of the fast Fourier transforms of the images. Additionally, the Minkowski measures have been calculated. It was first observed that the regularity ratio averaged across all directions is properly corrected by the coefficient previously determined in the literature. Furthermore, the angularly averaged regularity ratio for the thin porous alumina made during short single-step anodizations is lower than that of hexagonal patterns of pores, such as those of thick porous alumina from aluminum electropolishing and two-step anodization. Therefore, the regularity ratio represents a reliable measure of pattern order. At the same time, the lower angular spread of the regularity ratio shows that disordered porous alumina is more isotropic. Within each set, when changing either pore density or porosity, both regularity and isotropy remain rather constant, showing consistent fabrication quality of the experimental patterns. Minor deviations are tentatively discussed with the aid of the Minkowski measures, and the slight decrease in both regularity and isotropy for the final data-points of the porosity set is ascribed to excess pore opening and consequent pore merging. - Highlights: • Thin porous alumina is partly self-ordered and pattern analysis is required. • Regularity ratio is often misused: we fix the averaging and consider its spread. • We also apply the mathematical tool of Minkowski measures, new in this field. • Regularity ratio shows pattern isotropy and Minkowski helps in assessment. • General agreement with perfect artificial patterns confirms the good manufacturing.

  5. Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG.

    PubMed

    Hauk, O; Keil, A; Elbert, T; Müller, M M

    2002-01-30

    We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.

  6. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.

    PubMed

    Hamilton, Sarah J; Mueller, J L; Alsaker, M

    2017-02-01

    Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.

  7. Electron-beam-charged dielectrics: Internal charge distribution

    NASA Technical Reports Server (NTRS)

    Beers, B. L.; Pine, V. W.

    1981-01-01

    Theoretical calculations of an electron transport model of the charging of dielectrics due to electron bombardment are compared to measurements of internal charge distributions. The emphasis is on the distribution in Teflon. The position of the charge centroid as a function of time is not monotonic: it first moves deeper into the material and then moves back near to the surface. In most time regimes of interest, the charge distribution is not unimodal, but instead has two peaks. The location of the centroid near saturation is a function of the incident current density. While the qualitative comparison of theory and experiment is reasonable, quantitative comparison shows discrepancies of as much as a factor of two.

  8. Ethyl 4,4''-difluoro-5'-methoxy-1,1':3',1''-terphenyl-4'-carboxylate.

    PubMed

    Fun, Hoong-Kun; Chia, Tze Shyang; Samshuddin, S; Narayana, B; Sarojini, B K

    2012-01-01

    In the title compound, C(22)H(18)F(2)O(3), the two fluoro-substituted rings form dihedral angles of 25.89 (15) and 55.00 (12)° with the central benzene ring. The ethoxy group in the molecule is disordered over two positions with a site-occupancy ratio of 0.662 (7):0.338 (7). In the crystal, molecules are linked by C-H⋯O hydrogen bonds into chains along the a axis. The crystal packing is further stabilized by C-H⋯π and π-π interactions, with centroid-centroid distances of 3.8605 (15) Å.

  9. (S)-N-[1-(5-Benzylsulfanyl-1,3,4-oxadiazol-2-yl)-2-phenylethyl]-4-methylbenzenesulfonamide.

    PubMed

    Syed, Tayyaba; Hameed, Shahid; Jones, Peter G

    2011-11-01

    The title compound, C(24)H(23)N(3)O(3)S(2), crystallizes with two independent molecules in the asymmetric unit. They differ essentially in the orientation of the tolyl rings, between which there is π-π stacking (centroid-centroid distance = 3.01 Å). The absolute configuration was confirmed by the determination of the Flack parameter [x = 0.008 (9)]. In the crystal, molecules are connected by two classical N-H⋯N hydrogen bonds and two weak but very short C-H⋯O(sulfonyl) interactions, forming layers lying parallel to the bc plane.

  10. Centroid-moment tensor solutions for January-March, 2000

    NASA Astrophysics Data System (ADS)

    Dziewonski, A. M.; Ekström, G.; Maternovskaya, N. N.

    2000-10-01

    Centroid-moment tensor solutions are presented for 250 earthquakes that occurred during the first quarter of 2000. The solutions are obtained using corrections for aspherical earth structure represented by the whole mantle shear velocity model SH8/U4L8 of Dziewonski and Woodward [Dziewonski, A.M., Woodward, R.L., 1992. Acoustic imaging at the planetary scale. In: Emert, H., Harjes, H.-P. (Eds.), Acoustical Imaging. Plenum Press, Reading MA, Vol. 19, pp. 785-797]. A model of anelastic attenuation of Durek and Ekström [Durek, J.J., Ekström, G., 1996. Bull. Seism. Soc. Am. 86, 144-158] is used to predict the decay of the waveforms.

  11. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
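
    The centroid and membership updates mentioned above are the standard fuzzy C-means equations; a compact sketch of that inner loop (not the AFLC network itself, which adds the competitive and distance-metric stages around it):

```python
import numpy as np

def fcm_update(X, U, m=2.0):
    """One fuzzy C-means iteration: recompute centroids from the
    membership matrix U (clusters x points), then update memberships."""
    W = U ** m
    centroids = (W @ X) / W.sum(axis=1, keepdims=True)
    d = np.linalg.norm(X[None, :, :] - centroids[:, None, :], axis=2) + 1e-9
    p = 2.0 / (m - 1.0)
    U_new = d ** (-p) / np.sum(d ** (-p), axis=0, keepdims=True)
    return centroids, U_new

# Two well-separated synthetic clusters in the plane.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
U = rng.random((2, 100)); U /= U.sum(axis=0)
for _ in range(100):
    C, U = fcm_update(X, U)
print(np.sort(C[:, 0]))   # centroids settle near x = 0 and x = 5
```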

  12. 3-Nitrophenol-1,3,5-triazine-2,4,6-triamine (2/1).

    PubMed

    Sangeetha, V; Kanagathara, N; Chakkaravarthi, G; Marchewka, M K; Anbalagan, G

    2013-06-01

    The asymmetric unit of the title compound, C3H6N6·2C6H5NO3, contains one melamine and two 3-nitrophenol molecules. The mean planes of the 3-nitrophenol molecules are almost orthogonal to the plane of melamine, making dihedral angles of 82.77 (4) and 88.36 (5)°. In the crystal, molecules are linked via O-H⋯N, N-H⋯N and N-H⋯O hydrogen bonds, forming a three-dimensional network. The crystal also features weak C-H⋯π and π-π interactions [centroid-centroid distance = 3.9823 (9) Å].

  13. Wedge-and-strip anodes for centroid-finding position-sensitive photon and particle detectors

    NASA Technical Reports Server (NTRS)

    Martin, C.; Jelinsky, P.; Lampton, M.; Malina, R. F.

    1981-01-01

    The paper examines geometries employing position-dependent charge partitioning to obtain a two-dimensional position signal from each detected photon or particle. These geometries require only three or four anode electrodes and signal paths; the resulting images have little distortion, and resolution is not limited by thermal noise. An analysis of the geometrical image nonlinearity between event centroid location and the charge partition ratios is presented. In addition, the fabrication and testing of two wedge-and-strip anode systems are discussed. Images obtained with EUV radiation and microchannel plates verify the predicted performance, with further resolution improvements achieved by adopting low-noise signal circuitry. Also discussed are the designs of practical X-ray, EUV, and charged-particle imaging systems.
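
    For the three-electrode (wedge, strip, zigzag) variant, the event position follows directly from the charge-partition ratios; a minimal sketch using the standard partition formulas, with made-up charge values:

```python
def wedge_strip_position(qa, qb, qc):
    """Event position from the charges collected on the wedge (qa),
    strip (qb) and zigzag (qc) electrodes, using the standard
    partition formulas x = 2*qa/total, y = 2*qb/total."""
    total = qa + qb + qc
    return 2.0 * qa / total, 2.0 * qb / total

# Equal wedge and strip charges put the event at the centre of the field.
print(wedge_strip_position(25.0, 25.0, 50.0))  # → (0.5, 0.5)
```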

  14. Keeping trees as assets

    Treesearch

    Kevin T. Smith

    2009-01-01

    Landscape trees have real value and contribute to making livable communities. Making the most of that value requires providing trees with the proper care and attention. As potentially large and long-lived organisms, trees benefit from commitment to regular care that respects the natural tree system. This system captures, transforms, and uses energy to survive, grow,...

  15. Scaffolding Teachers to Foster Inclusive Pedagogy and Presence through Collaborative Action Research

    ERIC Educational Resources Information Center

    Juma, Said; Lehtomäki, Elina; Naukkarinen, Aimo

    2017-01-01

    Teachers can be influential change agents in transforming their schools if they regularly reflect on their pedagogical practices, looking for improvements that will help all learners reach their full potential. However, in many sub-Saharan African countries, teachers seldom get an opportunity to collaboratively reflect on their practices. Action…

  16. Iterative image reconstruction that includes a total variation regularization for radial MRI.

    PubMed

    Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko

    2015-07-01

    This paper presents an iterative image reconstruction method for radial encodings in MRI based on a total variation (TV) regularization. The algebraic reconstruction method combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV, and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter by experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the evaluation of the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. In accordance with the results of SNR measurement, visual impression suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
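
    A toy 1-D sketch of the ART_TV idea under simplifying assumptions (consistent noise-free projections, a smoothed TV gradient, fixed relaxation and regularization weights; the paper's 2-D radial-MRI implementation and parameter choices are not reproduced): alternate a Kaczmarz (ART) sweep with a gradient step on the TV term.

```python
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of the smoothed 1-D total variation sum_i |x[i+1]-x[i]|."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)      # smooth surrogate for sign(d)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def art_tv_step(x, A, b, lam):
    """One Kaczmarz (ART) sweep through the rows of A, followed by a
    gradient step on the TV regularization term with weight lam."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x - lam * tv_gradient(x)

# Piecewise-constant phantom, underdetermined noise-free projections.
rng = np.random.default_rng(3)
n = 40
x_true = np.zeros(n); x_true[10:25] = 1.0
A = rng.normal(size=(30, n)) / np.sqrt(n)
b = A @ x_true

x = np.zeros(n)
for _ in range(200):
    x = art_tv_step(x, A, b, lam=0.02)

x_mn = np.linalg.lstsq(A, b, rcond=None)[0]   # plain minimum-norm solution
print(np.linalg.norm(x - x_true), np.linalg.norm(x_mn - x_true))
```

    The TV step favors the piecewise-constant structure that an unregularized minimum-norm solution of the underdetermined system misses.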

  17. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.

  18. Using a Smartphone Camera for Nanosatellite Attitude Determination

    NASA Astrophysics Data System (ADS)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
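
    The threshold-and-centroid step described above can be sketched as follows; a synthetic frame stands in for a PhoneSat camera image, and the project's actual processing is not shown:

```python
import numpy as np

def bright_object_centroid(img, thresh):
    """Flux-weighted centroid (row, col) of the above-threshold pixels,
    e.g. the Moon's disc against dark sky."""
    mask = img > thresh
    rows, cols = np.nonzero(mask)
    w = img[mask]
    return (rows * w).sum() / w.sum(), (cols * w).sum() / w.sum()

# Synthetic frame: dark sky plus a uniform bright disc centred at (12, 30).
img = np.zeros((24, 48))
rr, cc = np.indices(img.shape)
img[(rr - 12) ** 2 + (cc - 30) ** 2 <= 25] = 200.0
c = bright_object_centroid(img, thresh=100.0)
print(c)  # row, col = (12.0, 30.0)
```

    Converting the pixel centroid to a body-frame Moon vector would then use the camera's calibrated focal length and distortion model, which are outside this sketch.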

  19. A Parkinson's disease measurement system using laser lines and a CMOS image sensor.

    PubMed

    Chang, Rong-Seng; Chiu, Jen-Hwey; Chen, Fang-Pey; Chen, Jyh-Cheng; Yang, Jen-Lin

    2011-01-01

    This paper presents a non-invasive, non-contact system for measuring the arterial dorsum manus vibration waveforms of Parkinson's disease patients. The laser line method is applied to detect the dorsum manus vibration in rest and postural situations. The proposed measurement system consists mainly of a laser diode and a low-cost complementary metal-oxide semiconductor (CMOS) image sensor. Laser line and centroid methods are combined with the Fast Fourier Transform (FFT) in this study. The shape, frequency, and relative frequency of the dorsum manus vibration waveforms can be detected rapidly using our Parkinson's disease measurement system. A laser line near the wrist joint is used as the testing line. The experimental results show an obvious increase in the amplitude and frequency of dorsum manus vibration in the measured region in patients suffering from Parkinson's disease, indicating the obvious effects of the disease. In both postural and rest state measurements, the vibration frequency increases as the duration of the disease increases. The measurement system is well suited for evaluating and pre-diagnosing early stage Parkinson's disease.
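    A minimal sketch of the FFT step, assuming the laser-line centroid tracking has already produced a sampled vibration waveform at rate `fs` (illustrative only, not the authors' code):

```python
import numpy as np

def dominant_frequency(signal, fs):
    # Dominant vibration frequency from the FFT magnitude spectrum,
    # with the mean removed so the DC bin does not win.
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)]
```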

  20. Regularized Transformation-Optics Cloaking for the Helmholtz Equation: From Partial Cloak to Full Cloak

    NASA Astrophysics Data System (ADS)

    Li, Jingzhi; Liu, Hongyu; Rondi, Luca; Uhlmann, Gunther

    2015-04-01

    We develop a very general theory of regularized approximate invisibility cloaking for wave scattering governed by the Helmholtz equation, in any space dimension, via the approach of transformation optics. There are four major ingredients in our proposed theory: (1) The non-singular cloaking medium is obtained by the push-forward construction through a transformation that blows up a subset K_ε in the virtual space, where ε > 0 is an asymptotic regularization parameter. K_ε degenerates to K_0 as ε → +0, and in our theory K_0 could be any convex compact set, any set whose boundary consists of Lipschitz hypersurfaces, or a finite combination of such sets. (2) A general lossy layer, with material parameters satisfying certain compatibility integral conditions, is placed right between the cloaked and cloaking regions. (3) The contents being cloaked can also be extremely general, possibly including, at the same time, generic media; sound-soft, sound-hard and impedance-type obstacles; and sources or sinks. (4) In order to achieve a cloaking device of compact size, particularly when K_0 is not "uniformly small", an assembly-by-components (ABC) geometry is developed for both the virtual and physical spaces, and the blow-up construction is based on concatenating different components. Within the proposed framework, we show that the scattered wave field corresponding to a cloaking problem converges to u_0 as ε → +0, with u_0 the scattered wave field corresponding to a sound-hard K_0. The convergence result theoretically justifies the approximate full and partial invisibility cloaks, depending on the geometry of K_0. On the other hand, the convergence results are established in a much more general setting than is needed for invisibility cloaking, so they are of significant mathematical interest in their own right. As for applications, we construct three types of full and partial cloaks. Some numerical experiments are also conducted to illustrate our theoretical results.

  1. Transition sum rules in the shell model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yi; Johnson, Calvin W.

    Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid, or average energy, of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, in the case of the EWSR a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We then apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy; we also compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd-shell.
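    The centroid described here is simply the ratio EWSR/NEWSR. A tiny numerical sketch (illustrative Python with made-up transition energies and strengths, not the authors' shell-model code):

```python
import numpy as np

def strength_centroid(e_initial, e_final, strengths):
    # NEWSR = total strength; EWSR = energy-weighted total strength.
    # Their ratio is the centroid (average transition energy).
    b = np.asarray(strengths, float)
    de = np.asarray(e_final, float) - e_initial
    newsr = b.sum()
    ewsr = (de * b).sum()
    return ewsr / newsr
```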

  2. Hyperuniformity Length in Experimental Foam and Simulated Point Patterns

    NASA Astrophysics Data System (ADS)

    Chieco, Anthony; Roth, Adam; Dreyfus, Remi; Torquato, Salvatore; Durian, Douglas

    2015-03-01

    Systems without long-wavelength number density fluctuations are called hyperuniform (HU). The degree to which a point pattern is HU may be tested in terms of the variance in the number of points inside randomly placed boxes of side length L. If the pattern is HU, the variance is due solely to fluctuations near the boundary rather than throughout the entire volume of the box. To make this concrete we introduce a hyperuniformity length h, equal to the width of the boundary region where number fluctuations occur; thus h helps characterize the disorder. We show how to deduce h from the number variance, and we do so for Poisson and Einstein patterns plus those made by the vertices and bubble centroids in 2d foams. A Poisson pattern is one where points are totally random; these are not HU, and h equals L/2. We coin "Einstein patterns" for those where points in a lattice are independently displaced from their sites by a normally distributed amount; these are HU, and h equals the RMS displacement from the lattice sites. Bubble centroids and vertices are both HU. For these, h is less than L/2 and increases more slowly than linearly in L. The centroids are more HU than the vertices, in that their h increases more slowly.
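    One simple way to read h off the number variance, assuming all fluctuations come from a boundary shell of width h so that var = ρ[L² − (L − 2h)²] for a 2d box (this closed form is an assumption made for illustration, not necessarily the authors' exact estimator):

```python
import numpy as np

def hyperuniformity_length(var, rho, L):
    # Solve var = rho * (L**2 - (L - 2*h)**2) for the boundary width h
    # in which number fluctuations are assumed to occur (2d box).
    inner = max(L**2 - var / rho, 0.0)
    return 0.5 * (L - np.sqrt(inner))
```

For a Poisson pattern the variance equals the mean count ρL², which recovers h = L/2; a vanishing variance gives h = 0.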

  3. Hydraulic characteristics of low-impact development practices in northeastern Ohio, 2008–2010

    USGS Publications Warehouse

    Darner, Robert A.; Dumouchelle, Denise H.

    2011-01-01

    Low-impact development (LID) is an approach to managing stormwater as near to its source as possible; this is accomplished by minimizing impervious surfaces and promoting more natural infiltration and evapotranspiration than is typically associated with developed areas. Two newly constructed LID sites in northeastern Ohio were studied to document their hydraulic characteristics. A roadside best-management practice (BMP) was constructed by replacing about 1,400 linear feet of existing ditches with a bioswale/rain garden BMP consisting of a grassed swale interspersed with rain-garden/overflow structures. The site was monitored in 2008, 2009, and 2010. Although some overflows occurred, numerous precipitation events exceeding the 0.75-inch design storm did not result in overflows. A second study site consists of an 8,200-square-foot parking lot made of pervious pavers and a rain garden that receives runoff from the roof of a nearby commercial building. A comparison of data from 2009 and 2010 indicates that the median runoff volume in 2010 decreased relative to 2009. The centroid lag times (the time difference between the centroid of precipitation and the centroid of flow) decreased in 2010, most likely due to more intense, shorter-duration precipitation events and maturation of the rain garden. Additional data could help quantify the relation between meteorological variables and BMP efficiency.
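    The centroid lag time is a first-moment calculation; a minimal sketch with hypothetical time series (illustrative only):

```python
import numpy as np

def centroid_time(times, values):
    # Time centroid (first moment) of a hyetograph or hydrograph.
    t = np.asarray(times, float)
    v = np.asarray(values, float)
    return (t * v).sum() / v.sum()

def centroid_lag(times_p, precip, times_q, flow):
    # Lag = centroid of flow minus centroid of precipitation.
    return centroid_time(times_q, flow) - centroid_time(times_p, precip)
```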

  4. Candidate soil indicators for monitoring the progress of constructed wetlands toward a natural state: a statistical approach

    USGS Publications Warehouse

    Stapanian, Martin A.; Adams, Jean V.; Fennessy, M. Siobhan; Mack, John; Micacchion, Mick

    2013-01-01

    A persistent question among ecologists and environmental managers is whether constructed wetlands are structurally or functionally equivalent to naturally occurring wetlands. We examined 19 variables collected from 10 constructed and nine natural emergent wetlands in Ohio, USA. Our primary objective was to identify candidate indicators of wetland class (natural or constructed), based on measurements of soil properties and an index of vegetation integrity, that can be used to track the progress of constructed wetlands toward a natural state. The method of nearest shrunken centroids was used to find a subset of variables that would serve as the best classifiers of wetland class, and error rate was calculated using a five-fold cross-validation procedure. The shrunken differences of percent total organic carbon (% TOC) and percent dry weight of the soil exhibited the greatest distances from the overall centroid. Classification based on these two variables yielded a misclassification rate of 11% based on cross-validation. Our results indicate that % TOC and percent dry weight can be used as candidate indicators of the status of emergent, constructed wetlands in Ohio and for assessing the performance of mitigation. The method of nearest shrunken centroids has excellent potential for further applications in ecology.
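    The nearest-shrunken-centroids idea, soft-thresholding each class centroid toward the overall centroid and classifying to the nearest surviving centroid, can be sketched as below. This simplified version omits the per-feature standardization of the full method of Tibshirani et al., so it is a hedged illustration, not the exact procedure used in the study.

```python
import numpy as np

def shrunken_centroids(X, y, delta):
    # Shrink each class centroid toward the overall centroid by
    # soft-thresholding the per-feature difference with threshold delta.
    overall = X.mean(axis=0)
    cents = {}
    for c in np.unique(y):
        d = X[y == c].mean(axis=0) - overall
        cents[c] = overall + np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)
    return cents

def classify(x, cents):
    # Assign to the class with the nearest shrunken centroid.
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
```

Features whose class difference shrinks all the way to zero drop out of the classifier, which is how the method selects a small set of indicator variables (here, % TOC and percent dry weight).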

  5. Propagation of an Airy beam through the atmosphere.

    PubMed

    Ji, Xiaoling; Eyyuboğlu, Halil T; Ji, Guangming; Jia, Xinhong

    2013-01-28

    In this paper, the effect of thermal blooming on an Airy beam propagating through the atmosphere is examined; the effect of atmospheric turbulence is not considered. The changes in the intensity distribution, the centroid position, and the mean-squared beam width of an Airy beam propagating through the atmosphere are studied using a four-dimensional (4D) computer code for the time-dependent propagation of Airy beams through the atmosphere. It is shown that an Airy beam cannot retain its shape and structure when propagating through the atmosphere due to thermal blooming, except for short propagation distances, short times, or low beam powers. Thermal blooming results in a central dip in the center lobe, and causes the center lobe to spread and weaken. In contrast with the center lobe, the side lobes are less affected by thermal blooming, such that the intensity maximum of a side lobe may become larger than that of the center lobe. However, a cross wind can reduce the effect of thermal blooming. When there is a cross-wind velocity vx in the x direction, the dependence of the centroid position in the x direction on vx is not monotonic and exhibits a minimum, while the centroid position in the y direction is nearly independent of vx.

  6. Are judgments a form of data clustering? Reexamining contrast effects with the k-means algorithm.

    PubMed

    Boillaud, Eric; Molina, Guylaine

    2015-04-01

    A number of theories have been proposed to explain in precise mathematical terms how statistical parameters and sequential properties of stimulus distributions affect category ratings. Various contextual factors such as the mean, the midrange, and the median of the stimuli; the stimulus range; the percentile rank of each stimulus; and the order of appearance have been assumed to influence judgmental contrast. A data clustering reinterpretation of judgmental relativity is offered, wherein the influence of the initial choice of centroids on judgmental contrast involves two combined frequency and consistency tendencies. Accounts of the k-means algorithm are provided, showing good agreement with effects observed on multiple distribution shapes and with a variety of interaction effects relating to the number of stimuli, the number of response categories, and the method of skewing. Experiment 1 demonstrates that centroid initialization accounts for contrast effects obtained with stretched distributions. Experiment 2 demonstrates that the iterative convergence inherent to the k-means algorithm accounts for the contrast reduction observed across repeated blocks of trials. The concept of within-cluster variance minimization is discussed, as is the applicability of a backward k-means calculation method for inferring, from empirical data, the values of the centroids that would serve as a representation of the judgmental context. (c) 2015 APA, all rights reserved.
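    A one-dimensional k-means pass of the kind invoked here, where the initial centroids encode the judgmental context and cluster labels play the role of response categories, might look like the following sketch (illustrative, not the authors' simulation code):

```python
import numpy as np

def kmeans_ratings(stimuli, init_centroids, iters=20):
    # Assign each stimulus to its nearest centroid (a response category),
    # then move each centroid to its cluster mean; the choice of
    # init_centroids models the judgmental context.
    s = np.asarray(stimuli, float)
    c = np.array(init_centroids, dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(s[:, None] - c[None, :]), axis=1)
        for k in range(len(c)):
            if np.any(labels == k):
                c[k] = s[labels == k].mean()
    return labels, c
```

Repeating the assignment/update loop models the contrast reduction across blocks of trials: the centroids drift from their initial (context-driven) positions toward the within-cluster variance minimum.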

  7. A Standard Law for the Equatorward Drift of the Sunspot Zones

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.

    2012-01-01

    The latitudinal location of the sunspot zones in each hemisphere is determined by calculating the centroid position of sunspot areas for each solar rotation from May 1874 to June 2012. When these centroid positions are plotted and analyzed as functions of time from each sunspot cycle maximum, there appear to be systematic differences in the positions and equatorward drift rates as a function of sunspot cycle amplitude. If, instead, these centroid positions are plotted and analyzed as functions of time from each sunspot cycle minimum, then most of the differences in the positions and equatorward drift rates disappear. The differences that remain disappear entirely if curve fitting is used to determine the starting times (which vary by as much as 8 months from the times of minima). The sunspot zone latitudes and equatorward drift measured relative to this starting time follow a standard path for all cycles, with no dependence upon cycle strength or hemispheric dominance. Although Cycle 23 was peculiar in its length and the strength of the polar fields it produced, it too shows no significant variation from this standard. This standard law, and the lack of variation with sunspot cycle characteristics, is consistent with Dynamo Wave mechanisms but not with current Flux Transport Dynamo models for the equatorward drift of the sunspot zones.
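    The centroid computation itself is a simple area weighting over the spots observed in one rotation; an illustrative sketch with hypothetical inputs:

```python
import numpy as np

def latitude_centroid(latitudes, areas):
    # Area-weighted centroid latitude of the sunspots observed
    # during a single solar rotation (one hemisphere).
    lat = np.asarray(latitudes, float)
    a = np.asarray(areas, float)
    return (lat * a).sum() / a.sum()
```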

  8. Transition sum rules in the shell model

    DOE PAGES

    Lu, Yi; Johnson, Calvin W.

    2018-03-29

    An important characterization of electromagnetic and weak transitions in atomic nuclei are sum rules. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy- weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from an nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, in the case of the EWSR a double commutator. While most prior applications of the double-commutator have been to special cases, we derive general formulas for matrix elements of bothmore » operators in a shell model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We then apply this simple tool to a number of nuclides, and demonstrate the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground state electric quadrupole (E2) centroids in the $sd$-shell.« less

  9. Precise locating approach of the beacon based on gray gradient segmentation interpolation in satellite optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan

    2017-03-01

    Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, in terms of its characteristics, the beacon is divided into several parts according to the gray gradients. Afterward, different numbers of interpolation points and different interpolation methods are applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy. This method is called the "gradient segmentation interpolation" (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by simulation. Finally, an experiment is set up to verify the GSI and SSW algorithms. The results indicate that the GSI and SSW algorithms improve locating accuracy over that of a traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.

  10. The Algorithm for MODIS Wavelength On-Orbit Calibration Using the SRCA

    NASA Technical Reports Server (NTRS)

    Montgomery, Harry; Che, Nianzeng; Parker, Kirsten; Bowser, Jeff

    1998-01-01

    The Spectro-Radiometric Calibration Assembly (SRCA) provides on-orbit spectral calibration of the MODerate resolution Imaging Spectroradiometer (MODIS) reflected solar bands, and this paper describes how it is accomplished. The SRCA has two adjacent exit slits: 1) a main slit and 2) a calibration slit. The output from the main slit is measured by a reference silicon photo-diode (SIPD) and then passes through the MODIS. The output from the calibration slit passes through a piece of didymium transmission glass and is then measured by a calibration SIPD. The centroids of the sharp spectral peaks of the didymium glass are utilized as wavelength standards. After normalization using the reference SIPD signal to eliminate the effects of the illuminating source spectrum, the calibration SIPD establishes the relationship between the peaks of the didymium spectra and the grating angle; this is accomplished through the grating equation. In the grating equation, the monochromator parameters Beta (the half angle between the incident and diffracted beams) and Theta_off (the offset angle of the grating motor) are determined by matching, in a least-squares sense, the known centroid wavelengths of the didymium peaks and the centroid grating angles calculated from the calibration SIPD signals for the peaks. A displacement between the calibration SIPD and the reference SIPD complicates the signal processing.

  11. Tracking of Maneuvering Complex Extended Object with Coupled Motion Kinematics and Extension Dynamics Using Range Extent Measurements

    PubMed Central

    Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin

    2017-01-01

    The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
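    The Minkowski sum underlying the composite extension model is, for point sets, just the set of all pairwise vector sums; for convex sub-objects the convex hull of this set gives the combined shape. A minimal sketch (illustrative, not the authors' tracker):

```python
import numpy as np

def minkowski_sum(P, Q):
    # Minkowski sum of two planar point sets: every vector in P added
    # to every vector in Q. Composing simple sub-object extensions this
    # way builds the extension of a complex-shaped object.
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    return (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
```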

  12. Use of the wavelet transform to investigate differences in brain PET images between patient groups

    NASA Astrophysics Data System (ADS)

    Ruttimann, Urs E.; Unser, Michael A.; Rio, Daniel E.; Rawlings, Robert R.

    1993-06-01

    The suitability of the wavelet transform was studied for the analysis of glucose utilization differences between subject groups as displayed in PET images. To strengthen statistical inference, it was of particular interest to investigate the tradeoff between signal localization and image decomposition into uncorrelated components. This tradeoff is shown to be controlled by wavelet regularity, with the optimal compromise attained by third-order orthogonal spline wavelets. Testing of the ensuing wavelet coefficients identified only about 1.5% as statistically different (p < .05) from noise; these then served to resynthesize the difference images by the inverse wavelet transform. The resulting images displayed relatively uniform, noise-free regions of significant differences with, owing to the good localization maintained by the wavelets, very few reconstruction artifacts.
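    The analyze-test-resynthesize pipeline can be sketched with a one-level Haar transform standing in for the third-order orthogonal spline wavelets actually used, and a plain magnitude threshold standing in for the statistical test; both substitutions and all function names are assumptions made for this illustration.

```python
import numpy as np

def haar(x):
    # One-level 1-D orthonormal Haar analysis along the last axis.
    a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2.0)
    d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d], axis=-1)

def ihaar(c):
    # Inverse of haar(): interleave the reconstructed samples.
    n = c.shape[-1] // 2
    a, d = c[..., :n], c[..., n:]
    x = np.empty(c.shape)
    x[..., 0::2] = (a + d) / np.sqrt(2.0)
    x[..., 1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise_difference(img, thresh):
    # Transform rows then columns, zero the "non-significant"
    # coefficients, and resynthesize by the inverse transform.
    c = haar(haar(img).T).T
    c[np.abs(c) < thresh] = 0.0
    return ihaar(ihaar(c.T).T)
```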

  13. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm, or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
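    The Euclidean recursion on the key equation can be sketched over a small prime field. A real RS decoder works over GF(2^m) and would start from the Forney syndromes as described above, so this is only an illustrative toy with made-up helper names.

```python
def gf_inv(a, p):
    # Multiplicative inverse in the prime field GF(p), via Fermat.
    return pow(a, p - 2, p)

def poly_deg(a):
    d = len(a) - 1
    while d > 0 and a[d] == 0:
        d -= 1
    return d

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
            for i in range(n)]

def poly_divmod(a, b, p):
    # Long division of coefficient lists (lowest degree first).
    a = a[:]
    db = poly_deg(b)
    lead_inv = gf_inv(b[db], p)
    q = [0] * max(poly_deg(a) - db + 1, 1)
    while poly_deg(a) >= db and any(a):
        da = poly_deg(a)
        c = a[da] * lead_inv % p
        q[da - db] = c
        for i in range(db + 1):
            a[da - db + i] = (a[da - db + i] - c * b[i]) % p
    return q, a

def euclid_key_equation(syndromes, t, p):
    # Sugiyama-style Euclidean recursion on (x^(2t), S(x)): stop when
    # deg(remainder) < t; the Bezout coefficient of S(x) is the errata
    # locator and the remainder is the errata evaluator.
    r_prev, r = [0] * (2 * t) + [1], syndromes[:]
    u_prev, u = [0], [1]
    while poly_deg(r) >= t and any(r):
        quot, rem = poly_divmod(r_prev, r, p)
        r_prev, r = r, rem
        u_prev, u = u, poly_sub(u_prev, poly_mul(quot, u, p), p)
    return u, r  # (errata locator, errata evaluator)
```

For a single error of value 2 at location 3 over GF(7), the syndromes are S = [2, 6], and the recursion returns a locator with a root at the inverse location 3^(-1) = 5 (mod 7).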

  14. Contraction of high eccentricity satellite orbits using uniformly regular KS canonical elements with oblate diurnally varying atmosphere.

    NASA Astrophysics Data System (ADS)

    Raj, Xavier James

    2016-07-01

    Accurate orbit prediction of an artificial satellite under the influence of air drag is one of the most difficult problems in orbital dynamics, as the orbital decay of such satellites is controlled mainly by atmospheric drag effects. The effects of the atmosphere are difficult to determine, since the atmospheric density undergoes large fluctuations. The classical Newtonian equations of motion, which are nonlinear, are not suitable for long-term integration. Many transformations have emerged in the literature to stabilize the equations of motion, either to reduce the accumulation of local numerical errors, to allow the use of large integration step sizes, or both, in the transformed space. One such transformation is the KS transformation of Kustaanheimo and Stiefel, which regularizes the nonlinear Kepler equations of motion and reduces them to the linear differential equations of a harmonic oscillator of constant frequency. The method of KS total energy element equations has been found to be very powerful for obtaining numerical as well as analytical solutions with respect to any type of perturbing force, as the equations are less sensitive to round-off and truncation errors. The uniformly regular KS canonical equations are a particular canonical form of the KS differential equations, in which all ten KS canonical elements αi and βi are constant for unperturbed motion. These equations permit a uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion. Using these equations, an analytical solution was developed for short-term orbit predictions with respect to the Earth's zonal harmonic terms J2, J3, J4. Further, these equations were utilized to include the canonical forces, and analytical theories with air drag were developed for low-eccentricity orbits (e < 0.2) with different atmospheric models. An analytical theory for high-eccentricity (e > 0.2) orbits, assuming the atmosphere to be oblate only, was also developed using the uniformly regular KS canonical elements. In this paper a new non-singular analytical theory is developed for the motion of high-eccentricity satellite orbits with an oblate, diurnally varying atmosphere in terms of the uniformly regular KS canonical elements. The analytical solutions are generated up to fourth-order terms using a new independent variable and c (a small parameter dependent on the flattening of the atmosphere). Due to symmetry, only two of the nine equations need to be solved analytically to compute the state vector and the change in energy at the end of each revolution. The theory is developed on the assumption that density is constant on the surfaces of spheroids of fixed ellipticity ɛ (equal to the Earth's ellipticity, 0.00335) whose axes coincide with the Earth's axis. Numerical experimentation with the analytical solution for a wide range of perigee heights, eccentricities, and orbital inclinations has been carried out for up to 100 revolutions. Comparisons with numerically integrated values show that they match quite well. The effectiveness of the present analytical solutions is demonstrated by comparing the results with other analytical solutions in the literature.

  15. Creating virtual electrodes with 2D current steering

    NASA Astrophysics Data System (ADS)

    Spencer, Thomas C.; Fallon, James B.; Shivdasani, Mohit N.

    2018-06-01

    Objective. Current steering techniques have shown promise in retinal prostheses as a way to increase the number of distinct percepts elicitable without increasing the number of implanted electrodes. Previously, it has been shown that ‘virtual’ electrodes can be created between simultaneously stimulated electrode pairs, producing unique cortical response patterns. This study investigated whether virtual electrodes could be created using 2D current steering, and whether these virtual electrodes can produce cortical responses with predictable spatial characteristics. Approach. Normally-sighted eyes of seven adult anaesthetised cats were implanted with a 42-channel electrode array in the suprachoroidal space, and multi-unit neural activity was recorded from the visual cortex. Stimuli were delivered to individual physical electrodes, or to electrodes grouped into triangular, rectangular, and hexagonal arrangements. Varying proportions of charge were applied to each electrode in a group to ‘steer’ current and create virtual electrodes. The centroids of cortical responses to stimulation of virtual electrodes were compared to those evoked by stimulation of single physical electrodes. Main results. Responses to stimulation of groups of up to six electrodes with equal ratios of charge on each electrode resulted in cortical activation patterns that were similar to those elicited by the central physical electrode (centroids: RM ANOVA on ranks, p > 0.05; neural spread: one-way ANOVA on ranks, p > 0.05). We were also able to steer the centroid of activation towards any of the electrodes of the group by applying a greater charge to that electrode, but the movement in the centroid was not found to be significant. Significance.
The results suggest that current steering is possible in two dimensions between up to at least six electrodes, indicating it may be possible to increase the number of percepts in patients without increasing the number of physical electrodes. Being able to reproduce spatial characteristics of responses to individual physical electrodes suggests that this technique could also be used to compensate for faulty electrodes.
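    The steering model implicit here, where the activation centroid moves toward the electrode carrying the larger share of charge, can be sketched as a charge-weighted average of electrode positions. This is a simplifying linear assumption for illustration, not a model of the measured cortical responses.

```python
import numpy as np

def steered_centroid(positions, charges):
    # Expected activation centroid for simultaneous stimulation:
    # electrode positions weighted by the fraction of total charge
    # each electrode delivers (linear-summation assumption).
    p = np.asarray(positions, float)
    q = np.asarray(charges, float)
    return (p * q[:, None]).sum(axis=0) / q.sum()
```

Equal charge ratios place the virtual electrode at the geometric center of the group; biasing the charge toward one electrode pulls the centroid toward it.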

  16. Relationship between ion pair geometries and electrostatic strengths in proteins.

    PubMed Central

    Kumar, Sandeep; Nussinov, Ruth

    2002-01-01

    The electrostatic free energy contribution of an ion pair in a protein depends on two factors: the geometrical orientation of the side-chain charged groups with respect to each other and the structural context of the ion pair in the protein. Conformers in NMR ensembles enable studies of the relationship between geometry and electrostatic strengths of ion pairs, because the protein structural contexts are highly similar across different conformers. We have studied this relationship using a dataset of 22 unique ion pairs in 14 NMR conformer ensembles for 11 nonhomologous proteins. In different NMR conformers, the ion pairs are classified as salt bridges, nitrogen-oxygen (N-O) bridges, and longer-range ion pairs on the basis of geometrical criteria. In salt bridges, the centroids of the side-chain charged groups and at least one pair of side-chain nitrogen and oxygen atoms of the ion-pairing residues are within a 4 Å distance. In N-O bridges, at least one pair of the side-chain nitrogen and oxygen atoms of the ion-pairing residues is within 4 Å, but the distance between the side-chain charged group centroids is greater than 4 Å. In longer-range ion pairs, the side-chain charged group centroids as well as the side-chain nitrogen and oxygen atoms are more than 4 Å apart. Continuum electrostatic calculations indicate that most of the ion pairs have stabilizing electrostatic contributions when their side-chain charged group centroids are within a 5 Å distance. Hence, most (approximately 92%) of the salt bridges and a majority (68%) of the N-O bridges are stabilizing. Most (approximately 89%) of the destabilizing ion pairs are longer-range ion pairs. In the NMR conformer ensembles, the electrostatic interaction between side-chain charged groups of the ion-pairing residues is strongest for salt bridges, considerably weaker for N-O bridges, and weakest for longer-range ion pairs.
These results suggest empirical rules for stabilizing electrostatic interactions in proteins. PMID:12202384
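    The geometric classification criteria can be written down directly, with distances in Å and the 4 Å thresholds taken from the text (the function name is an illustrative assumption):

```python
def classify_ion_pair(centroid_dist, min_no_dist):
    # centroid_dist: distance between side-chain charged-group centroids.
    # min_no_dist: minimum distance between side-chain N and O atoms of
    # the ion-pairing residues. Both in angstroms.
    if min_no_dist <= 4.0 and centroid_dist <= 4.0:
        return "salt bridge"
    if min_no_dist <= 4.0:
        return "N-O bridge"
    return "longer-range ion pair"
```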

  17. Source Parameter Inversion for Recent Great Earthquakes from a Decade-long Observation of Global Gravity Fields

    NASA Technical Reports Server (NTRS)

    Han, Shin-Chan; Riva, Riccardo; Sauber, Jeanne; Okal, Emile

    2013-01-01

    We quantify gravity changes after great earthquakes present within the 10 year long time series of monthly Gravity Recovery and Climate Experiment (GRACE) gravity fields. Using spherical harmonic normal-mode formulation, the respective source parameters of moment tensor and double-couple were estimated. For the 2004 Sumatra-Andaman earthquake, the gravity data indicate a composite moment of 1.2×10^23 N m with a dip of 10°, in agreement with the estimate obtained at ultralong seismic periods. For the 2010 Maule earthquake, the GRACE solutions range from 2.0 to 2.7×10^22 N m for dips of 12°-24° and centroid depths within the lower crust. For the 2011 Tohoku-Oki earthquake, the estimated scalar moments range from 4.1 to 6.1×10^22 N m, with dips of 9°-19° and centroid depths within the lower crust. For the 2012 Indian Ocean strike-slip earthquakes, the gravity data delineate a composite moment of 1.9×10^22 N m regardless of the centroid depth, comparing favorably with the total moment of the main ruptures and aftershocks. The smallest event we successfully analyzed with GRACE was the 2007 Bengkulu earthquake, with M_0 ≈ 5.0×10^21 N m. We found that the gravity data constrain the focal mechanism with the centroid only within the upper and lower crustal layers for thrust events. Deeper sources (i.e., in the upper mantle) could not reproduce the gravity observation, as the larger rigidity and bulk modulus at mantle depths inhibit the interior from changing its volume, thus reducing the negative gravity component. Focal mechanisms and seismic moments obtained in this study represent the behavior of the sources on temporal and spatial scales exceeding the seismic and geodetic spectrum.

  18. Niche tracking and rapid establishment of distributional equilibrium in the house sparrow show potential responsiveness of species to climate change.

    PubMed

    Monahan, William B; Tingley, Morgan W

    2012-01-01

    The ability of species to respond to novel future climates is determined in part by their physiological capacity to tolerate climate change and the degree to which they have reached and continue to maintain distributional equilibrium with the environment. While broad-scale correlative climatic measurements of a species' niche are often described as estimating the fundamental niche, it is unclear how well these occupied portions actually approximate the fundamental niche per se, versus the fundamental niche that exists in environmental space, and what fitness values bounding the niche are necessary to maintain distributional equilibrium. Here, we investigate these questions by comparing physiological and correlative estimates of the thermal niche in the introduced North American house sparrow (Passer domesticus). Our results indicate that occupied portions of the fundamental niche derived from temperature correlations closely approximate the centroid of the existing fundamental niche calculated on a fitness threshold of 50% population mortality. Using these niche measures, a 75-year time series analysis (1930-2004) further shows that: (i) existing fundamental and occupied niche centroids did not undergo directional change, (ii) interannual changes in the two niche centroids were correlated, (iii) temperatures in North America moved through niche space in a net centripetal fashion, and consequently, (iv) most areas throughout the range of the house sparrow tracked the existing fundamental niche centroid with respect to at least one temperature gradient. Following introduction to a new continent, the house sparrow rapidly tracked its thermal niche and established continent-wide distributional equilibrium with respect to major temperature gradients. 
These dynamics were mediated in large part by the species' broad thermal physiological tolerances, high dispersal potential, competitive advantage in human-dominated landscapes, and climatically induced changes to the realized environmental space. Such insights may be used to conceptualize mechanistic climatic niche models in birds and other taxa.
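    The niche-centroid bookkeeping underlying analyses like this is simple: the occupied niche centroid along one temperature gradient is the mean environmental value over occupied sites. A minimal sketch with invented occurrence temperatures (sample sizes and means are hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical occurrence data: annual mean temperature (deg C) at occupied
# sites in two survey years. Real analyses use gridded climate data.
samples = {1930: rng.normal(11.0, 4.0, 500), 2004: rng.normal(11.3, 4.0, 500)}

# Occupied niche centroid along the gradient = mean over occupied sites;
# interannual centroid shift is the difference between years.
centroids = {yr: float(temps.mean()) for yr, temps in samples.items()}
shift = centroids[2004] - centroids[1930]
print(centroids, shift)
```

    Repeating this per year yields the centroid time series whose lack of directional change (point i of the abstract) can then be tested.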

  19. Unified law of evolution of experimental gouge-filled fault for fast and slow slip events at slider frictional experiments

    NASA Astrophysics Data System (ADS)

    Ostapchuk, Alexey; Saltykov, Nikolay

    2017-04-01

    Excess tectonic stress accumulated in regions of rock discontinuity is released during slip along preexisting faults. The spectrum of slip modes includes not only creep and regular earthquakes but also transitional regimes: slow-slip events, low-frequency and very-low-frequency earthquakes. However, there is still no agreement in the geophysics community on whether such fast and slow events share a common nature [Peng, Gomberg, 2010] or represent different physical phenomena [Ide et al., 2007]. Models of the nucleation and evolution of fault slip events can be developed through laboratory experiments that investigate the regularities of shear deformation of a gouge-filled fault. In this work we studied the deformation of an experimental fault in slider frictional experiments, aiming to develop a unified evolution law of the fault and to reveal the parameters responsible for which deformation mode is realized. The experiments followed the classic slider-model design, in which a block under normal and shear stress moves along an interface. The volume between two rough surfaces was filled with a thin layer of granular matter. Shear force was applied through a spring deformed at a constant rate, so that elastic energy accumulated in the spring and the regularities of its release were determined by the frictional behaviour of the experimental fault. A full spectrum of slip modes was simulated in the laboratory. Slight changes in gouge characteristics (granule shape, clay content), interstitial fluid viscosity and normal stress level produced a gradual transformation of the slip modes from steady sliding and slow slip to regular stick-slip with various amplitudes of 'coseismic' displacement. Using the method of asymptotic analogies, we have shown that the different slip modes can be specified within a single formalism and that their preparation follows a uniform evolution law. We show that the shear stiffness of the experimental fault is the parameter controlling which slip mode is realized. It is worth mentioning that the different series of transformations are characterized by functional dependences of the same general form, differing only in normalization factors. These findings confirm that slow and fast slip events share a common nature. Determining fault stiffness and testing fault gouge thus allow the intensity of seismic events to be estimated. The reported study was funded by RFBR according to research project № 16-05-00694.
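    The stiffness-controlled stick-slip behaviour described above can be caricatured with a textbook event-based spring-block model (all parameters hypothetical, not the authors' apparatus):

```python
import numpy as np

# Event-based spring-block caricature of stick-slip: a spring of stiffness k
# is loaded at constant rate v; the block sticks until the spring force
# reaches the static threshold F_s, then slips instantly until the force
# drops to the dynamic level F_d.
k, v = 1.0e3, 1.0e-3           # spring stiffness (N/m), loading rate (m/s)
F_s, F_d = 5.0, 3.0            # static and dynamic friction forces (N)

events, force, t = [], 0.0, 0.0
for _ in range(20):
    t += (F_s - force) / (k * v)         # stick phase: force grows as k*v*t
    events.append((t, (F_s - F_d) / k))  # slip distance relaxing the spring
    force = F_d

# Regular stick-slip: identical recurrence time and coseismic displacement.
recurrence = np.diff([time for time, _ in events])
print(recurrence[:3], events[0][1])
```

    Raising k (a stiffer fault) shrinks both the slip per event, (F_s − F_d)/k, and the recurrence time, echoing the abstract's claim that shear stiffness selects the slip mode; letting F_s approach F_d recovers quasi-stable sliding.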

  20. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10^-3), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10^-6). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10^-3) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE as low as 3.9 ± 2.7 mm and 3.8 ± 2.3 mm were achieved for the GTV_P and GTV_LN, respectively, using rereferenced registration. Conclusions: Target shape, volume, and configuration changes during radiation therapy limited the accuracy of standard rigid registration for image-guided localization in locally-advanced lung cancer. Significant error reductions were possible using other rigid registration techniques, with LE approaching the lower limit imposed by interfraction target variability throughout treatment.
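    Centroid localization error of the kind reported here is simply the distance between target centroids after registration; a toy sketch on synthetic binary masks (volume shapes, shift and voxel spacing are invented):

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (in voxel coordinates) of a binary target mask."""
    return np.argwhere(mask).mean(axis=0)

# Toy planning and weekly target masks (hypothetical 3-D volumes).
planning = np.zeros((40, 40, 40), bool)
weekly = np.zeros_like(planning)
planning[10:20, 10:20, 10:20] = True
weekly[13:23, 10:20, 10:20] = True      # target shifted 3 voxels along axis 0

voxel_mm = np.array([2.0, 1.0, 1.0])    # assumed voxel spacing in mm
le_mm = np.linalg.norm((centroid(weekly) - centroid(planning)) * voxel_mm)
print(f"centroid localization error: {le_mm:.1f} mm")
```

    The study also reports border distances, which capture shape change that a centroid metric alone misses.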

  1. Effects of coordinate system choice on measured regional myocardial function in short-axis cine electron-beam tomography

    NASA Astrophysics Data System (ADS)

    Reed, Judd E.; Rumberger, John A.; Buithieu, Jean; Behrenbeck, Thomas; Breen, Jerome F.; Sheedy, Patrick F., II

    1995-05-01

    Following myocardial infarction, the size of the infarcted region and the systolic functioning of the noninfarcted region are commonly assessed by various cross-sectional imaging techniques. A series of images representing successive phases of the cardiac cycle can be acquired by several imaging modalities including electron beam computed tomography, magnetic resonance imaging, and echocardiography. For the assessment of patterns of ventricular contraction, images are commonly acquired of ventricular cross-sections normal to the 'long' axis of the heart and parallel to the mitral valve plane. The endocardial and epicardial surfaces of the myocardium are identified. Then the ventricle is divided into sectors and the volumes of blood and myocardium within each sector at multiple phases of the cardiac cycle are measured. Regional function parameters are derived from these measurements. This generally mandates the use of a polar or cylindrical coordinate system. Various algorithms have been used to select the origin of this coordinate system. These include the centroid of the endocardial surface, of the epicardial surface, or of a polygon whose vertices lie midway between the epicardial and endocardial surfaces of the myocardium (centerline method). Another algorithm has been developed in our laboratory. This uses the centroid (or center of mass) of the myocardium exclusive of the ventricular cavity. Each of these choices for the origin of the coordinate system can be derived from the end-diastolic image or from the end-systolic image. Alternatively, new coordinate systems can be selected for each phase of the cardiac cycle. These are referred to as 'floating' coordinate systems. A series of computer models have been developed in our laboratory to study the effects of each of these choices on the regional function parameters of normal ventricles and how these choices affect the quantification of regional abnormalities after myocardial infarction. The most sophisticated of these is an interactive program with a graphical user interface which facilitates the simulation of a wide variety of dynamic ventricular cross sections. Analysis of these simulations has led to a better understanding of how polar coordinate system placement influences the results of quantitative regional ventricular function assessment. It has also created new insight into how the appropriateness of the placement of such a polar coordinate system can be objectively assessed. The conclusions drawn from the analysis of simulated ventricular shapes were validated through the analysis of outlines extracted from cine electron beam computed tomographic images, using another interactive software tool developed specifically for this purpose. With this tool, the effects on regional function parameters of various choices of origin placement can be directly observed. This has reinforced the conclusions drawn from the simulations and has led to the modification of the procedures used in our laboratory. Conclusions: The so-called floating coordinate systems are superior to fixed ones for quantification of regional left ventricular contraction in almost every respect. The use of regional ejection fractions with a coordinate system origin located at the centroid of the endocardial surface can lead to 180 degree errors in identifying the location of a myocardial infarction. This problem is less pronounced with midline and epicardium-based centroids and does not occur when the centroid of the myocardium is used. The quantified migration of myocardial mass across sector boundaries is a useful indicator of an inappropriate choice of coordinate system origin. When the centroid of the myocardium falls well within the ventricular cavity, as it usually does, it is a better location for the origin for regional analysis than any of the other centroids analyzed.
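    The sector-migration diagnostic discussed above can be sketched as follows, assuming an idealized 2-D short-axis ring and hypothetical origin choices:

```python
import numpy as np

def sector_labels(points: np.ndarray, origin: np.ndarray, n_sectors: int = 8):
    """Assign each myocardial point to an angular sector about the origin."""
    d = points - origin
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    return (theta / (2 * np.pi / n_sectors)).astype(int)

# Hypothetical myocardial ring sampled in a 2-D short-axis slice.
phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ring = np.c_[10 * np.cos(phi), 10 * np.sin(phi)]

labels_true = sector_labels(ring, np.array([0.0, 0.0]))
labels_off = sector_labels(ring, np.array([4.0, 0.0]))   # displaced origin

# A displaced origin reassigns points across sector boundaries; this apparent
# migration of mass is the flag for a poor origin choice.
moved = float(np.mean(labels_true != labels_off))
print(f"fraction of points changing sector: {moved:.2f}")
```

    With a fixed origin between phases, genuine wall motion produces the same kind of cross-boundary migration, which is why the floating systems fare better.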

  2. Costs of Success: Financial Implications of Implementation of Active Learning in Introductory Physics Courses for Students and Administrators

    ERIC Educational Resources Information Center

    Brewe, Eric; Dou, Remy; Shand, Robert

    2018-01-01

    Although active learning is supported by strong evidence of efficacy in undergraduate science instruction, institutions of higher education have yet to embrace comprehensive change. Costs of transforming instruction are regularly cited as a key factor in not adopting active-learning instructional practices. Some cite that alternative methods to…

  3. Group-theoretical model of developed turbulence and renormalization of the Navier-Stokes equation.

    PubMed

    Saveliev, V L; Gorokhovski, M A

    2005-07-01

    On the basis of the Euler equation and its symmetry properties, this paper proposes a model of stationary homogeneous developed turbulence. A regularized averaging formula for the product of two fields is obtained. An equation for the averaged turbulent velocity field is derived from the Navier-Stokes equation by renormalization-group transformation.

  4. Application of Interactive Multimedia Tools in Teaching Mathematics--Examples of Lessons from Geometry

    ERIC Educational Resources Information Center

    Milovanovic, Marina; Obradovic, Jasmina; Milajic, Aleksandar

    2013-01-01

    This article presents the benefits and importance of using multimedia in math classes through selected examples of multimedia lessons from geometry (isometric transformations and regular polyhedra). The research included two groups of 50 first-year students from the Faculty of Architecture and the Faculty of Civil Construction Management.…

  5. Nearby Exo-Earth Astrometric Telescope (NEAT)

    NASA Technical Reports Server (NTRS)

    Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R.

    2011-01-01

    NEAT (Nearby Exo-Earth Astrometric Telescope) is a modest-sized (1 m diameter) telescope concept. It will be capable of searching approximately 100 nearby stars down to 1 M_Earth planets in the habitable zone, and 200 stars at 5 M_Earth, 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise (0.5° diameter field of view); (2) optical errors (beam walk), with a long-focal-length telescope; (3) focal plane errors, with laser metrology of the focal plane; (4) PSF centroiding errors, with measurement of the "true" PSF instead of a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. The technology is close to complete: focal plane geometry to 2e-5 pixels and centroiding to approximately 4e-5 pixels.
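    PSF centroiding of the kind NEAT requires is, at its simplest, an intensity-weighted mean; a sketch on a synthetic Gaussian PSF (frame size, position and width invented; the real pipeline additionally calibrates intra-pixel QE with laser metrology):

```python
import numpy as np

def weighted_centroid(img: np.ndarray) -> tuple:
    """Intensity-weighted (center-of-light) centroid of a PSF image."""
    ys, xs = np.mgrid[0 : img.shape[0], 0 : img.shape[1]]
    total = img.sum()
    return float((ys * img).sum() / total), float((xs * img).sum() / total)

# Synthetic Gaussian PSF at a known sub-pixel position (illustrative only).
yc, xc, sigma = 16.30, 15.75, 2.0
ys, xs = np.mgrid[0:32, 0:32]
psf = np.exp(-((ys - yc) ** 2 + (xs - xc) ** 2) / (2 * sigma**2))

est = weighted_centroid(psf)
print(f"recovered centroid: ({est[0]:.3f}, {est[1]:.3f})")
```

    On real detectors, pixel-to-pixel and intra-pixel QE variations bias this estimator, which is why NEAT measures the true PSF and the focal plane geometry rather than assuming them.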

  6. Crystal structure of 2-aminopyridinium 6-chloronicotinate.

    PubMed

    Jasmine, N Jeeva; Rajam, A; Muthiah, P Thomas; Stanley, N; Razak, I Abdul; Rosli, M Mustaqim

    2015-09-01

    In the title salt, C5H7N2(+)·C6H3ClNO2(-), the 2-aminopyridinium cation interacts with the carboxylate group of the 6-chloronicotinate anion through a pair of independent N-H⋯O hydrogen bonds, forming an R₂²(8) ring motif. In the crystal, these dimeric units are connected further via N-H⋯O hydrogen bonds, forming chains along [001]. In addition, weak C-H⋯N and C-H⋯O hydrogen bonds, together with weak π-π interactions with centroid-centroid distances of 3.6560 (5) and 3.6295 (5) Å, connect the chains, forming a two-dimensional network parallel to (100).
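    The centroid-centroid distances quoted in reports like this are geometric centroids of the ring-atom coordinates; a sketch with idealized hexagonal rings (coordinates invented, not the published structure):

```python
import numpy as np

def ring_centroid(coords: np.ndarray) -> np.ndarray:
    """Geometric centroid of a set of ring-atom coordinates (angstroms)."""
    return coords.mean(axis=0)

# Two idealized six-membered rings: regular hexagons of circumradius 1.39 A
# (the benzene C-C bond length), pi-stacked with 3.4 A interplanar
# separation and 1.3 A lateral slip. Purely illustrative geometry.
phi = np.arange(6) * np.pi / 3
ring_a = np.c_[1.39 * np.cos(phi), 1.39 * np.sin(phi), np.zeros(6)]
ring_b = ring_a + np.array([1.3, 0.0, 3.4])

d = np.linalg.norm(ring_centroid(ring_b) - ring_centroid(ring_a))
print(f"centroid-centroid distance: {d:.2f} A")
```

    Here d = √(1.3² + 3.4²) ≈ 3.64 Å, within the range typically reported for offset π-π stacking.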

  7. Crystal structure of 8-hydroxyquinolinium 2-carboxy-6-nitrobenzoate monohydrate.

    PubMed

    Divya Bharathi, M; Ahila, G; Mohana, J; Chakkaravarthi, G; Anbalagan, G

    2015-04-01

    In the title hydrated salt, C9H8NO(+)·C8H4NO6(-)·H2O, the deprotonated carboxylate group is almost normal to its attached benzene ring [dihedral angle = 83.56 (8)°], whereas the protonated carboxylate group is close to parallel [dihedral angle = 24.56 (9)°]. In the crystal, the components are linked by N-H⋯O and O-H⋯O hydrogen bonds, generating [001] chains. The packing is consolidated by C-H⋯O and π-π [centroid-to-centroid distances = 3.6408 (9) and 3.6507 (9) Å] interactions, which result in a three-dimensional network.

  8. A classical phase r-centroid approach to molecular wave packet dynamics illustrating the danger of using an incomplete set of initial states for thermal averaging

    NASA Astrophysics Data System (ADS)

    Hansson, Tony

    1999-08-01

    An inexpensive semiclassical method to simulate time-resolved pump-probe spectroscopy on molecular wave packets is applied to NaK molecules at high temperature. The method builds on the introduction of classical phase factors related to the r-centroids for vibronic transitions and assumes instantaneous laser-molecule interaction. All observed quantum mechanical features are reproduced, even quantitatively for short times where experimental data are available. Furthermore, it is shown that fully quantum dynamical molecular wave packet calculations on molecules at elevated temperatures, which do not include all rovibrational states, must be regarded with caution, as they easily might yield even qualitatively incorrect results.

  9. Crystal structure and Hirshfeld surface analysis of ethyl 2-{[4-ethyl-5-(quinolin-8-yloxymethyl)-4H-1,2,4-triazol-3-yl]sulfanyl}acetate

    PubMed Central

    Bahoussi, Rawia Imane; Djafri, Ahmed; Djafri, Ayada

    2017-01-01

    In the title compound, C18H20N4O3S, the 1,2,4-triazole ring is twisted with respect to the mean plane of the quinoline moiety at 65.24 (4)°. In the crystal, molecules are linked by weak C—H⋯O and C—H⋯N hydrogen bonds, forming a three-dimensional supramolecular packing. π–π stacking between the quinoline ring systems of neighbouring molecules is also observed, the centroid-to-centroid distance being 3.6169 (6) Å. Hirshfeld surface (HS) analyses were performed. PMID:28217336

  10. Path-integral and Ornstein-Zernike study of quantum fluid structures on the crystallization line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesé, Luis M., E-mail: msese@ccia.uned.es

    2016-03-07

    Liquid neon, liquid para-hydrogen, and the quantum hard-sphere fluid are studied with path integral Monte Carlo simulations and the Ornstein-Zernike pair equation on their respective crystallization lines. The results cover the whole sets of structures in the r-space and the k-space and, for completeness, the internal energies, pressures and isothermal compressibilities. Comparison with experiment is made wherever possible, and the possibilities of establishing k-space criteria for quantum crystallization based on the path-integral centroids are discussed. In this regard, the results show that the centroid structure factor contains two significant parameters related to its main peak features (amplitude and shape) that can be useful to characterize freezing.

  11. A monoclinic polymorph of (1E,5E)-1,5-bis(2-hydroxybenzylidene)thiocarbonohydrazide.

    PubMed

    Schmitt, Bonell; Gerber, Thomas; Hosten, Eric; Betz, Richard

    2011-08-01

    The title compound, C15H14N4O2S, is a derivative of thiourea dihydrazide. In contrast to the previously reported polymorph (orthorhombic, space group Pbca, Z = 8), the current study revealed monoclinic symmetry (space group P2(1)/n, Z = 4). The molecule shows non-crystallographic C2 as well as approximate Cs symmetry. Intramolecular bifurcated O-H⋯(N,S) hydrogen bonds are present. In the crystal, intermolecular N-H⋯S hydrogen bonds and C-H⋯π contacts connect the molecules into undulating chains along the b axis. The shortest centroid-centroid distance between two aromatic systems is 4.5285 (12) Å.

  12. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
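    The two-dimensional tracking step can be sketched as nearest-neighbour matching of particle centroids between consecutive frames (coordinates, drift and frame rate are invented for illustration):

```python
import numpy as np

def match_nearest(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """For each centroid in `prev`, index of its nearest centroid in `curr`."""
    d = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical 2-D particle centroids in two consecutive frames (pixels).
frame0 = np.array([[10.0, 10.0], [50.0, 20.0], [30.0, 40.0]])
frame1 = frame0 + np.array([1.5, -0.5])      # uniform drift between frames

idx = match_nearest(frame0, frame1)
dt = 1 / 30.0                                # assumed frame interval (s)
velocities = (frame1[idx] - frame0) / dt     # pixels per second
print(velocities)
```

    In the full system, the matched 2-D tracks from the two perpendicular cameras are then stereo-matched to yield 3-D positions and velocities.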

  13. 1,3-Bis(chloromethyl)-2-methyl-5-nitrobenzene.

    PubMed

    Shao, Chang-Lun; Li, Chunyuan; Liu, Zhen; Wei, Mei-Yan; Wang, Chang-Yun

    2008-03-20

    The title compound, C9H9Cl2NO2, is a natural product isolated from the endophytic fungus No. B77 of the mangrove tree from the South China Sea coast. In the crystal structure, the molecules lie on twofold axes and form offset stacks through face-to-face π-π interactions. Adjacent molecules in each stack are related by a centre of inversion and have an interplanar separation of 3.53 (1) Å, with a centroid-centroid distance of 3.76 (1) Å. Between stacks, there are C-H⋯O interactions to the nitro groups and Cl⋯Cl contacts of 3.462 (1) Å.

  14. Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem

    NASA Astrophysics Data System (ADS)

    Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.

    2017-05-01

    In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
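    A regularized Gauss-Newton iteration of the general kind described can be sketched on a toy one-parameter decay model (the model, data and regularization weight are invented stand-ins, not the paper's bioheat system):

```python
import numpy as np

# Toy inverse problem: recover p in y(t) = exp(-p t) from noisy samples
# with a Tikhonov-regularized Gauss-Newton iteration.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 50)
p_true = 1.3
y = np.exp(-p_true * t) + 0.01 * rng.standard_normal(t.size)

lam = 1e-3   # regularization parameter (chosen by hand here, not optimally)
p = 0.5      # initial guess
for _ in range(25):
    m = np.exp(-p * t)
    r = y - m                  # data residual
    J = -t * m                 # Jacobian d m / d p (a vector, since p is scalar)
    # Regularized normal equation: (J'J + lam) step = J'r - lam * p
    p += (J @ r - lam * p) / (J @ J + lam)

print(f"estimated p = {p:.3f} (true {p_true})")
```

    In the paper the unknown is a space-dependent coefficient, so J is a full Jacobian matrix and the regularization parameter must be selected carefully, but the update has this same structure.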

  15. Urban remote sensing in areas of conflict: TerraSAR-X and Sentinel-1 change detection in the Middle East

    NASA Astrophysics Data System (ADS)

    Tapete, Deodato; Cigna, Francesca

    2016-08-01

    Timely availability of images of suitable spatial resolution, temporal frequency and coverage is currently one of the major technical constraints on the application of satellite SAR remote sensing for the conservation of heritage assets in urban environments that are impacted by human-induced transformation. TerraSAR-X and Sentinel-1A, in this regard, are two different models of SAR data provision: very high resolution on-demand imagery with end user-selected acquisition parameters, on one side, and freely accessible GIS-ready products with intended regular temporal coverage, on the other. What this means for change detection analyses in urban areas is demonstrated in this paper via an experiment over Homs, the third largest city of Syria, with a history of settlement since 2300 BCE, where the impacts of the recent civil war combine with pre- and post-conflict urban transformation. The potential performance of Sentinel-1A StripMap scenes acquired in an emergency context is simulated via the matching StripMap beam mode offered by TerraSAR-X. Benefits and limitations of the different radar frequency bands, spatial resolutions and single/multi-channel polarizations are discussed, as a proof-of-concept of the regular monitoring currently achievable with space-borne SAR in historic urban settings. Urban transformation observed across Homs in 2009, 2014 and 2015 shows the impact of the Syrian conflict on the cityscape and proves that operator-driven interpretation is required to understand the complexity of multiple and overlapping urban changes.

  16. Partial Data Traces: Efficient Generation and Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, F; De Supinski, B R; McKee, S A

    2001-08-20

    Binary manipulation techniques are increasing in popularity. They support program transformations tailored toward certain program inputs, and these transformations have been shown to yield performance gains beyond the scope of static code optimizations without profile-directed feedback. They even deliver moderate gains in the presence of profile-guided optimizations. In addition, transformations can be performed on the entire executable, including library routines. This work focuses on program instrumentation, yet another application of binary manipulation. This paper reports preliminary results on generating partial data traces through dynamic binary rewriting. The contributions are threefold. First, a portable method for extracting precise data traces for partial executions of arbitrary applications is developed. Second, a set of hierarchical structures for compactly representing these accesses is developed. Third, an efficient online algorithm to detect regular accesses is introduced. The authors utilize dynamic binary rewriting to selectively collect partial address traces of regions within a program. This allows partial tracing of hot paths for only a short time during program execution, in contrast to static rewriting techniques that lack hot path detection and also lack facilities to limit the duration of data collection. Preliminary results show reductions of three orders of magnitude for inline instrumentation over a dual-process approach involving context switching. They also report constant-size representations for regular access patterns in nested loops. These efforts are part of a larger project to counter the increasing gap between processor and main memory speeds by means of software optimization and hardware enhancements.

  17. Infrared small target enhancement: grey level mapping based on improved sigmoid transformation and saliency histogram

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian

    2018-06-01

    Infrared (IR) small target enhancement plays a significant role in modern infrared search and track (IRST) systems and is a basic technique for target detection and tracking. In this paper, a coarse-to-fine grey level mapping method using an improved sigmoid transformation and a saliency histogram is designed to enhance IR small targets under different backgrounds. In the coarse enhancement stage, the intensity histogram is modified via an improved sigmoid function so as to narrow the regular intensity range of the background as much as possible. In the fine enhancement stage, a linear transformation is performed based on a saliency histogram constructed by averaging the cumulative saliency values provided by a saliency map. Compared with other typical methods, the presented method achieves better performance in both visual and quantitative evaluations.
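    The rough-enhancement idea, remapping grey levels through a sigmoid to compress the background range while stretching the target range, can be sketched generically (this is a plain sigmoid remap with invented parameters, not the authors' improved transformation):

```python
import numpy as np

def sigmoid_map(img: np.ndarray, center: float, gain: float) -> np.ndarray:
    """Map grey levels through a sigmoid, then rescale to 0-255: levels far
    below `center` (background) are compressed, levels near and above it
    (target) are stretched."""
    x = img.astype(float) / 255.0
    s = 1.0 / (1.0 + np.exp(-gain * (x - center)))
    lo, hi = s.min(), s.max()
    return ((s - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Synthetic IR frame: dim background near grey level 80, small bright target.
rng = np.random.default_rng(2)
frame = rng.normal(80, 5, (64, 64)).clip(0, 255)
frame[30:33, 30:33] = 140                      # small target
out = sigmoid_map(frame, center=0.45, gain=12.0)

contrast_in = float(frame[31, 31] - frame.mean())
contrast_out = float(out[31, 31]) - float(out.mean())
print(contrast_in, contrast_out)
```

    Placing the sigmoid center between the background mode and the target level is what narrows the background's output range; the saliency-histogram stage then refines the mapping.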

  18. Disruptive technologies and force transformation: a Canadian perspective (Keynote Address)

    NASA Astrophysics Data System (ADS)

    Moen, Ingar O.; Walker, Robert S.

    2005-05-01

    Transformation of Canada's military forces is being pursued to ensure their relevancy and impact in light of the new defence and security environment. This environment is characterized by an increasingly complex spectrum of military operations spanning pre- and post-conflict, the emergence of an asymmetric threat that differs substantially from the peer-on-peer threat of the Cold War, and the globalization of science and technology. Disruptive technologies - those that have a profound impact on established practice - are increasingly shaping both the civil and military sectors, with advances in one sector now regularly seeding disruptions in the other. This paper postulates the likely sources of disruptive technologies over the next 10-20 years. It then looks at how science and technology investments can contribute to force transformation either to take advantage of or mitigate the effects of these disruptions.

  19. Waveform inversion in the frequency domain for the simultaneous determination of earthquake source mechanism and moment function

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Inoue, H.

    2008-06-01

    We propose a method of waveform inversion to rapidly and routinely estimate both the moment function and the centroid moment tensor (CMT) of an earthquake. In this method, waveform inversion is carried out in the frequency domain to obtain the moment function more rapidly than when solved in the time domain. We assume a pure double-couple source mechanism in order to stabilize the solution when using data from a small number of seismic stations. The fault and slip orientations are estimated by a grid search with respect to the strike, dip and rake angles. The moment function in the time domain is obtained from the inverse Fourier transform of the frequency components determined by the inversion. Since the observed waveforms used for the inversion are limited to a particular frequency band, the estimated moment function is a bandpassed form. We develop a practical approach to estimate the deconvolved form of the moment function, from which we can reconstruct detailed rupture history and the seismic moment. The source location is determined by a spatial grid search using adaptive grid spacings, which are gradually decreased in each step of the search. We apply this method to two events in Indonesia, using data from a broad-band seismic network in Indonesia (JISNET): one northeast of Sulawesi (Mw = 7.5) on 2007 January 21, and the other south of Java (Mw = 7.5) on 2006 July 17. The source centroid locations and mechanisms we estimated for both events are consistent with those determined by the Global CMT Project and the National Earthquake Information Center of the U.S. Geological Survey. The estimated rupture duration of the Sulawesi event is 16 s, which is comparable to a typical duration for earthquakes of this magnitude, while that of the Java event is anomalously long (176 s), suggesting that this event was a tsunami earthquake.
Our application demonstrates that this inversion method has great potential for rapid and routine estimations of both the CMT and the moment function, and may be useful for identification of tsunami earthquakes.
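    The frequency-domain flavour of such source estimation can be illustrated with water-level regularized spectral division on a synthetic trace (toy Green's function and source pulse; the actual method inverts for CMT parameters with a grid search rather than a single deconvolution):

```python
import numpy as np

# Recover a source time function s(t) from a synthetic trace u = g * s,
# given the Green's function g, by regularized spectral division.
n = 256
t = np.arange(n) * 0.5                          # 0.5 s sampling
s_true = np.exp(-((t - 8.0) ** 2) / 4.0)        # smooth moment-rate pulse
g = np.zeros(n); g[10] = 1.0; g[40] = -0.6      # toy two-arrival Green's fn
u = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(s_true)))

G, U = np.fft.fft(g), np.fft.fft(u)
wl = 0.01 * np.max(np.abs(G)) ** 2              # water level floors |G|^2
S = U * np.conj(G) / np.maximum(np.abs(G) ** 2, wl)
s_est = np.real(np.fft.ifft(S))

print(np.max(np.abs(s_est - s_true)))
```

    The water level keeps the division stable at frequencies where the Green's function spectrum is small, playing the role that band limitation and the deconvolution step play in the paper's moment-function estimate.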

  20. An Adaptive MR-CT Registration Method for MRI-guided Prostate Cancer Radiotherapy

    PubMed Central

    Zhong, Hualiang; Wen, Ning; Gordon, James; Elshaikh, Mohamed A; Movsas, Benjamin; Chetty, Indrin J.

    2015-01-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ/cm3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for development of high-quality MRI-guided radiation therapy. PMID:25775937

  1. An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.

    2015-04-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for the development of high-quality MRI-guided radiation therapy.
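The evaluation metrics reported in this record (relative prostate-volume change and centroid deviation between contours) can be sketched on binary voxel masks. This is an illustrative assumption: the function names, the mask representation, and the voxel spacing are not part of the authors' pipeline.

```python
import numpy as np

def volume_change_pct(mask_orig, mask_warped, voxel_volume=1.0):
    """Relative volume change (%) of a warped contour vs. the original.

    Masks are boolean voxel arrays; voxel_volume converts counts to volume.
    """
    v0 = mask_orig.sum() * voxel_volume
    v1 = mask_warped.sum() * voxel_volume
    return 100.0 * abs(v1 - v0) / v0

def centroid_deviation(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Euclidean distance between the centroids of two voxel masks."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1) * np.array(spacing)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1) * np.array(spacing)
    return float(np.linalg.norm(ca - cb))
```

With isotropic 1 mm voxels, `spacing=(0.1, 0.1, 0.1)` would express the deviation in centimetres, matching the units quoted in the abstract.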

  2. GC and GC-MS characterization of crude oil transformation in sediments and microbial mat samples after the 1991 oil spill in the Saudi Arabian Gulf coast.

    PubMed

    Garcia de Oteyza, T; Grimalt, J O

    2006-02-01

    The massive oil discharge on the Saudi Arabian coast at the end of the 1991 Gulf War is used here as a natural experiment to study the ability of microbial mats to transform oil residues after major spills. The degree of oil transformation has been evaluated from the analysis of the aliphatic and aromatic hydrocarbons by gas chromatography (GC) and GC coupled to mass spectrometry (GC-MS). The oil-polluted microbial mat samples from coastal environments exhibited an intermediate degree of transformation between that observed in superficial and deep sediments. Evaporation, photo-oxidation and water-washing seemed to lead to more effective and rapid elimination of hydrocarbons than cyanobacteria and their associated microorganisms. Furthermore, comparison of some compounds (e.g. regular isoprenoid hydrocarbons or alkylnaphthalenes) between the oil collected in the area after the spill and the mixtures retained by cyanobacterial growth revealed an apparent hydrocarbon-preservation effect in the microbial mat ecosystems.

  3. Selection of common bean genotypes for the Cerrado/Pantanal ecotone via mixed models and multivariate analysis.

    PubMed

    Corrêa, A M; Pereira, M I S; de Abreu, H K A; Sharon, T; de Melo, C L P; Ito, M A; Teodoro, P E; Bhering, L L

    2016-10-17

    The common bean, Phaseolus vulgaris, is predominantly grown on small farms and lacks accurate genotype recommendations for specific micro-regions in Brazil. This contributes to a low national average yield. The aim of this study was to use the harmonic mean of the relative performance of genetic values (HMRPGV) and the centroid method for selecting common bean genotypes with high yield, adaptability, and stability for the Cerrado/Pantanal ecotone region in Brazil. We evaluated 11 common bean genotypes in three trials carried out in the dry season in Aquidauana in 2013, 2014, and 2015. A likelihood ratio test detected a significant genotype × year interaction, contributing 54% of the total phenotypic variation in grain yield. The three genotypes selected by the joint analysis of genotypic values in all years (Carioca Precoce, BRS Notável, and CNFC 15875) were the same as those recommended by the HMRPGV method. Using the centroid method, genotypes BRS Notável and CNFC 15875 were considered ideal genotypes based on their high stability in unfavorable environments and high responsiveness to environmental improvement. We identified a high association between the adaptability and stability methods used in this study. However, the centroid method provided a more accurate and precise recommendation of the behavior of the evaluated genotypes.

  4. Contaminant Gradients in Trees: Directional Tree Coring Reveals Boundaries of Soil and Soil-Gas Contamination with Potential Applications in Vapor Intrusion Assessment.

    PubMed

    Wilson, Jordan L; Samaranayake, V A; Limmer, Matthew A; Schumacher, John G; Burken, Joel G

    2017-12-19

    Contaminated sites pose ecological and human-health risks through exposure to contaminated soil and groundwater. Whereas we can readily locate, monitor, and track contaminants in groundwater, it is harder to perform these tasks in the vadose zone. In this study, tree-core samples were collected at a Superfund site to determine if the sample-collection location around a particular tree could reveal the subsurface location, or direction, of soil and soil-gas contaminant plumes. Contaminant-centroid vectors were calculated from tree-core data to reveal contaminant distributions in directional tree samples at a higher resolution, and vectors were correlated with soil-gas characterization collected using conventional methods. Results clearly demonstrated that directional tree coring around tree trunks can indicate gradients in soil and soil-gas contaminant plumes, and the strength of the correlations was directly proportional to the magnitude of the tree-core concentration gradients (Spearman's coefficients of -0.61 and -0.55 for soil and tree-core gradients, respectively). Linear regression indicates that agreement between the concentration-centroid vectors is significantly affected by in planta and soil concentration gradients and by how close the soil concentration centroids are to the trees. Given the existing link between soil gas and vapor intrusion, this study also indicates that directional tree coring might be applicable in vapor intrusion assessment.

  5. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions, which together bound all possible embedded fuzzy sets and are referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space, determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and of other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids, with lower root mean square error and computational overhead than the existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot, even at high sampling rates. Furthermore, in the presence of measurement noise in the system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in the plant response.
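For reference, the classical iterative Karnik-Mendel procedure that the closed-form formula is compared against can be sketched for a discretized IT2 FS. The sampling grid and the convergence test are assumptions of this sketch:

```python
import numpy as np

def km_endpoint(x, lmf, umf, right=True, max_iter=100):
    """Karnik-Mendel iteration for one endpoint of an IT2 FS centroid interval.

    x: sorted sample points; lmf/umf: lower/upper membership values at x.
    The switch point k splits the domain: for the right endpoint, points at
    or below k take the lower membership and points above take the upper
    (mirrored for the left endpoint); iterate until the centroid stabilizes.
    """
    w = (lmf + umf) / 2.0
    y = np.dot(x, w) / w.sum()
    for _ in range(max_iter):
        k = np.searchsorted(x, y) - 1      # switch point: x[k] <= y < x[k+1]
        k = np.clip(k, 0, len(x) - 2)
        if right:
            w = np.where(np.arange(len(x)) <= k, lmf, umf)
        else:
            w = np.where(np.arange(len(x)) <= k, umf, lmf)
        y_new = np.dot(x, w) / w.sum()
        if np.isclose(y_new, y):
            return y_new
        y = y_new
    return y
```

The span of uncertainty discussed above is then `km_endpoint(..., right=True) - km_endpoint(..., right=False)`.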

  6. Contaminant gradients in trees: Directional tree coring reveals boundaries of soil and soil-gas contamination with potential applications in vapor intrusion assessment

    USGS Publications Warehouse

    Wilson, Jordan L.; Samaranayake, V.A.; Limmer, Matthew A.; Schumacher, John G.; Burken, Joel G.

    2017-01-01

    Contaminated sites pose ecological and human-health risks through exposure to contaminated soil and groundwater. Whereas we can readily locate, monitor, and track contaminants in groundwater, it is harder to perform these tasks in the vadose zone. In this study, tree-core samples were collected at a Superfund site to determine if the sample-collection location around a particular tree could reveal the subsurface location, or direction, of soil and soil-gas contaminant plumes. Contaminant-centroid vectors were calculated from tree-core data to reveal contaminant distributions in directional tree samples at a higher resolution, and vectors were correlated with soil-gas characterization collected using conventional methods. Results clearly demonstrated that directional tree coring around tree trunks can indicate gradients in soil and soil-gas contaminant plumes, and the strength of the correlations was directly proportional to the magnitude of the tree-core concentration gradients (Spearman's coefficients of -0.61 and -0.55 for soil and tree-core gradients, respectively). Linear regression indicates that agreement between the concentration-centroid vectors is significantly affected by in planta and soil concentration gradients and by how close the soil concentration centroids are to the trees. Given the existing link between soil gas and vapor intrusion, this study also indicates that directional tree coring might be applicable in vapor intrusion assessment.
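A contaminant-centroid vector of the kind used in these two records can be sketched as a concentration-weighted mean direction over the cores taken around a trunk. The angle convention and normalization here are assumptions, not the authors' exact formulation:

```python
import numpy as np

def centroid_vector(angles_deg, concentrations):
    """Concentration-weighted mean direction from directional samples.

    angles_deg: direction of each core sample around the trunk;
    concentrations: contaminant concentration measured in each core.
    Returns (bearing_deg, magnitude) of the concentration-centroid vector;
    magnitude near 0 means no directional gradient.
    """
    theta = np.radians(angles_deg)
    c = np.asarray(concentrations, float)
    vx = np.sum(c * np.cos(theta)) / c.sum()
    vy = np.sum(c * np.sin(theta)) / c.sum()
    bearing = np.degrees(np.arctan2(vy, vx)) % 360.0
    return float(bearing), float(np.hypot(vx, vy))
```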

  7. FACTORING TO FIT OFF DIAGONALS.

    DTIC Science & Technology

    imply an upper bound on the number of factors. When applied to somatotype data, the method improved substantially on centroid solutions and indicated a reinterpretation of earlier factoring studies. (Author)

  8. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. To accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method shows promising capabilities over conventional regularization.

  9. "Great Location, Beautiful Surroundings!" Making Sense of Information Materials Intended as Guidance for School Choice

    ERIC Educational Resources Information Center

    Johnsson, Mattias; Lindgren, Joakim

    2010-01-01

    Following international trends during the last decades of the 20th century, mechanisms of marketization, freedom of choice, and competition were introduced into the Swedish compulsory school system, thereby transforming it into one of the most de-regularized in the world. The overall aim of the pilot study presented here is to shed light on a…

  10. Does the Discourse of Employer Linked Charter Schools Signal a Commitment to Work Force Development or Transformational Learning?

    ERIC Educational Resources Information Center

    Freeman, Eric; Lakes, Richard D.

    2005-01-01

    The latest model for educational reform emerging in the US vocational-technical delivery system is the employer linked charter school (ELCS). This emerging concept is viewed as a partnership between constituents in the regular school organization and employers who are directly involved in the school's design, governance, and delivery of learning…

  11. The War on Poverty Must Be Won: Transformative Leaders Can Make a Difference

    ERIC Educational Resources Information Center

    Shields, Carolyn M.

    2014-01-01

    According to reports, almost one billion children worldwide live in poverty, many of whom find it difficult to attend school on a regular basis. Moreover, when they are able to attend, they too often find themselves unable to succeed, falling farther and farther behind their more affluent peers. By attending to a number of relevant research…

  12. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of the Bi-Haar coefficients provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that the Bi-Haar p-values are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

  13. Determination of fundamental asteroseismic parameters using the Hilbert transform

    NASA Astrophysics Data System (ADS)

    Kiefer, René; Schad, Ariane; Herzberg, Wiebke; Roth, Markus

    2015-06-01

    Context. Solar-like oscillations exhibit a regular pattern of frequencies. This pattern is dominated by the small and large frequency separations between modes. The accurate determination of these parameters is of great interest, because they give information about e.g. the evolutionary state and the mass of a star. Aims: We want to develop a robust method to determine the large and small frequency separations for time series with low signal-to-noise ratio. For this purpose, we analyse a time series of the Sun from the GOLF instrument aboard SOHO and a time series of the star KIC 5184732 from the NASA Kepler satellite by employing a combination of the Fourier and Hilbert transforms. Methods: We use the analytic signal of filtered stellar oscillation time series to compute the signal envelope. Spectral analysis of the signal envelope then reveals frequency differences of dominant modes in the periodogram of the stellar time series. Results: With the described method, the large frequency separation Δν can be extracted from the envelope spectrum even for data with a poor signal-to-noise ratio. A modification of the method allows for an overview of the regularities in the periodogram of the time series.
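The envelope-spectrum idea can be illustrated directly with an analytic signal: two modes separated in frequency beat at their frequency difference, so the spectrum of the envelope peaks near that separation. A minimal sketch (the band-pass filtering applied to real stellar time series is omitted):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum_peak(signal, dt):
    """Frequency of the strongest peak in the spectrum of the signal envelope.

    The envelope is the modulus of the analytic signal (Hilbert transform);
    for two beating modes its dominant oscillation sits at their frequency
    difference, which estimates the mode separation.
    """
    env = np.abs(hilbert(signal))
    env -= env.mean()                      # remove the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs[spec.argmax()]
```

For example, a signal containing modes at 10 Hz and 10.5 Hz yields an envelope-spectrum peak near 0.5 Hz.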

  14. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    NASA Astrophysics Data System (ADS)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on the total variation and a Besov seminorm for the regularization term. To implement the Besov seminorm, the curvelet transform is used together with the L1 norm, which enforces sparsity. This allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To solve the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms, both visually and quantitatively, the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were also successfully obtained.

  15. Influence of viscous dissipation on a copper oxide nanofluid in an oblique channel: Implementation of the KKL model

    NASA Astrophysics Data System (ADS)

    Ahmed, Naveed; Adnan; Khan, Umar; Mohyud-Din, Syed Tauseef; Manzoor, Raheela

    2017-05-01

    This paper studies the flow of a nanofluid in the presence of viscous dissipation in an oblique channel (nonparallel plane walls). For the thermal conductivity of the nanofluid, the KKL model is utilized. Water is taken as the base fluid and is assumed to contain solid nanoparticles of copper oxide. The appropriate set of partial differential equations is transformed into a self-similar system with the help of feasible similarity transformations. The solution of the model is obtained analytically and, to ensure the validity of the analytical solutions, a numerical one is also calculated. The homotopy analysis method (HAM) and the Runge-Kutta numerical method (coupled with a shooting technique) have been employed for this purpose. The influence of the different flow parameters of the model on the velocity, thermal field, skin friction coefficient and local rate of heat transfer has been discussed with the help of graphs. Furthermore, a graphical comparison between the local rate of heat transfer in regular fluids and nanofluids has been made, which shows that in nanofluids heat transfer is more rapid than in regular fluids.

  16. Two-layer contractive encodings for learning stable nonlinear features.

    PubMed

    Schulz, Hannes; Cho, Kyunghyun; Raiko, Tapani; Behnke, Sven

    2015-04-01

    Unsupervised learning of feature hierarchies is often a good strategy to initialize deep architectures for supervised learning. Most existing deep learning methods build these feature hierarchies layer by layer in a greedy fashion using either auto-encoders or restricted Boltzmann machines. Both yield encoders which compute linear projections of input followed by a smooth thresholding function. In this work, we demonstrate that these encoders fail to find stable features when the required computation is in the exclusive-or class. To overcome this limitation, we propose a two-layer encoder which is less restricted in the type of features it can learn. The proposed encoder is regularized by an extension of previous work on contractive regularization. This proposed two-layer contractive encoder potentially poses a more difficult optimization problem, and we further propose to linearly transform hidden neurons of the encoder to make learning easier. We demonstrate the advantages of the two-layer encoders qualitatively on artificially constructed datasets as well as commonly used benchmark datasets. We also conduct experiments on a semi-supervised learning task and show the benefits of the proposed two-layer encoders trained with the linear transformation of perceptrons. Copyright © 2014 Elsevier Ltd. All rights reserved.
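The contractive regularization that the proposed two-layer encoder extends penalizes the Frobenius norm of the encoder's Jacobian with respect to the input. For a single sigmoid layer h = σ(xW + b) this penalty has a closed form, sketched below; this is a one-layer illustration of the penalty only, not the paper's two-layer architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(X, W, b):
    """Squared Frobenius norm of the encoder Jacobian, averaged over a batch.

    For h = sigmoid(X @ W + b), dh_j/dx_i = h_j * (1 - h_j) * W_ij, so
    ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W_ij^2 per example.
    """
    H = sigmoid(X @ W + b)
    return float(np.mean(np.sum((H * (1 - H))**2 * np.sum(W**2, axis=0),
                                axis=1)))
```

In training, this term is added to the reconstruction loss with a weighting coefficient, shrinking the encoder's sensitivity to input perturbations.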

  17. The Replicator Equation on Graphs

    PubMed Central

    Ohtsuki, Hisashi; Nowak, Martin A.

    2008-01-01

    We study evolutionary games on graphs. Each player is represented by a vertex of the graph. The edges denote who meets whom. A player can use any one of n strategies. Players obtain a payoff from interaction with all their immediate neighbors. We consider three different update rules, called ‘birth-death’, ‘death-birth’ and ‘imitation’. A fourth update rule, ‘pairwise comparison’, is shown to be equivalent to birth-death updating in our model. We use pair-approximation to describe the evolutionary game dynamics on regular graphs of degree k. In the limit of weak selection, we can derive a differential equation which describes how the average frequency of each strategy on the graph changes over time. Remarkably, this equation is a replicator equation with a transformed payoff matrix. Therefore, moving a game from a well-mixed population (the complete graph) onto a regular graph simply results in a transformation of the payoff matrix. The new payoff matrix is the sum of the original payoff matrix plus another matrix, which describes the local competition of strategies. We discuss the application of our theory to four particular examples, the Prisoner’s Dilemma, the Snow-Drift game, a coordination game and the Rock-Scissors-Paper game. PMID:16860343
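The transformed replicator dynamics described above can be written out explicitly. The death-birth form of the local-competition matrix quoted here is recalled from the paper and should be checked against it:

```latex
\dot{x}_i = x_i\left[f_i(\mathbf{x}) - \phi(\mathbf{x})\right],
\qquad
f_i = \sum_{j=1}^{n} (a_{ij} + b_{ij})\, x_j,
\qquad
\phi = \sum_{i=1}^{n} x_i f_i,
% for death-birth updating on a regular graph of degree k:
b_{ij} = \frac{a_{ii} + a_{ij} - a_{ji} - a_{jj}}{k - 2}.
```

Here A = (a_ij) is the original payoff matrix and B = (b_ij) the matrix describing local competition of strategies; on the complete graph the correction vanishes and the standard replicator equation is recovered.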

  18. Determination of heat transfer parameters by use of finite integral transform and experimental data for regular geometric shapes

    NASA Astrophysics Data System (ADS)

    Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad

    2017-12-01

    This article offers a study on the estimation of heat transfer parameters (coefficient and thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (such as an infinite slab, an infinite cylinder, and a sphere). Analytical solutions have a broad use in experimentally determining these parameters. Here, the method of Finite Integral Transform (FIT) was used to solve the governing differential equations. The temperature change at the centerline of the regular shapes was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass were used for testing. Experiments were performed for different conditions, such as in a highly agitated water medium (T = 52 °C) and in an air medium (T = 25 °C). Then, with the known slope of the temperature ratio vs. time curve and the thickness of the slab or the radius of the cylindrical or spherical material, the thermal diffusivity and heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivity of aluminum and brass is 8.395 × 10^-5 and 3.42 × 10^-5 for a slab, 8.367 × 10^-5 and 3.41 × 10^-5 for a cylindrical rod, and 8.385 × 10^-5 and 3.40 × 10^-5 m^2/s for a spherical shape, respectively. The results showed close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
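The slope-based estimation step can be sketched with the standard one-term transient-conduction series, where θ/θ_i = C1·exp(−λ1²·Fo) and Fo = αt/L². The eigenvalue λ1 depends on the shape and Biot number (e.g. π/2 for a slab with negligible surface resistance) and is an input assumption of this sketch, not a value from the article:

```python
import numpy as np

def diffusivity_from_slope(times, temp_ratio, char_length, lambda1):
    """Estimate thermal diffusivity from a log-linear heating/cooling curve.

    One-term series: theta/theta_i = C1 * exp(-lambda1^2 * alpha * t / L^2),
    so the slope of ln(theta ratio) vs. t equals -lambda1^2 * alpha / L^2
    and alpha = -slope * L^2 / lambda1^2.
    """
    slope = np.polyfit(times, np.log(temp_ratio), 1)[0]
    return -slope * char_length**2 / lambda1**2
```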

  19. Optimum parameters of image preprocessing method for Shack-Hartmann wavefront sensor in different SNR condition

    NASA Astrophysics Data System (ADS)

    Wei, Ping; Li, Xinyang; Luo, Xi; Li, Jianfeng

    2018-02-01

    The centroid method is commonly adopted to locate the spots in the sub-apertures of the Shack-Hartmann wavefront sensor (SH-WFS), and image preprocessing is required before calculating the spot location because the centroid method is extremely sensitive to noise. In this paper, the SH-WFS image was simulated according to the characteristics of the noise, background and intensity distribution. Optimal parameters of the SH-WFS image preprocessing method are put forward for different signal-to-noise ratio (SNR) conditions, with the wavefront reconstruction error taken as the evaluation index. Two image preprocessing methods, thresholding and windowing combined with thresholding, were compared by studying the applicable SNR range and analyzing the stability of each method.
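The two preprocessing variants for sub-aperture spot location can be sketched as follows (thresholding alone corresponds to `window=None`). The window convention and names are illustrative, not the paper's implementation:

```python
import numpy as np

def spot_centroid(img, threshold, window=None):
    """Center-of-mass spot location after thresholding, optionally windowed.

    Pixels below `threshold` are zeroed; if `window` = (y0, y1, x0, x1) is
    given, only that sub-region contributes, mimicking windowing combined
    with thresholding. Returns (y, x) in pixel coordinates.
    """
    img = np.asarray(img, float)
    if window is None:
        y0, y1, x0, x1 = 0, img.shape[0], 0, img.shape[1]
    else:
        y0, y1, x0, x1 = window
    mask = np.zeros_like(img)
    mask[y0:y1, x0:x1] = 1.0
    w = np.where(img >= threshold, img, 0.0) * mask
    total = w.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (ys * w).sum() / total, (xs * w).sum() / total
```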

  20. Kinematic model for the space-variant image motion of star sensors under dynamical conditions

    NASA Astrophysics Data System (ADS)

    Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun

    2015-06-01

    A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, which involves all necessary parameters such as the image motion, velocity, and attitude parameters of the vehicle. Stars at different locations in the focal plane correspond to slightly different orientations and extents of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid change of less than 0.002 pixel over eight successive iterations is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating compensation algorithms for motion-blurred images.
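A Richardson-Lucy iteration with a centroid-based termination criterion of the kind described in this record can be sketched as follows; the initialization and the patience bookkeeping are assumptions of this sketch:

```python
import numpy as np
from scipy.signal import fftconvolve

def centroid(img):
    """Intensity-weighted centroid (y, x) of an image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    t = img.sum()
    return (ys * img).sum() / t, (xs * img).sum() / t

def rl_deconvolve(blurred, psf, max_iter=200, tol=0.002, patience=8):
    """Richardson-Lucy deconvolution that stops once the centroid of the
    estimate moves less than `tol` pixels for `patience` successive
    iterations, mirroring the termination criterion in the abstract."""
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_mirror = psf[::-1, ::-1]
    quiet = 0
    prev_c = centroid(est)
    for _ in range(max_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # avoid division by zero
        est = est * fftconvolve(ratio, psf_mirror, mode="same")
        c = centroid(est)
        moved = np.hypot(c[0] - prev_c[0], c[1] - prev_c[1])
        quiet = quiet + 1 if moved < tol else 0
        prev_c = c
        if quiet >= patience:
            break
    return est
```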

  1. 2,2'-[2,4-Bis(naphthalen-1-yl)cyclobutane-1,3-diyl]bis(1-methylpyridinium) diiodide: thermally induced [2 + 2] cycloaddition reaction of a heterostilbene.

    PubMed

    Chantrapromma, Suchada; Chanawanno, Kullapa; Boonnak, Nawong; Fun, Hoong-Kun

    2012-01-01

    The asymmetric unit of the title compound, C36H32N2(2+)·2I(-), consists of one half-molecule of the cation and one I(-) anion. The cation is located on an inversion centre. The dihedral angle between the pyridinium ring and the naphthalene ring system in the asymmetric unit is 19.01 (14)°. In the crystal, the cations and the anions are linked by C-H⋯I interactions into a layer parallel to the bc plane. Intra- and intermolecular π-π interactions with centroid-centroid distances of 3.533 (2)-3.807 (2) Å are also observed.

  2. CCD centroiding experiment for JASMINE and ILOM

    NASA Astrophysics Data System (ADS)

    Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Tsujimoto, Takuji; Nakajima, Tadashi; Kawano, Nobuyuki; Tazawa, Seiichi; Yamada, Yoshiyuki; Hanada, Hideo; Asari, Kazuyoshi; Tsuruta, Seiitsu

    2006-06-01

    JASMINE and ILOM are space missions which are in progress at the National Astronomical Observatory of Japan. These two projects need a common astrometric technique to obtain precise positions of star images on solid state detectors to accomplish their objectives. We have carried out measurements of the centroids of artificial star images on a CCD to investigate the accuracy of the star positions, using an algorithm that estimates them from photon-weighted means of the stars. We find that the accuracy of the star positions reaches 1/300 pixel for one measurement. We also measure the positions of stars using an algorithm that corrects the distorted optical image. Finally, we find that the accuracy of the measurement of the positions of the stars from the strongly distorted image is under 1/150 pixel for one measurement.

  3. Crystal structure of 8-hydroxyquinolinium 2-carboxy-6-nitrobenzoate monohydrate

    PubMed Central

    Divya Bharathi, M.; Ahila, G.; Mohana, J.; Chakkaravarthi, G.; Anbalagan, G.

    2015-01-01

    In the title hydrated salt, C9H8NO+·C8H4NO6−·H2O, the deprotonated carboxylate group is almost normal to its attached benzene ring [dihedral angle = 83.56 (8)°], whereas the protonated carboxylate group is close to parallel [dihedral angle = 24.56 (9)°]. In the crystal, the components are linked by N—H⋯O and O—H⋯O hydrogen bonds, generating [001] chains. The packing is consolidated by C—H⋯O and π–π [centroid-to-centroid distances = 3.6408 (9) and 3.6507 (9) Å] interactions, which result in a three-dimensional network. PMID:26029446

  4. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real time processing of video images having a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
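A software analogue of the grouping-plus-centroid pipeline: flood-fill contiguous pixels above a threshold into objects, then report each object's intensity-weighted centroid. The real chip processes streaming raster data in hardware; this frame-based sketch only illustrates the computation:

```python
import numpy as np
from collections import deque

def object_centroids(frame, threshold=1):
    """Group contiguous pixels (4-connectivity) at or above `threshold` into
    objects and return each object's intensity-weighted centroid (y, x),
    in raster-scan order of discovery."""
    frame = np.asarray(frame)
    visited = np.zeros(frame.shape, bool)
    centroids = []
    for y in range(frame.shape[0]):
        for x in range(frame.shape[1]):
            if frame[y, x] >= threshold and not visited[y, x]:
                # flood-fill one object starting from this seed pixel
                q, pixels = deque([(y, x)]), []
                visited[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < frame.shape[0] and 0 <= nx < frame.shape[1]
                                and frame[ny, nx] >= threshold
                                and not visited[ny, nx]):
                            visited[ny, nx] = True
                            q.append((ny, nx))
                w = np.array([frame[p] for p in pixels], float)
                ys = np.array([p[0] for p in pixels], float)
                xs = np.array([p[1] for p in pixels], float)
                centroids.append((float((ys * w).sum() / w.sum()),
                                  float((xs * w).sum() / w.sum())))
    return centroids
```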

  5. Crystal structure of 2-(1,3-dioxoindan-2-yl)isoquinoline-1,3,4-trione.

    PubMed

    Ghalib, Raza Murad; Chidan Kumar, C S; Hashim, Rokiah; Sulaiman, Othman; Fun, Hoong-Kun

    2015-01-01

    In the title isoquinoline-1,3,4-trione derivative, C18H9NO5, the five-membered ring of the indane fragment adopts an envelope conformation with the nitrogen-substituted C atom as the flap. The planes of the indane benzene ring and the isoquinoline-1,3,4-trione ring make a dihedral angle of 82.06 (6)°. In the crystal, molecules are linked into chains extending along the bc plane via C-H⋯O hydrogen-bonding interactions, enclosing R₂²(8) and R₂²(10) loops. The chains are further connected by π–π stacking interactions, with centroid-to-centroid distances of 3.9050 (7) Å, forming layers parallel to the b axis.

  6. Nonlinear Motion Tracking by Deep Learning Architecture

    NASA Astrophysics Data System (ADS)

    Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.

    2018-03-01

    In the world of Artificial Intelligence, object motion tracking is one of the major problems, and extensive research is being carried out on tracking people in crowds. This paper presents a technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly, accumulating the gradient over a few previous iterations instead of using just the current iteration, as is the norm. We show that for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.

  7. Regular scattering patterns from near-cloaking devices and their implications for invisibility cloaking

    NASA Astrophysics Data System (ADS)

    Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng

    2013-04-01

    In this paper, we consider invisibility cloaking via the transformation optics approach through a ‘blow-up’ construction. An ideal cloak makes use of singular cloaking material. ‘Blow-up-a-small-region’ and ‘truncation-of-singularity’ constructions are introduced to avoid the singular structure; however, they give only near-cloaks. Studies in the literature develop various mechanisms to achieve high-accuracy approximate near-cloaking devices and, from a practical viewpoint, to nearly cloak arbitrary content. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattered wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, which is an intrinsic defect due to the ‘blow-up’ construction; this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even using phaseless cross-section data. The construction employing a high-density layer lining shows a certain promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and thereby achieve a near-cloak of an arbitrary order of accuracy.

  8. A tale of four practices.

    PubMed

    Miller-Day, Michelle; Applequist, Janelle; Zabokrtsky, Keri; Dalton, Alexandra; Kellom, Katherine; Gabbay, Robert; Cronholm, Peter F

    2017-09-18

    Purpose The Patient-Centered Medical Home (PCMH) has become a dominant model of primary care re-design. This transformation presents a challenge to many care delivery organizations. The purpose of this paper is to describe attributes shaping successful and unsuccessful practice transformation within four medical practice groups. Design/methodology/approach As part of a larger study of 25 practices transitioning into a PCMH, the current study focused on diabetes care and identified high- and low-improvement medical practices in terms of quantitative patient measures of glycosylated hemoglobin and qualitative assessments of practice performance. A subset of the top two high-improvement and bottom two low-improvement practices were identified as comparison groups. Semi-structured interviews were conducted with diverse personnel at these practices to investigate their experiences with practice transformation, and data were analyzed using analytic induction. Findings Results show a variety of key attributes facilitating more successful PCMH transformation, such as empanelment; shared goals and regular meetings; a clear understanding of PCMH transformation purposes, goals, and benefits; provision of care/case management services; and patient reminders. Several barriers to successful transformation also exist, such as low levels of resources to handle financial expense, lack of understanding of PCMH transformation purposes, goals, and benefits, inadequate training and management of technology, and low team cohesion. Originality/value Few studies qualitatively compare and contrast high- and low-performing practices to illuminate the experience of practice transformation. These findings highlight the experience of organizational members and their challenges in practice transformation while providing quality diabetes care.

  9. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out from the ROI that contains the cornea and pupil (but not from the rest of the image). One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye.
The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate.
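The row-slice strategy described above can be illustrated with a small sketch (a hypothetical thresholding scheme on a synthetic eye image; the actual system works on infrared video and is considerably more robust):

```python
import numpy as np

def pupil_centroid(gray, dark_thresh=60):
    """For each row, find the span of dark (pupil) pixels; combine the
    midpoints and widths of all valid slices into a robust centroid."""
    ys, xs, ws = [], [], []
    for y, row in enumerate(gray):
        dark = np.flatnonzero(row < dark_thresh)
        if dark.size:
            left, right = dark[0], dark[-1]
            ys.append(y)
            xs.append((left + right) / 2.0)  # slice midpoint, exploits circle symmetry
            ws.append(right - left + 1)      # wider slices carry more weight
    ws = np.asarray(ws, float)
    return (np.average(ys, weights=ws), np.average(xs, weights=ws))

# Synthetic eye image: bright background with a dark disc ("pupil") at (12, 20).
yy, xx = np.mgrid[0:24, 0:40]
img = np.where((yy - 12) ** 2 + (xx - 20) ** 2 < 36, 30, 200).astype(np.uint8)
cy, cx = pupil_centroid(img)
print(round(cy, 1), round(cx, 1))  # close to (12.0, 20.0)
```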

  10. Multi-scale coarse-graining of non-conservative interactions in molecular liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izvekov, Sergei, E-mail: sergiy.izvyekov.civ@mail.mil; Rice, Betsy M.

    2014-03-14

    A new bottom-up procedure for constructing non-conservative (dissipative and stochastic) interactions for dissipative particle dynamics (DPD) models is described and applied to perform hierarchical coarse-graining of a polar molecular liquid (nitromethane). The distance-dependent radial and shear frictions in functional-free form are derived consistently with a chosen form for conservative interactions by matching two-body force-velocity and three-body velocity-velocity correlations along the microscopic trajectories of the centroids of Voronoi cells (clusters), which represent the dissipative particles within the DPD description. The Voronoi tessellation is achieved by application of the K-means clustering algorithm at regular time intervals. Consistent with the notion of many-body DPD, the conservative interactions are determined through the multi-scale coarse-graining (MS-CG) method, which naturally implements a pairwise decomposition of the microscopic free energy. A hierarchy of MS-CG/DPD models starting with one molecule per Voronoi cell and up to 64 molecules per cell is derived. The radial contribution to the friction appears to be dominant for all models. As the Voronoi cell sizes increase, the dissipative forces rapidly become confined to the first coordination shell. For Voronoi cells of two and more molecules the time dependence of the velocity autocorrelation function becomes monotonic and well reproduced by the respective MS-CG/DPD models. A comparative analysis of force and velocity correlations in the atomistic and CG ensembles indicates Markovian behavior with as low as two molecules per dissipative particle. The models with one and two molecules per Voronoi cell yield transport properties (diffusion and shear viscosity) that are in good agreement with the atomistic data. 
The coarser models produce slower dynamics that can be appreciably attributed to unaccounted dissipation introduced by regular Voronoi re-partitioning as well as by larger numerical errors in mapping out the dissipative forces. The framework presented herein can be used to develop computational models of real liquids which are capable of bridging the atomistic and mesoscopic scales.
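The regular Voronoi re-partitioning step, K-means applied to molecular centroids, can be sketched as follows (illustrative random coordinates; plain Lloyd's algorithm, not the authors' production code):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: the resulting assignment is exactly the
    Voronoi tessellation generated by the final cell centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centroid, then nearest-cell assignment
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = points[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign

rng = np.random.default_rng(1)
mol_centers = rng.random((200, 3))       # stand-in for molecular center-of-mass coordinates
cells, assign = kmeans(mol_centers, k=8)
print(cells.shape, np.bincount(assign))  # 8 cell centroids and the cell populations
```

In the paper's workflow this clustering is re-run at regular time intervals along the atomistic trajectory, so the dissipative particles follow the evolving molecular configuration.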

  11. Assembling the Streptococcus thermophilus clustered regularly interspaced short palindromic repeats (CRISPR) array for multiplex DNA targeting.

    PubMed

    Guo, Lijun; Xu, Kun; Liu, Zhiyuan; Zhang, Cunfang; Xin, Ying; Zhang, Zhiying

    2015-06-01

    In addition to being scalable, affordable, and easy to engineer, the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein (Cas) technology is superior for multiplex targeting, which is laborious and inconvenient to achieve by cloning multiple gRNA-expressing cassettes. Here, we report a simple CRISPR array assembly method that will facilitate multiplex targeting. First, the Streptococcus thermophilus CRISPR3/Cas locus was cloned. Second, different CRISPR arrays were assembled with different crRNA spacers. Transformation assays using different Escherichia coli strains demonstrated efficient plasmid DNA targeting, and we achieved targeting efficiency of up to 95% with an assembled CRISPR array carrying three crRNA spacers. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Representation of viruses in the remediated PDB archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawson, Catherine L., E-mail: cathy.lawson@rutgers.edu; Dutta, Shuchismita; Westbrook, John D.

    2008-08-01

    A new data model for PDB entries of viruses and other biological assemblies with regular noncrystallographic symmetry is described. A new scheme has been devised to represent viruses and other biological assemblies with regular noncrystallographic symmetry in the Protein Data Bank (PDB). The scheme describes existing and anticipated PDB entries of this type using generalized descriptions of deposited and experimental coordinate frames, symmetry and frame transformations. A simplified notation has been adopted to express the symmetry generation of assemblies from deposited coordinates and matrix operations describing the required point, helical or crystallographic symmetry. Complete correct information for building full assemblies, subassemblies and crystal asymmetric units of all virus entries is now available in the remediated PDB archive.

  13. Strategies to avoid false negative findings in residue analysis using liquid chromatography coupled to time-of-flight mass spectrometry.

    PubMed

    Kaufmann, Anton; Butcher, Patrick

    2006-01-01

    Liquid chromatography coupled to orthogonal acceleration time-of-flight mass spectrometry (LC/TOF) provides an attractive alternative to liquid chromatography coupled to triple quadrupole mass spectrometry (LC/MS/MS) in the field of multiresidue analysis. The sensitivity and selectivity of LC/TOF approach those of LC/MS/MS. TOF provides accurate mass information and a significantly higher mass resolution than quadrupole analyzers. The available mass resolution of commercial TOF instruments, ranging from 10 000 to 18 000 full width at half maximum (FWHM), is not, however, sufficient to completely exclude the problem of isobaric interferences (co-elution of analyte ions with matrix compounds of very similar mass). Due to the required data storage capacity, TOF raw data is commonly centroided before being electronically stored. However, centroiding can lead to a loss of data quality. The co-elution of a low-intensity analyte peak with an isobaric, high-intensity matrix compound can cause problems. Some centroiding algorithms might not be capable of deconvoluting such partially merged signals, leading to incorrect centroids. Co-elution of isobaric compounds has been deliberately simulated by injecting diluted binary mixtures of isobaric model substances at various relative intensities. Depending on the mass differences between the two isobaric compounds and the resolution provided by the TOF instrument, significant deviations in exact mass measurements and signal intensities were observed. The extraction of a reconstructed ion chromatogram based on very narrow mass windows can even result in the complete loss of the analyte signal. Guidelines have been proposed to avoid such problems. The use of sub-2 µm HPLC packing materials is recommended to improve chromatographic resolution and to reduce the risk of co-elution. The width of the extraction mass windows for reconstructed ion chromatograms should be defined according to the resolution of the TOF instrument. 
Alternative approaches include the spiking of the sample with appropriate analyte concentrations. Furthermore, enhanced software, capable of deconvoluting partially merged mass peaks, may become available. Copyright (c) 2006 John Wiley & Sons, Ltd.
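The centroid bias behind these guidelines can be reproduced numerically (illustrative masses, intensities, and resolution; the point is only that a merged profile yields one centroid pulled toward the intense interferent):

```python
import numpy as np

# Illustrative numbers: an analyte at m/z 300.000 co-eluting with a
# 10x more intense matrix ion at m/z 300.020, at ~10 000 FWHM resolution,
# so the two profiles merge into a single peak before centroiding.
mz = np.linspace(299.9, 300.1, 4001)
fwhm = 300.0 / 10000.0
sigma = fwhm / 2.3548

def peak(center, height):
    return height * np.exp(-0.5 * ((mz - center) / sigma) ** 2)

profile = peak(300.000, 1.0) + peak(300.020, 10.0)
centroid = np.sum(mz * profile) / np.sum(profile)  # single merged centroid
print(round(centroid, 4))  # pulled toward the intense interferent, ~300.018
```

A narrow extraction window centered on 300.000 would then miss this centroid entirely, which is the "complete loss of the analyte signal" the abstract warns about.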

  14. Using molecular principal axes for structural comparison: determining the tertiary changes of a FAB antibody domain induced by antigenic binding

    PubMed Central

    Silverman, B David

    2007-01-01

    Background Comparison of different protein x-ray structures has previously been made in a number of different ways; for example, by visual examination, by differences in the locations of secondary structures, by explicit superposition of structural elements, e.g. α-carbon atom locations, or by procedures that utilize a common symmetry element or geometrical feature of the structures to be compared. Results A new approach is applied to determine the structural changes that an antibody protein domain experiences upon its interaction with an antigenic target. These changes are determined with the use of two different, however comparable, sets of principal axes that are obtained by diagonalizing the second-order tensors that yield the moments-of-geometry as well as an ellipsoidal characterization of domain shape, prior to and after interaction. Determination of these sets of axes for structural comparison requires no internal symmetry features of the domains, depending solely upon their representation in three-dimensional space. This representation may involve atomic, Cα, or residue centroid coordinates. The present analysis utilizes residue centroids. When the structural changes are minimal, the principal axes of the domains, prior to and after interaction, are essentially comparable and consequently may be used for structural comparison. When the differences of the axes cannot be neglected, but are nevertheless slight, a smaller relatively invariant substructure of the domains may be utilized for comparison. The procedure yields two distance metrics for structural comparison. First, the displacements of the residue centroids due to antigenic binding, referenced to the ellipsoidal principal axes, are noted. Second, changes in the ellipsoidal distances with respect to the non-interacting structure provide a direct measure of the spatial displacements of the residue centroids, towards either the interior or exterior of the domain. 
Conclusion With use of x-ray data from the protein data bank (PDB), these two metrics are shown to highlight, in a manner different from before, the structural changes that are induced in the overall domains as well as in the H3 loops of the complementarity-determining regions (CDR) upon FAB antibody binding to a truncated and to a synthetic hemagglutinin viral antigenic target. PMID:17996091
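The diagonalization step underlying the method can be sketched as follows (a synthetic point cloud stands in for residue centroids; the axis ordering and scaling conventions are illustrative, not the paper's):

```python
import numpy as np

def principal_axes(centroids):
    """Diagonalize the second-order moment-of-geometry tensor of a set of
    residue centroids; eigenvectors give the principal axes, eigenvalues
    the squared semi-axis scales of an ellipsoidal shape description."""
    x = centroids - centroids.mean(axis=0)   # refer to the center of geometry
    tensor = x.T @ x / len(x)                # 3x3 second-moment tensor
    evals, evecs = np.linalg.eigh(tensor)    # eigh returns ascending eigenvalues
    return evals[::-1], evecs[:, ::-1]       # largest axis first

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 1.0])  # elongated cloud
evals, axes = principal_axes(pts)
print(np.round(np.sqrt(evals), 1))  # roughly [5. 2. 1.]
```

Comparing a structure before and after binding amounts to expressing both sets of centroids in their respective axes and tracking the displacement of each residue, as the abstract describes.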

  15. SU-F-J-142: Proposed Method to Broaden Inclusion Potential of Patients Able to Use the Calypso Tracking System in Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiedler, D; Kuo, H; Bodner, W

    2016-06-15

    Purpose: To introduce a non-standard method of patient setup, using BellyBoard immobilization, to better utilize the localization and tracking potential of an RF-beacon system with EBRT for prostate cancer. Methods: An RF-beacon phantom was imaged using a wide-bore CT scanner, both in a standard level position and with a known rotation (4° pitch and 7.5° yaw). A commercial treatment planning system (TPS) was used to determine positional coordinates of each beacon, and the centroid of the three beacons, for both setups. For each setup at the Linac, kV AP and Rt Lateral images were obtained. A full characterization of the RF-beacon system in clinical mode was completed for various beacons' array-to-centroid distances, which includes vertical, lateral, and longitudinal offset data, as well as pitch and yaw offset measurements for the tilted phantom. For the single patient who was set up using the proposed BellyBoard method, a supine simulation was first obtained. When abdominal protrusion was found to exceed the limits of the RF-beacon system through distance-based analysis in the TPS, the patient was re-simulated prone with the BellyBoard. The array-to-centroid distance was measured again in the TPS and, if found to be within the localization or tracking region, the setup was applied. Results: Characterization of limitations for the RF-beacon system in clinical mode showed acceptable consistency of offset determination for phantom setup accuracy. The nonstandard patient setup method reduced the beacons' centroid-to-array distance by 8.32 cm, from 25.13 cm to 16.81 cm; completely out of tracking range (greater than 20 cm) to within setup tracking range (less than 20 cm). Conclusion: Using the RF-beacon system in combination with this novel patient setup can allow patients who would otherwise not be candidates for beacon-enhanced EBRT to benefit from the reduced PTV margins of this treatment method.
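The distance-based analysis in the TPS reduces to a simple computation (the beacon and array coordinates below are invented; only the 20 cm tracking limit comes from the abstract):

```python
# Hypothetical beacon coordinates (cm) exported from a TPS; the tracking
# criterion is an array-to-centroid distance under 20 cm.
beacons = [(1.2, -3.5, 10.1), (-0.8, -2.9, 11.4), (0.4, -4.2, 12.0)]
array_pos = (0.0, 10.0, 11.0)

# Centroid of the three beacons, then Euclidean distance to the array.
centroid = tuple(sum(c) / 3.0 for c in zip(*beacons))
dist = sum((a - b) ** 2 for a, b in zip(array_pos, centroid)) ** 0.5
print(round(dist, 2), "tracking" if dist < 20.0 else "out of range")  # 13.54 tracking
```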

  16. [Non-rigid medical image registration based on mutual information and thin-plate spline].

    PubMed

    Cao, Guo-gang; Luo, Li-min

    2009-01-01

    To get precise and complete details, comparison of different images is needed in medical diagnosis and computer-assisted treatment. Image registration is the basis of such comparison, but regular rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate splines is presented. First, the two images are registered globally based on mutual information; second, the reference image and the globally registered image are divided into blocks, which are registered; then the thin-plate spline transformation is obtained from the shifts of the blocks' centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information; by obtaining the control points of the thin-plate transformation automatically, it reduces the complexity of finding control points and better satisfies clinical requirements.

  17. Development of human epithelial cell systems for radiation risk assessment

    NASA Astrophysics Data System (ADS)

    Yang, C. H.; Craise, L. M.

    1994-10-01

    The most important health effect of space radiation for astronauts is cancer induction. For radiation risk assessment, an understanding of the carcinogenic effect of heavy ions in human cells is most essential. In our laboratory, we have successfully developed a human mammary epithelial cell system for studying neoplastic transformation in vitro. Growth variants were obtained from a heavy-ion-irradiated immortal mammary cell line. These cloned growth variants can grow in regular tissue culture media and maintain anchorage-dependent growth and density-inhibition properties. Upon further irradiation with high-LET radiation, transformed foci were found. Experimental results from these studies suggest that multiple exposures to radiation are required to induce neoplastic transformation of human epithelial cells. This multiple-hit requirement may be due to the high genomic stability of human cells. These growth variants can be useful model systems for space flight experiments to determine the carcinogenic effect of space radiation in human epithelial cells.

  18. Adaptable Diffraction Gratings With Wavefront Transformation

    NASA Technical Reports Server (NTRS)

    Iazikov, Dmitri; Mossberg, Thomas W.; Greiner, Christoph M.

    2010-01-01

    Diffraction gratings are optical components with regular patterns of grooves, which angularly disperse incoming light by wavelength. Traditional diffraction gratings have static planar, concave, or convex surfaces. However, if they could be made so that they can change the surface curvature at will, then they would be able to focus on particular segments, self-calibrate, or perform fine adjustments. This innovation creates a diffraction grating on a deformable surface. This surface could be bent at will, resulting in a dynamic wavefront transformation. This allows for self-calibration, compensation for aberrations, enhancing image resolution in a particular area, or performing multiple scans using different wavelengths. A dynamic grating gives scientists a new ability to explore wavefronts from a variety of viewpoints.

  19. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  20. Phase-shift detection in a Fourier-transform method for temperature sensing using a tapered fiber microknot resonator.

    PubMed

    Larocque, Hugo; Lu, Ping; Bao, Xiaoyi

    2016-04-01

    Phase-shift detection in a fast-Fourier-transform (FFT)-based spectrum analysis technique for temperature sensing using a tapered fiber microknot resonator is proposed and demonstrated. Multiple transmission peaks in the FFT spectrum of the device were identified as optical modes having completed different amounts of round trips within the ring structure. Temperature variation induced phase shifts for each set of peaks were characterized, and experimental results show that different peaks have distinct temperature sensitivities reaching values up to -0.542  rad/°C, which is about 10 times greater than that of a regular adiabatic taper Mach-Zehnder interferometer when using similar phase-tracking schemes.

  1. Exponential Decay of Dispersion-Managed Solitons for General Dispersion Profiles

    NASA Astrophysics Data System (ADS)

    Green, William R.; Hundertmark, Dirk

    2016-02-01

    We show that any weak solution of the dispersion management equation describing dispersion-managed solitons together with its Fourier transform decay exponentially. This strong regularity result extends a recent result of Erdoğan, Hundertmark, and Lee in two directions, to arbitrary non-negative average dispersion and, more importantly, to rather general dispersion profiles, which cover most, if not all, physically relevant cases.

  2. A Systolic VLSI Design of a Pipeline Reed-solomon Decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1984-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  3. A VLSI design of a pipeline Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.

    1985-01-01

    A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.

  4. If You Are Like Me, I Think You Are More Authentic: An Analysis of the Interaction of Follower and Leader Gender

    ERIC Educational Resources Information Center

    Tibbs, Sandra; Green, Mark T.; Gergen, Esther; Montoya, Jared A.

    2016-01-01

    Within the empirical literature related to leadership, female leaders are regularly rated higher on dimensions such as being transformational and being effective. Some studies have found that gender plays a role in the follower-leader relationship, and this interaction can be assessed. An emerging model of leadership is authentic leadership. This…

  5. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.

  6. Visual based laser speckle pattern recognition method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Park, Kyeongtaek; Torbol, Marco

    2017-04-01

    This study performed the system identification of a target structure by analyzing the laser speckle pattern taken by a camera. The laser speckle pattern is generated by the diffuse reflection of the laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time, and the raw speckle image of the pixel data is fed to the graphics processing unit (GPU) in the system. The algorithm for laser speckle contrast analysis (LASCA) computes the laser speckle contrast images and the laser speckle flow images. The k-means clustering algorithm is used to classify the pixels in each frame, and the clusters' centroids, which function as virtual sensors, track the displacement between different frames in the time domain. The fast Fourier transform (FFT) and the frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large-scale computational capability of the GPU. The algorithm is written in CUDA C (Compute Unified Device Architecture), which allows the processing of speckle images in real time.
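The final frequency-extraction step can be illustrated in a few lines (a synthetic displacement history stands in for a virtual sensor's track; the frame rate and mode frequencies are assumptions, not values from the paper):

```python
import numpy as np

# A made-up virtual-sensor displacement history: two structural modes.
fs = 200.0                          # assumed camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)
disp = np.sin(2 * np.pi * 3.2 * t) + 0.3 * np.sin(2 * np.pi * 11.0 * t)

# FFT of the displacement; the dominant peak is read off as a natural frequency.
spec = np.abs(np.fft.rfft(disp))
freqs = np.fft.rfftfreq(len(disp), 1 / fs)
print(round(freqs[spec.argmax()], 1))  # 3.2
```

FDD goes further by decomposing the cross-spectral matrix of many virtual sensors at once, but the peak-picking idea is the same.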

  7. Program Calculates Forces in Bolted Structural Joints

    NASA Technical Reports Server (NTRS)

    Buder, Daniel A.

    2005-01-01

    A FORTRAN 77 computer program calculates forces in bolts in the joints of structures. This program is used in conjunction with the NASTRAN finite-element structural-analysis program. A mathematical model of a structure is first created by approximating its load-bearing members with representative finite elements; NASTRAN then calculates the forces and moments that each finite element contributes to grid points located throughout the structure. The user selects the finite elements that correspond to structural members that contribute loads to the joints of interest, and identifies the grid point nearest to each such joint. This program reads the pertinent NASTRAN output, combines the forces and moments from the contributing elements to determine the resultant force and moment acting at each proximate grid point, then transforms the forces and moments from these grid points to the centroids of the affected joints. The program then uses these joint loads to obtain the axial and shear forces in the individual bolts, and identifies which bolts bear the greatest axial and/or shear loads. The program also performs a fail-safe analysis in which the foregoing calculations are repeated for a sequence of cases in which each fastener, in turn, is assumed not to transmit an axial force.
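The grid-point-to-centroid step the program performs is the standard force/moment transfer; a minimal sketch (units, numbers, and the function name are illustrative, not taken from the program):

```python
import numpy as np

def transfer_load(force, moment, grid_point, joint_centroid):
    """Move a force/moment pair from a grid point to a joint centroid:
    the force carries over unchanged, the moment picks up r x F."""
    r = np.asarray(grid_point) - np.asarray(joint_centroid)
    return np.asarray(force), np.asarray(moment) + np.cross(r, force)

F = [100.0, 0.0, 0.0]   # load at the grid point (illustrative units)
M = [0.0, 0.0, 5.0]
Fj, Mj = transfer_load(F, M, grid_point=[0.0, 0.2, 0.0], joint_centroid=[0.0, 0.0, 0.0])
print(Fj, Mj)  # force unchanged; moment gains the r x F contribution
```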

  8. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    PubMed

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained by iteratively using a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundary of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, containing 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.

  9. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution

    PubMed Central

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and the optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. Each directional image is processed using information from the first-order derivative and the eigenvalues of the Hessian matrix. The final vessel segmentation is obtained by iteratively applying a simple region-growing algorithm that merges the centerline images with the contents of images produced by a modified top-hat transform followed by bit-plane slicing. After the blood vessels are extracted, candidate regions for the OD are enhanced by removing the vessels from the FFA image using multi-structure-element morphology and modification of the FDCT coefficients. A Canny edge detector and the Hough transform are then applied to the reconstructed image to extract the boundaries of the candidate regions. Next, information from the main arc of the retinal vessels surrounding the OD region is used to determine the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170

  10. (E)-1-(2,4-Dinitrophenyl)-2-(3-ethoxy-4-hydroxybenzylidene)hydrazine.

    PubMed

    Fun, Hoong-Kun; Chantrapromma, Suchada; Ruanwas, Pumsak; Kobkeatthawin, Thawanrat; Chidan Kumar, C S

    2014-01-01

    The molecule of the title hydrazine derivative, C15H14N4O6, is essentially planar, the dihedral angle between the substituted benzene rings being 2.25 (9)°. The ethoxy and hydroxy groups are almost coplanar with their bound benzene ring [r.m.s. deviation = 0.0153 (2) Å for the ten non-H atoms]. Intramolecular N-H⋯O and O-H⋯Oethoxy hydrogen bonds generate S(6) and S(5) ring motifs, respectively. In the crystal, molecules are linked by O-H⋯Onitro hydrogen bonds into chains propagating in [010]. Weak aromatic π-π interactions, with centroid-centroid distances of 3.8192 (19) and 4.0491 (19) Å, are also observed.

  11. Crystal structure of quinolinium 2-carboxy-6-nitro-benzoate monohydrate.

    PubMed

    Mohana, J; Divya Bharathi, M; Ahila, G; Chakkaravarthi, G; Anbalagan, G

    2015-05-01

    In the anion of the title hydrated molecular salt, C9H8N(+)·C8H4NO6(-)·H2O, the protonated carboxyl and nitro groups make dihedral angles of 27.56 (5) and 6.86 (8)°, respectively, with the attached benzene ring, whereas the deprotonated carboxy group is almost orthogonal to it, with a dihedral angle of 80.21 (1)°. In the crystal, the components are linked by O-H⋯O and N-H⋯O hydrogen bonds, generating [001] chains. The packing is consolidated by weak C-H⋯N and C-H⋯O interactions as well as aromatic π-π stacking interactions [centroid-to-centroid distances: 3.7023 (8) and 3.6590 (9) Å], resulting in a three-dimensional network.

  12. Dichloridobis(phenanthridine-κN)zinc(II).

    PubMed

    Khoshtarkib, Zeinab; Ebadi, Amin; Alizadeh, Robabeh; Ahmadi, Roya; Amani, Vahid

    2009-06-06

    In the molecule of the title compound, [ZnCl(2)(C(13)H(9)N)(2)], the Zn(II) atom is four-coordinated in a distorted tetrahedral configuration by two N atoms from two phenanthridine ligands and by two terminal Cl atoms. The dihedral angle between the planes of the phenanthridine ring systems is 69.92 (3)°. An intramolecular C-H⋯Cl interaction results in the formation of a planar five-membered ring, which is oriented at a dihedral angle of 8.32 (3)° with respect to the adjacent phenanthridine ring system. In the crystal structure, π-π contacts between the phenanthridine systems [centroid-centroid distances = 3.839 (2), 3.617 (1) and 3.682 (1) Å] may stabilize the structure. Two weak C-H⋯π interactions are also found.

  13. N-H⋯N hydrogen bonding in 4,6-diphenyl-2-pyrimidinylamine isolated from the plant Justicia secunda (Acanthaceae).

    PubMed

    Gallagher, John F; Goswami, Shyamaprosad; Chatterjee, Baidyanath; Jana, Subrata; Dutta, Kalyani

    2004-04-01

    The title compound, C(16)H(13)N(3), isolated from Justicia secunda (Acanthaceae), comprises two molecules (which differ slightly in conformation) in the asymmetric unit of space group P-1. Intermolecular N(amino)-H⋯N(pyrm) interactions (N(pyrm) is a pyrimidine ring N atom) involve only one of the two donor amino H atoms and pyrimidine N atoms per molecule, forming dimeric units via R(2)(2)(8) rings, with N⋯N distances of 3.058 (2) and 3.106 (3) Å, and N-H⋯N angles of 172.7 (18) and 175.8 (17)°. The dimers are linked by C-H⋯π(arene) contacts, with an H⋯centroid distance of 2.77 Å and a C-H⋯centroid angle of 141°.

  14. cis-Dichloridobis(5,5'-dimethyl-2,2'-bipyridine)manganese(II) 2.5-hydrate.

    PubMed

    Lopes, Lívia Batista; Corrêa, Charlane Cimini; Diniz, Renata

    2011-07-01

    The metal site in the title compound, [MnCl(2)(C(12)H(12)N(2))(2)]·2.5H(2)O, has a distorted octahedral geometry, coordinated by four N atoms of two 5,5'-dimethyl-2,2'-bipyridine ligands and two Cl atoms. Two and a half water molecules of hydration per complex unit are observed in the crystal structure. The complexes extend along the c axis, with O-H⋯Cl, O-H⋯O, C-H⋯Cl and C-H⋯O hydrogen bonds and π-π interactions [centroid-centroid distance = 3.70 (2) Å] contributing substantially to the crystal packing. The Mn atom and one of the water O atoms, the latter being half-occupied, are located on special positions, in this case a rotation axis of order 2.

  15. Centroid Position as a Function of Total Counts in a Windowed CMOS Image of a Point Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurtz, R E; Olivier, S; Riot, V

    2010-05-27

    We obtained 960,200 22-by-22-pixel windowed images of a pinhole spot using the Teledyne H2RG CMOS detector with uncooled SIDECAR readout. We performed an analysis to determine the precision we might expect in the position error signals sent to a telescope's guider system. We find that, under non-optimized operating conditions, the error in the computed centroid is strongly dependent on the total counts in the point image only below a certain threshold, approximately 50,000 photo-electrons. The LSST guider camera specification currently requires a 0.04 arcsecond error at 10 Hertz. Given the performance measured here, this specification can be delivered with a single star at 14th to 18th magnitude, depending on the passband.
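
    The centroid computed from such a windowed image is the usual intensity-weighted center of mass. A minimal sketch (generic centroiding on a synthetic spot, not the analysis code used in the study):

```python
import numpy as np

def centroid(window):
    """Intensity-weighted (center-of-mass) centroid of a 2-D window,
    returned as (row, col) in pixel coordinates."""
    w = np.asarray(window, float)
    total = w.sum()
    y, x = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (y * w).sum() / total, (x * w).sum() / total

# Synthetic Gaussian spot centered at (10.0, 12.0) in a 22x22 window.
y, x = np.mgrid[0:22, 0:22]
spot = np.exp(-((y - 10.0) ** 2 + (x - 12.0) ** 2) / (2 * 2.0 ** 2))
cy, cx = centroid(spot)
```

    In practice the noise floor matters: below the count threshold reported above, photon and read noise in each pixel propagate directly into these weighted sums and inflate the centroid error.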

  16. Sensors with centroid-based common sensing scheme and their multiplexing

    NASA Astrophysics Data System (ADS)

    Berkcan, Ertugrul; Tiemann, Jerome J.; Brooksby, Glen W.

    1993-03-01

    The ability to multiplex sensors with different measurands but a common sensing scheme is important in aircraft and aircraft-engine applications; this unification of the sensors into a common interface has major implications for weight, cost, and reliability. A new class of sensors based on a common sensing scheme and a common electro-optic (E/O) interface has been developed. The approach detects the location of the centroid of a beam of light; the set of fiber optic sensors using this sensing scheme includes linear and rotary position, temperature, and pressure sensors, as well as duct Mach number. The sensing scheme provides immunity to intensity variations of the source or those caused by environmental effects on the fiber. A common electro-optic interface, spatially multiplexed at the detector, has been demonstrated with a position sensor and a temperature sensor.

  17. Crystal structure of N-{[3-bromo-1-(phenylsulfonyl)-1H-indol-2-yl]methyl}benzenesulfonamide.

    PubMed

    Umadevi, M; Raju, P; Yamuna, R; Mohanakrishnan, A K; Chakkaravarthi, G

    2015-10-01

    In the title compound, C21H17BrN2O4S2, the indole ring system subtends dihedral angles of 85.96 (13) and 9.62 (16)° with the planes of the N- and C-bonded benzene rings, respectively. The dihedral angle between the benzene rings is 88.05 (17)°. The molecular conformation is stabilized by intramolecular N-H⋯O and C-H⋯O hydrogen bonds and an aromatic π-π stacking interaction [centroid-to-centroid distance = 3.503 (2) Å]. In the crystal, short Br⋯O contacts [2.9888 (18) Å] link the molecules into [010] chains. The chains are cross-linked by weak C-H⋯π interactions, forming a three-dimensional network.

  18. Aqua(3-fluorobenzoato-κO)(3-fluorobenzoato-κO,O')(1,10-phenanthroline-κN,N')cobalt(II).

    PubMed

    Wang, Xiao-Hui; Sun, Li-Mei

    2012-01-01

    In the title compound, [Co(C(7)H(4)FO(2))(2)(C(12)H(8)N(2))(H(2)O)], the Co(II) ion is coordinated by two O atoms from one 3-fluorobenzoate (fb) ligand and one O atom from another fb ligand, two N atoms from the 1,10-phenanthroline ligand and a water molecule in a distorted octahedral geometry. An intramolecular O-H⋯O hydrogen bond occurs. Intermolecular O-H⋯O hydrogen bonds link pairs of molecules into centrosymmetric dimers. Weak intermolecular C-H⋯O and C-H⋯F hydrogen bonds and π-π interactions between the aromatic rings [shortest centroid-centroid distance = 3.4962 (2) Å] further stabilize the crystal packing.

  19. 1-(3,3-Dichloroallyloxy)-4-methyl-2-nitrobenzene.

    PubMed

    Ren, Dong-Mei

    2012-06-01

    In the title compound, C(10)H(9)Cl(2)NO(3), the dihedral angle between the benzene ring and the plane of the nitro group is 39.1 (1)°, while that between the benzene ring and the plane through the three C and two Cl atoms of the dichloroallyloxy unit is 40.1 (1)°. In the crystal, C-H⋯O hydrogen bonds to the nitro groups form chains along the b axis. These chains are linked by inversion-related pairs of Cl⋯O interactions at a distance of 3.060 (3) Å, forming sheets approximately parallel to [-201] and generating R(2)(2)(18) rings. π-π contacts between benzene rings in adjacent sheets, with centroid-centroid distances of 3.671 (2) Å, stack molecules along c.

  20. Acquisition and Initial Analysis of H+- and H--Beam Centroid Jitter at LANSCE

    NASA Astrophysics Data System (ADS)

    Gilpatrick, J. D.; Bitteker, L.; Gulley, M. S.; Kerstiens, D.; Oothoudt, M.; Pillai, C.; Power, J.; Shelley, F.

    2006-11-01

    During the 2005 Los Alamos Neutron Science Center (LANSCE) beam runs, beam-current and centroid-jitter data were observed, acquired, analyzed, and documented for both the LANSCE H+ and H- beams. These data were acquired using three beam position monitors (BPMs) from the 100-MeV Isotope Production Facility (IPF) beam line and three BPMs from the Switchyard transport line at the end of the LANSCE 800-MeV linac. The two types of data acquired, intermacropulse and intramacropulse, were analyzed for statistical and frequency characteristics as well as for various other correlations, including a comparison of their phase-space-like characteristics in a coordinate system of transverse angle versus transverse position. This paper briefly describes the measurements required to acquire these data, the initial analysis of the jitter data, and some interesting dilemmas these data presented.

  1. Fine Guidance Sensing for Coronagraphic Observatories

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Alexander, James W.; Trauger, John T.; Moody, Dwight C.

    2011-01-01

    Three options have been developed for Fine Guidance Sensing (FGS) for coronagraphic observatories using a Fine Guidance Camera within a coronagraphic instrument. Coronagraphic observatories require very fine precision pointing in order to image faint objects at very small distances from a target star. The Fine Guidance Camera measures the direction to the target star. The first option, referred to as Spot, collects all of the light reflected from a coronagraph occulter onto a focal plane, producing an Airy-type point spread function (PSF). This would allow almost all of the starlight from the central star to be used for centroiding. The second approach, referred to as Punctured Disk, collects the light that bypasses a central obscuration, producing a PSF with a punctured central disk. The final approach, referred to as Lyot, collects light after it passes through the occulter at the Lyot stop. The study includes generation of representative images for each option by the science team, followed by an engineering evaluation of a centroiding or photometric algorithm for each option. After the alignment of the coronagraph to the fine guidance system, a "nulling" point on the FGS focal plane is determined by calibration. This alignment is implemented by a fine alignment mechanism that is part of the fine guidance camera selection mirror. If the star images meet the modeling assumptions, and the star "centroid" can be driven to that nulling point, the contrast for the coronagraph will be maximized.

  2. Nano-JASMINE: cosmic radiation degradation of CCD performance and centroid detection

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yukiyasu; Shimura, Yuki; Niwa, Yoshito; Yano, Taihei; Gouda, Naoteru; Yamada, Yoshiyuki

    2012-09-01

    Nano-JASMINE (NJ) is a very small astrometry satellite project led by the National Astronomical Observatory of Japan. The satellite is ready for launch, which is currently scheduled for late 2013 or early 2014. The satellite is equipped with a fully depleted CCD and is expected to perform astrometry observations of stars brighter than 9 mag in the zw-band (0.6 µm-1.0 µm). Distances of stars located within 100 pc of the Sun can be determined using annual parallax measurements. The targeted accuracy for the position determination of stars brighter than 7.5 mag is 3 mas, which is equivalent to measuring the positions of stars with an accuracy of less than one five-hundredth of the CCD pixel size. The position measurements of stars are performed by centroiding the stellar images taken by the CCD, which operates in time and delay integration mode. The degradation of charge-transfer performance due to cosmic radiation damage in orbit has been demonstrated experimentally, so a method is required to compensate for the effects of this performance degradation. One of the most effective ways of achieving this is to simulate observed stellar outputs, including the effect of CCD degradation, and then formulate our centroiding algorithm and evaluate the accuracies of the measurements. We report here the planned procedure to simulate the outputs of the NJ observations. We also developed a CCD performance-measuring system and present preliminary results obtained using the system.

  3. (E)-4-Methyl-N′-[(4-oxo-4H-chromen-3-yl)methylidene]benzohydrazide

    PubMed Central

    Ishikawa, Yoshinobu; Watanabe, Kohzoh

    2014-01-01

    In the title chromone-tethered benzohydrazide derivative, C18H14N2O3, the 4H-chromen-4-one and the –CH=N–NH–CO– units are each essentially planar, with the largest deviations from their planes being 0.052 (2) and 0.003 (2) Å, respectively. The dihedral angles between the 4H-chromen-4-one and the –CH=N–NH–CO– units, the 4H-chromen-4-one unit and the benzene ring of the 4-tolyl group, and the benzene ring of the 4-tolyl group and the –CH=N–NH–CO– unit are 8.09 (7), 9.94 (5) and 17.97 (8)°, respectively. In the crystal, the molecules form two types of centrosymmetric dimers: one by N—H⋯O hydrogen bonds and the other by π–π stacking interactions between the 4H-chromen-4-one unit and the 4-tolyl group [centroid–centroid distance = 3.641 (5) Å]. These dimers form one-dimensional assemblies extending along the a-axis direction. Additional π–π stacking interactions between two 4H-chromen-4-one units [centroid–centroid distance = 3.591 (5) Å] and two 4-tolyl groups [centroid–centroid distance = 3.792 (5) Å] organize the molecules into a three-dimensional network. PMID:24860370

  4. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data, because it is hard to sensibly compare their results with those of other algorithms. The non-determinism of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as initial centroids data points that belong to dense regions and that are adequately separated in feature space. We compared the proposed algorithm against eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm used for cancer data classification, based on their performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the biomedical domain for simple, easy-to-use and more accurate machine learning tools for cancer subtype prediction. The proposed algorithm is simple, easy to use and gives stable results; moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
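
    The seeding idea, initial centroids that lie in dense regions and are mutually well separated, can be sketched deterministically as follows. The scoring rule (density times distance to the nearest chosen seed) is an illustrative stand-in, not the authors' exact procedure:

```python
import numpy as np

def dense_separated_seeds(X, k, radius):
    """Deterministically pick k seed points that sit in dense regions
    and are well separated from each other (sketch of the idea)."""
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    density = (d < radius).sum(axis=1)      # neighbours within radius
    seeds = [int(np.argmax(density))]       # densest point first
    for _ in range(k - 1):
        sep = d[:, seeds].min(axis=1)       # distance to nearest seed
        score = density * sep               # dense AND far from seeds
        seeds.append(int(np.argmax(score)))
    return X[seeds]

# Two well-separated blobs: the two seeds should land one in each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(5, 0.3, (50, 2))])
seeds = dense_separated_seeds(X, k=2, radius=0.5)
```

    Because every step is an argmax over fixed quantities, repeated runs on the same data give the same seeds, which is the stability property the abstract emphasizes.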

  5. Longitudinal analysis of tibiofemoral cartilage contact area and position in ACL reconstructed patients.

    PubMed

    Chen, Ellison; Amano, Keiko; Pedoia, Valentina; Souza, Richard B; Ma, C Benjamin; Li, Xiaojuan

    2018-04-18

    Patients who have suffered ACL injury are more likely to develop early-onset post-traumatic osteoarthritis despite reconstruction. The purpose of our study was to evaluate the longitudinal changes in tibiofemoral cartilage contact area size and location after ACL injury and reconstruction. Thirty-one patients with isolated unilateral ACL injury were followed with T2-weighted fast spin echo (FSE), T1ρ and T2 MRI at baseline prior to reconstruction, and at 6 months, 1 year, and 2 years after surgery. Areas were delineated in the FSE images with an in-house Matlab program using a spline-based semi-automated segmentation algorithm. Tibiofemoral contact area and centroid position along the anterior-posterior axis were calculated, along with T1ρ and T2 relaxation times, for both the injured and non-injured knees. At baseline, the injured knees had significantly smaller and more posteriorly positioned contact areas on the medial tibial surface compared to the corresponding healthy knees. These differences persisted 6 months after reconstruction. Moreover, subjects with more anterior medial centroid positions at 6 months had elevated T1ρ and T2 measures in the posterior medial tibial plateau at 1 year. Changes in contact area and centroid position after ACL injury and reconstruction may characterize some of the mechanical factors contributing to post-traumatic osteoarthritis. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

  6. Anatomy guided automated SPECT renal seed point estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting ROIs from SPECT images is challenging due to poor image resolution. SPECT is used to study kidney function, and a key challenge is accurately locating the kidneys and bladder for analysis. This paper presents an automated method for generating seed point locations for both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is the premise that the anatomical location of the bladder relative to the kidneys does not differ much between patients. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images), and centroids are estimated for the manually segmented bladder and kidneys. The comparatively easy bladder segmentation is performed first, and the bladder centroid coordinates are then fed into the model to generate seed points for the kidneys. The percentage errors observed between the ground-truth centroid coordinates and those estimated by our approach are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.
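
    The core steps, the centroid of a segmented organ plus a learned anatomical offset, can be sketched as follows. The mask and the offset values are hypothetical illustrations, not values from the study:

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (row, col) of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

# Hypothetical model: kidney seed = bladder centroid + mean offset
# learned from the manually segmented training cases.
bladder = np.zeros((64, 64), bool)
bladder[40:48, 28:36] = True                 # toy segmented bladder
learned_offset = np.array([-25.0, -12.0])    # illustrative values only
seed_left_kidney = np.array(mask_centroid(bladder)) + learned_offset
```

    In the actual method the offset comes from a regression model fitted on the 10 training datasets rather than a fixed vector, but the seed-point arithmetic is this simple.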

  7. An Improvement To The k-Nearest Neighbor Classifier For ECG Database

    NASA Astrophysics Data System (ADS)

    Jaafar, Haryati; Hidayah Ramli, Nur; Nasir, Aimi Salihah Abdul

    2018-03-01

    The k-nearest neighbor (kNN) classifier is non-parametric and has been widely used for pattern classification. In practice, however, the performance of kNN often suffers from a lack of information on how the samples are distributed, and kNN is no longer optimal when the training samples are limited. Another problem observed in kNN concerns the weighting used in assigning the class label before classification. To address these limitations, a new classifier called Mahalanobis fuzzy k-nearest centroid neighbor (MFkNCN) is proposed in this study. A Mahalanobis distance is applied to counter imbalance in the sample distribution. A surrounding rule is then employed to obtain the nearest centroid neighbors based on the distribution of the training samples and their distance to the query point. Finally, a fuzzy membership function is employed to assign the query point to the class label most frequently represented among the nearest centroid neighbors. Experiments were conducted on electrocardiogram (ECG) signals. Classification performance is evaluated in two experimental steps, i.e., with different values of k and different sizes of feature dimension. A comparative study of the kNN, kNCN, FkNN and MFkNCN classifiers shows that the performance of MFkNCN consistently exceeds that of kNN, kNCN and FkNN, with a best classification rate of 96.5%.
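
    The surrounding rule for nearest centroid neighbors can be sketched as a greedy selection that keeps the running centroid of the chosen neighbors close to the query. This is a plain-Euclidean illustration of the kNCN idea, without the paper's Mahalanobis distance or fuzzy-membership stages:

```python
import numpy as np

def k_nearest_centroid_neighbors(X, query, k):
    """Greedily pick k training points whose running centroid stays
    closest to the query (the 'nearest centroid neighbor' rule)."""
    X = np.asarray(X, float)
    q = np.asarray(query, float)
    chosen, remaining = [], list(range(len(X)))
    for _ in range(k):
        # Candidate that keeps the centroid of chosen+candidate nearest q.
        best = min(remaining,
                   key=lambda i: np.linalg.norm(
                       X[chosen + [i]].mean(axis=0) - q))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Points 0-2 surround the query; point 3 is a distant outlier.
X = np.array([[0.0, 0.0], [2.0, 0.0], [-2.0, 0.0], [10.0, 10.0]])
idx = k_nearest_centroid_neighbors(X, query=[0.0, 0.0], k=3)
```

    Unlike plain kNN, the rule favors neighbors that surround the query rather than merely being individually close, which is what makes it informative when samples are sparse.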

  8. Inverse electrocardiographic transformations: dependence on the number of epicardial regions and body surface data points.

    PubMed

    Johnston, P R; Walker, S J; Hyttinen, J A; Kilpatrick, D

    1994-04-01

    The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.
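
    The two inversion strategies compared here can be sketched on a generic ill-posed problem. The forward matrix below is random, not a torso-model transfer matrix, so it only illustrates the mechanics of each regularized inverse:

```python
import numpy as np

def tikhonov_inverse(A, lam):
    """Zero-order Tikhonov regularized inverse: (A^T A + lam I)^-1 A^T."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)

def tsvd_inverse(A, keep):
    """Truncated-SVD pseudoinverse retaining the `keep` largest
    singular values and discarding the rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < keep, 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

# Toy forward problem: 50 "body-surface leads", 26 "epicardial regions".
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 26))
x_true = rng.normal(size=26)
b = A @ x_true + 0.01 * rng.normal(size=50)

x_tik = tikhonov_inverse(A, lam=1e-3) @ b
x_svd = tsvd_inverse(A, keep=int(0.6 * 26)) @ b   # retain 60% of values
```

    Tikhonov damps every singular component smoothly (fixing noise amplification at the cost of resolution), whereas TSVD cuts components off abruptly, the trade-off the study quantifies.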

  9. An automated calibration method for non-see-through head mounted displays.

    PubMed

    Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew

    2011-08-15

    Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming, depend on human judgements (making them error prone), and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside the HMD, which displays an image of a regular grid that the camera captures. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements. Copyright © 2011 Elsevier B.V. All rights reserved.
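
    The reported reprojection error is the standard pinhole-model quantity: project the tracked 3-D marker positions through the estimated parameters and compare against the observed image features. A minimal sketch with hypothetical intrinsic and extrinsic values (not numbers from the paper):

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Project world points through a pinhole model: x ~ K [R|t] X."""
    P = (R @ np.asarray(points_3d, float).T).T + t   # camera coordinates
    uv = (K @ P.T).T                                 # homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]

def rms_reprojection_error(observed, points_3d, K, R, t):
    d = observed - reproject(points_3d, K, R, t)
    return np.sqrt((d ** 2).sum(axis=1).mean())

# Hypothetical intrinsics: focal length in pixels and principal point.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 512.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # camera 2 m from origin
pts = np.array([[0.0, 0.0, 0.0], [0.1, -0.05, 0.2], [-0.2, 0.1, 0.0]])
obs = reproject(pts, K, R, t)                 # noise-free observations
err = rms_reprojection_error(obs, pts, K, R, t)
```

    With real marker centroids, `err` is nonzero and is the figure of merit the calibration minimizes.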

  10. Improved Band-to-Band Registration Characterization for VIIRS Reflective Solar Bands Based on Lunar Observations

    NASA Technical Reports Server (NTRS)

    Wang, Zhipeng; Xiong, Xiaoxiong; Li, Yonghong

    2015-01-01

    Spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite are spatially co-registered. The accuracy of the band-to-band registration (BBR) is one of the key spatial parameters that must be characterized. Unlike its predecessor, the Moderate Resolution Imaging Spectroradiometer (MODIS), VIIRS has no on-board calibrator specifically designed to perform on-orbit BBR characterization. To circumvent this problem, a BBR characterization method for the VIIRS reflective solar bands (RSB) based on regularly acquired lunar images has been developed. While its results satisfactorily demonstrate that the long-term stability of the BBR is well within ±0.1 moderate-resolution band pixels, undesired seasonal oscillations have been observed in the trending. The oscillations are most obvious between the visible/near-infrared bands and the short-/mid-wave infrared bands. This paper investigates the oscillations and identifies their cause as the band spectral dependence of the centroid position combined with the seasonal rotation of the lunar images across calibration events. Accordingly, an improved algorithm is proposed to quantify the rotation and compensate for its impact. After the correction, the seasonal oscillation in the resulting BBR is reduced from up to 0.05 moderate-resolution band pixels to around 0.01 moderate-resolution band pixels. With this spurious seasonal oscillation removed, the BBR and its long-term drift are well determined.

  11. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  12. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928
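
    The three thresholding operators mentioned can be sketched side by side. The soft and hard operators are standard; the half-threshold formula below is reproduced from memory of Xu et al.'s analytic ℓ1∕2 solution and should be treated as an assumption rather than a verified transcription:

```python
import numpy as np

def soft_threshold(x, lam):
    """ell_1 proximal operator: shrink every entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """ell_0-style operator: zero entries below lam, keep the rest."""
    return np.where(np.abs(x) >= lam, x, 0.0)

def half_threshold(x, lam):
    """Analytic ell_{1/2} thresholding (form recalled from Xu et al.;
    treat as an assumption, not a verified transcription)."""
    x = np.asarray(x, float)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)  # threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.arccos(np.clip(
            (lam / 8.0) * (np.abs(x) / 3.0) ** -1.5, -1.0, 1.0))
        out = (2.0 / 3.0) * x * (
            1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return np.where(np.abs(x) > t, out, 0.0)

x = np.array([-3.0, -0.2, 0.0, 0.2, 3.0])
```

    All three zero out small entries; they differ in how surviving entries are shrunk, which is why the paper calls them complementary.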

  13. Airborne gamma-ray spectra processing: Extracting photopeaks.

    PubMed

    Druker, Eugene

    2018-07-01

    The acquisition of information from airborne gamma-ray spectra relies on the ability to evaluate photopeak areas in regular spectra from natural and other sources. In airborne gamma-ray spectrometry, extraction of radionuclide photopeaks from regular one-second spectra is a complex problem. In the region of higher energies, difficulties are associated with low signal level, i.e., low count rates, whereas at lower energies difficulties are associated with high noise due to a high signal level. In this article, a new procedure is proposed for processing the measured spectra up to and including the extraction of evident photopeaks. The procedure consists of reducing the noise in the energy channels along the flight lines, transforming the spectra into spectra of equal resolution, removing the background from each spectrum, sharpening the details, and transforming the spectra back to the original energy scale. The resulting spectra are better suited for examining and using the photopeaks. No assumptions are required regarding the number, locations, or magnitudes of photopeaks, and the procedure does not generate negative photopeaks. The resolution of the spectrometer is used for this purpose. The proposed methodology should also contribute to the study of environmental problems, soil characterization, and other near-surface geophysical methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the state-of-the-art soft-threshold and hard-threshold filtering counterparts.
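
The half-thresholding operation referenced above admits a closed form. The sketch below is a minimal illustration in the commonly cited parameterization attributed to Xu et al.; the threshold constant and the arccos-based expression are assumptions of this illustration, not details quoted from this record:

```python
import numpy as np

def half_threshold(x, lam):
    """Closed-form half-thresholding operator for l_{1/2} regularization.

    Parameterization follows the commonly cited form attributed to
    Xu et al. (an assumption of this sketch, not taken from the record).
    """
    x = np.asarray(x, dtype=float)
    # coefficients whose magnitude falls below this threshold are zeroed
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    # arccos argument; clipped so the formula stays valid below the
    # threshold, where the result is discarded by np.where anyway
    arg = np.clip((lam / 8.0) * (np.maximum(np.abs(x), 1e-12) / 3.0) ** (-1.5),
                  -1.0, 1.0)
    phi = np.arccos(arg)
    h = (2.0 / 3.0) * x * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return np.where(np.abs(x) > t, h, 0.0)
```

Small coefficients are set exactly to zero, while large coefficients pass through nearly unchanged, which is what makes the operator attractive for enforcing a stronger sparsity constraint than soft thresholding.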

  15. The relationship between oceanic transform fault segmentation, seismicity, and thermal structure

    NASA Astrophysics Data System (ADS)

    Wolfson-Schwehr, Monica

    Mid-ocean ridge transform faults (RTFs) are typically viewed as geometrically simple, with fault lengths readily constrained by the ridge-transform intersections. This relative simplicity, combined with well-constrained slip rates, makes them an ideal environment for studying strike-slip earthquake behavior. As the resolution of available bathymetric data over oceanic transform faults continues to improve, however, it is being revealed that the geometry and structure of these faults can be complex, including such features as intra-transform pull-apart basins, intra-transform spreading centers, and cross-transform ridges. To better determine the extent of structural complexity on RTFs, as well as the prevalence of RTF segmentation, fault structure is delineated on a global scale. Segmentation breaks the fault system up into a series of subparallel fault strands separated by an extensional basin, an intra-transform spreading center, or a fault step. RTF segmentation occurs across the full range of spreading rates, from faults on the ultraslow portion of the Southwest Indian Ridge to faults on the ultrafast portion of the East Pacific Rise (EPR). It is most prevalent along the EPR, which hosts the fastest spreading rates in the world and has undergone multiple changes in relative plate motion over the past few million years. Earthquakes on RTFs are known to be small, to scale with the area above the 600°C isotherm, and to exhibit some of the most predictable behaviors in seismology. To determine whether segmentation affects the global RTF scaling relations, the scalings are recomputed using an updated seismic catalog and a fault database in which RTF systems are broken up according to their degree of segmentation (as delineated from available bathymetric datasets). No statistically significant differences between the newly computed scaling relations and the current scaling relations were found, though a few faults were identified as outliers.
Finite element analysis is used to model 3-D RTF fault geometry assuming a viscoplastic rheology in order to determine how segmentation affects the underlying thermal structure of the fault. In the models, the fault segment length, the length and along-fault location of the intra-transform spreading center, and the slip rate are varied. A new scaling relation is developed for the critical fault offset length (OC) that significantly reduces the thermal area of adjacent fault segments, such that adjacent segments are fully decoupled at ~4 OC. On moderate- to fast-slipping RTFs, offsets ≥ 5 km are sufficient to significantly reduce the thermal influence between two adjacent transform fault segments. The relationship between fault structure and seismic behavior was directly addressed on the Discovery transform fault, located at 4°S on the East Pacific Rise. One year of microseismicity recorded on an OBS array and 24 years of Mw ≥ 5.4 earthquakes obtained from the Global Centroid Moment Tensor catalog were correlated with surface fault structure delineated from high-resolution multibeam bathymetry. Each of the 15 Mw ≥ 5.4 earthquakes was relocated into one of five distinct repeating rupture patches, while microseismicity was found to be reduced within these patches. While the endpoints of these patches appeared to correlate with structural features on the western segment of Discovery, small step-overs in the primary fault trace were not observed at patch boundaries. This indicates that physical segmentation of the fault is not the primary control on the size and location of large earthquakes on Discovery, and that along-strike heterogeneity in fault zone properties must play an important role.

  16. Two-Point Resistance of a Non-Regular Cylindrical Network with a Zero Resistor Axis and Two Arbitrary Boundaries

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Zhong

    2017-03-01

    We study the two-point resistance of a non-regular m × n cylindrical network with a zero-resistor axis and two arbitrary boundaries by means of the Recursion-Transform method. This problem has not been solved before; the Green's function technique and the Laplacian matrix approach are invalid in this case. A disordered network with arbitrary boundaries is a basic model for many physical and real-world systems. Exact calculation of the resistance of a binary resistor network is important but difficult when the boundaries are arbitrary, since a boundary acts like a wall or trap that affects the behavior of the finite network. In this paper we obtain a general resistance formula for a non-regular m × n cylindrical network, which is composed of a single summation. Further, the current distribution is given explicitly as a byproduct of the method. As applications, several interesting results are derived as special cases of the general formula. Supported by the Natural Science Foundation of Jiangsu Province under Grant No. BK20161278

  17. A hybrid inventory management system responding to regular demand and surge demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammad S. Roni; Mingzhou Jin; Sandra D. Eksioglu

    2014-06-01

    This paper proposes a hybrid policy for a stochastic inventory system facing regular demand and surge demand. The combination of two different demand patterns can be observed in many areas, such as healthcare inventory and humanitarian supply chain management. The surge demand has a lower arrival rate but a higher demand volume per arrival. The solution approach proposed in this paper incorporates the level crossing method and mixed integer programming to optimize the hybrid inventory policy with both regular orders and emergency orders. The level crossing method is applied to obtain the equilibrium distributions of inventory levels under a given policy. The model is further transformed into a mixed integer program to identify an optimal hybrid policy. A sensitivity analysis is conducted to investigate the impact of parameters on the optimal inventory policy and minimum cost. Numerical results clearly show the benefit of using the proposed hybrid inventory model. The model and solution approach could help healthcare providers or humanitarian logistics providers in managing their emergency supplies in responding to surge demands.

  18. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses the spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  19. FAST TRACK COMMUNICATION: SUSY transformations with complex factorization constants: application to spectral singularities

    NASA Astrophysics Data System (ADS)

    Samsonov, Boris F.

    2010-10-01

    Supersymmetric (SUSY) transformation operators with complex factorization constants are analyzed as operators acting in the Hilbert space of functions square integrable on the positive semiaxis. The obtained results are applied to Hamiltonians possessing spectral singularities which are non-Hermitian SUSY partners of self-adjoint operators. A new regularization procedure for the resolution of the identity operator in terms of a continuous biorthonormal set of the non-Hermitian Hamiltonian eigenfunctions is proposed. It is also argued that if the binorm of continuous spectrum eigenfunctions is interpreted in the same way as the norm of similar functions in the usual Hermitian case, then one can state that the function corresponding to a spectral singularity has zero binorm.

  20. Gauge theory for finite-dimensional dynamical systems.

    PubMed

    Gurfil, Pini

    2007-06-01

    Gauge theory is a well-established concept in quantum physics, electrodynamics, and cosmology. This concept has recently proliferated into new areas, such as mechanics and astrodynamics. In this paper, we discuss a few applications of gauge theory in finite-dimensional dynamical systems. We focus on the concept of rescriptive gauge symmetry, which is, in essence, rescaling of an independent variable. We show that a simple gauge transformation of multiple harmonic oscillators driven by chaotic processes can render an apparently "disordered" flow into a regular dynamical process, and that there exists a strong connection between gauge transformations and reduction theory of ordinary differential equations. Throughout the discussion, we demonstrate the main ideas by considering examples from diverse fields, including quantum mechanics, chemistry, rigid-body dynamics, and information theory.

  1. Time resolving beam position measurement and analysis of beam unstable movement in PSR

    NASA Astrophysics Data System (ADS)

    Aleksandrov, A. V.

    2000-11-01

    Precise measurement of beam centroid movement is very important for understanding the fast transverse instability in the Los Alamos Proton Storage Ring (PSR). The proton bunch in the PSR is long, so different parts of the bunch can have different betatron phases and move differently; therefore, time-resolved position measurement is needed. A wide-band stripline BPM can be adequate if a proper processing algorithm is used. In this work we present the results of an analysis of unstable transverse beam motion using a time-resolving processing algorithm. The suggested algorithm allows the transverse position of different parts of the beam to be calculated on each turn; the beam centroid movement on successive turns can then be developed in a series of plane travelling waves in the beam frame of reference, providing important information on the instability development. Some general features of the fast transverse instability, unknown before, are discovered.

  2. Crystal structure of quinolinium 2-carboxy-6-nitro­benzoate monohydrate

    PubMed Central

    Mohana, J.; Divya Bharathi, M.; Ahila, G.; Chakkaravarthi, G.; Anbalagan, G.

    2015-01-01

    In the anion of the title hydrated mol­ecular salt, C9H8N+·C8H4NO6 −·H2O, the protonated carboxyl and nitro groups make dihedral angles of 27.56 (5) and 6.86 (8)°, respectively, with the attached benzene ring, whereas the deprotonated carb­oxy group is almost orthogonal to it with a dihedral angle of 80.21 (1)°. In the crystal, the components are linked by O—H⋯O and N—H⋯O hydrogen bonds, generating [001] chains. The packing is consolidated by weak C—H⋯N and C—H⋯O inter­actions as well as aromatic π–π stacking inter­actions [centroid-to-centroid distances: 3.7023 (8) and 3.6590 (9) Å], resulting in a three-dimensional network. PMID:25995899

  3. K-means-clustering-based fiber nonlinearity equalization techniques for 64-QAM coherent optical communication system.

    PubMed

    Zhang, Junfeng; Chen, Wei; Gao, Mingyi; Shen, Gangxiang

    2017-10-30

    In this work, we propose two k-means-clustering-based algorithms to mitigate fiber nonlinearity for 64-quadrature amplitude modulation (64-QAM) signals: the training-sequence-assisted k-means algorithm and the blind k-means algorithm. We experimentally demonstrated the proposed k-means-clustering-based fiber nonlinearity mitigation techniques in a 75-Gb/s 64-QAM coherent optical communication system. The proposed algorithms have reduced clustering complexity and low data redundancy, and they are able to quickly find appropriate initial centroids and correctly select the centroids of the clusters to obtain the global optimal solutions for large k values. We measured the bit-error-ratio (BER) performance of the 64-QAM signal with different launched powers into the 50-km single-mode fiber, and the proposed techniques can greatly mitigate the signal impairments caused by amplified spontaneous emission noise and fiber Kerr nonlinearity and improve the BER performance.
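
As a toy version of the blind variant described above, the sketch below runs k-means on a noisy constellation with centroids initialized at the ideal symbol positions. QPSK stands in for the 64-QAM grid to keep the example short, and the noise level and symbol count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal constellation: QPSK as a compact stand-in for the 64-QAM grid
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

# Simulated received symbols: random data plus complex Gaussian noise
tx = rng.choice(ideal, size=2000)
rx = tx + 0.15 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))

# Blind k-means: start the centroids at the ideal points, then iterate
centroids = ideal.astype(complex)
for _ in range(10):
    # assignment step: nearest centroid for every received symbol
    labels = np.argmin(np.abs(rx[:, None] - centroids[None, :]), axis=1)
    # update step: each centroid moves to the mean of its cluster
    centroids = np.array([rx[labels == k].mean() for k in range(len(ideal))])

# demodulate by nearest updated centroid
decisions = np.argmin(np.abs(rx[:, None] - centroids[None, :]), axis=1)
```

Because the updated centroids track wherever the received clusters have drifted, decision regions follow slow impairment-induced centroid movements instead of staying pinned to the ideal grid.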

  4. Crystal structure of (2Z,5Z)-3-(4-meth­oxy­phen­yl)-2-[(4-meth­oxy­phenyl)­imino]-5-[(E)-3-(2-nitro­phen­yl)allyl­idene]-1,3-thia­zolidin-4-one

    PubMed Central

    Rahmani, Rachida; Djafri, Ahmed; Daran, Jean-Claude; Djafri, Ayada; Chouaih, Abdelkader; Hamzaoui, Fodil

    2016-01-01

    In the title compound, C26H21N3O5S, the thia­zole ring is nearly planar with a maximum deviation of 0.017 (2) Å, and is twisted with respect to the three benzene rings, making dihedral angles of 25.52 (12), 85.77 (12) and 81.85 (13)°. In the crystal, weak C—H⋯O hydrogen bonds and C—H⋯π inter­actions link the mol­ecules into a three-dimensional supra­molecular architecture. Aromatic π–π stacking is also observed between the parallel nitro­benzene rings of neighbouring mol­ecules, the centroid-to-centroid distance being 3.5872 (15) Å. PMID:26958377

  5. Centroid Detector Assembly for the AXAF-I Alignment Test System

    NASA Technical Reports Server (NTRS)

    Glenn, Paul

    1995-01-01

    The High Resolution Mirror Assembly (HRMA) of the Advanced X-ray Astrophysics Facility (imaging) (AXAF-I) consists of four nested paraboloids and four nested hyperboloids, all of meter-class size, and all of which are to be assembled and aligned in a special 15 meter tower at Eastman Kodak Company in Rochester, NY. The goals of the alignment are (1) to make the images of the four telescopes coincident; (2) to remove coma from each image individually; and (3) to control and determine the final position of the composite focus. This will be accomplished by the HRMA Alignment Test System (HATS), which is essentially a scanning Hartmann test system. The scanning laser source and the focal plane of the HATS are part of the Centroid Detector Assembly (CDA), which also includes processing electronics and software. In this paper we discuss the design and the measured performance of the CDA.

  6. cis-Dichloridobis­(5,5′-dimethyl-2,2′-bipyridine)­manganese(II) 2.5-hydrate

    PubMed Central

    Lopes, Lívia Batista; Corrêa, Charlane Cimini; Diniz, Renata

    2011-01-01

    The metal site in the title compound [MnCl2(C12H12N2)2]·2.5H2O has a distorted octa­hedral geometry, coordinated by four N atoms of two 5,5′-dimethyl-2,2′-bipyridine ligands and two Cl atoms. Two and a half water molecules of hydration per complex unit are observed in the crystal structure. The compounds extend along the c axis with O—H⋯Cl, O—H⋯O, C—H⋯Cl and C—H⋯O hydrogen bonds and π–π inter­actions [centroid-centroid distance = 3.70 (2) Å] contributing substanti­ally to the crystal packing. The Mn and one of the water O atoms, the latter being half-occupied, are located on special positions, in this case a rotation axis of order 2. PMID:21836893

  7. Computing the apparent centroid of radar targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.E.

    1996-12-31

    A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the apparent radar centroid (ARC) of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint-induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation based on a ray-casting procedure is well suited for use on most massively parallel computing platforms and could lead to the development of a near real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.

  8. The contribution of timbre attributes to musical tension.

    PubMed

    Farbood, Morwaread M; Price, Khen C

    2017-01-01

    Timbre is an auditory feature that has received relatively little attention in empirical work examining musical tension. In order to address this gap, an experiment was conducted to explore the contribution of several specific timbre attributes (inharmonicity, roughness, spectral centroid, spectral deviation, and spectral flatness) to the perception of tension. Listeners compared pairs of sounds representing low and high degrees of each attribute and indicated which sound was more tense. Although the response profiles showed that the high states corresponded with increased tension for all attributes, further analysis revealed that some attributes were strongly correlated with others. When qualitative factors, attribute correlations, and listener responses were all taken into account, there was fairly strong evidence that higher degrees of roughness, inharmonicity, and spectral flatness elicited higher tension. On the other hand, evidence that higher spectral centroid and spectral deviation corresponded to increases in tension was ambiguous.
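
Of the attributes listed above, the spectral centroid is the simplest to compute: the amplitude-weighted mean frequency of the magnitude spectrum. A minimal sketch (sample rate and test tones are illustrative):

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# One second of audio at an illustrative sample rate
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)                  # pure 1 kHz tone
bright = tone + 0.5 * np.sin(2 * np.pi * 3000 * t)   # added high partial
```

Adding energy at 3 kHz pulls the centroid upward, which is the sense in which a "brighter" sound has a higher spectral centroid.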

  9. Leucosome distribution in migmatitic paragneisses and orthogneisses: A record of self-organized melt migration and entrapment in a heterogeneous partially-molten crust

    NASA Astrophysics Data System (ADS)

    Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are found to be as low as 22 km in the south-west Deccan trap covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal, compositional, or petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
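
A minimal sketch of the conventional centroid method on a synthetic spectrum: the top depth comes from the high-wavenumber slope of ln A(k), the centroid depth from the low-wavenumber slope of ln(A(k)/k), and the bottom depth follows as z_b = 2 z_0 − z_t. The layer model, wavenumber bands, and depths below are illustrative assumptions, not values from this record:

```python
import numpy as np

# Illustrative magnetic layer: top depth z_t and bottom depth z_b (km)
z_t, z_b = 2.0, 30.0

def amp(k):
    """Assumed radially averaged amplitude spectrum of a magnetized layer."""
    return np.exp(-k * z_t) * (1.0 - np.exp(-k * (z_b - z_t)))

k_hi = np.linspace(0.2, 0.5, 50)     # high-wavenumber band (rad/km)
k_lo = np.linspace(0.002, 0.02, 50)  # low-wavenumber band (rad/km)

# Top depth from the slope of ln A(k) at high wavenumbers
z_t_est = -np.polyfit(k_hi, np.log(amp(k_hi)), 1)[0]
# Centroid depth from the slope of ln(A(k)/k) at low wavenumbers
z_0_est = -np.polyfit(k_lo, np.log(amp(k_lo) / k_lo), 1)[0]
# Depth to the bottom of magnetic sources
z_b_est = 2.0 * z_0_est - z_t_est
```

The recovered bottom depth is approximate because both slope fits are taken over finite bands rather than in the asymptotic limits, which is one reason modified centroid methods have been proposed.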

  10. Atmospheric turbulence temperature on the laser wavefront properties

    NASA Astrophysics Data System (ADS)

    Contreras López, J. C.; Ballesteros Díaz, A.; Tíjaro Rojas, O. J.; Torres Moreno, Y.

    2017-06-01

    Temperature is a physical magnitude: the higher it is, the more significant the random fluctuations of the refractive index, which produce a greater distortion of the wavefront and thus a displacement of its centroid. To observe the effect produced on a propagating laser beam by a turbulent medium strongly influenced by temperature, we experimented with two variable- and controllable-temperature systems designed as optical turbulence generators (OTG): a Turbulator and a parallelepiped glass container. The experimental setup uses three CMOS cameras and four temperature sensors, spatially distributed to synchronously acquire information on the laser beam wavefront and the turbulence temperature, respectively. The acquired information was analyzed with the MATLAB® software tool, which computes the position, as a function of time, of the laser beam center of mass and its deviations produced by the different turbulent conditions generated inside the two manufactured systems. The results are reflected in the statistical analysis of the centroid shifting.

  11. Mitigation of time-varying distortions in Nyquist-WDM systems using machine learning

    NASA Astrophysics Data System (ADS)

    Granada Torres, Jhon J.; Varughese, Siddharth; Thomas, Varghese A.; Chiuchiarelli, Andrea; Ralph, Stephen E.; Cárdenas Soto, Ana M.; Guerrero González, Neil

    2017-11-01

    We propose a machine learning-based nonsymmetrical demodulation technique relying on clustering to mitigate time-varying distortions derived from several impairments such as IQ imbalance, bias drift, phase noise and interchannel interference. Experimental results show that those impairments cause centroid movements in the received constellations seen in time-windows of 10k symbols in controlled scenarios. In our demodulation technique, the k-means algorithm iteratively identifies the cluster centroids in the constellation of the received symbols in short time windows by means of the optimization of decision thresholds for a minimum BER. We experimentally verified the effectiveness of this computationally efficient technique in multicarrier 16QAM Nyquist-WDM systems over 270 km links. Our nonsymmetrical demodulation technique outperforms the conventional QAM demodulation technique, reducing the OSNR requirement up to ∼0.8 dB at a BER of 1 × 10-2 for signals affected by interchannel interference.

  12. Towards high-resolution neutron imaging on IMAT

    NASA Astrophysics Data System (ADS)

    Minniti, T.; Tremsin, A. S.; Vitucci, G.; Kockelmann, W.

    2018-01-01

    IMAT is a new cold-neutron imaging facility at the neutron spallation source ISIS at the Rutherford Appleton Laboratory, U.K. The ISIS pulsed source enables energy-selective and energy-resolved neutron imaging via time-of-flight (TOF) techniques, which are available in addition to the white-beam neutron radiography and tomography options. A spatial resolution of about 50 μm for white-beam neutron radiography was achieved early in the IMAT commissioning phase. In this work we have made the first steps towards achieving higher spatial resolution. A white-beam radiography with 18 μm spatial resolution was achieved in this experiment. This result was made possible by using an event-counting neutron pixel detector based on micro-channel plates (MCP) coupled with a Timepix readout chip with 55 μm pixels, and by employing an event-centroiding technique. The prospects for energy-selective neutron radiography in this centroiding mode are discussed.

  13. Centroid measurement error of CMOS detector in the presence of detector noise for inter-satellite optical communications

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhou, Shihong; Ma, Jing; Tan, Liying; Shen, Tao

    2013-08-01

    CMOS is a good candidate tracking detector for satellite optical communication systems, offering a sub-window capability thanks to the development of APS (Active Pixel Sensor) technology. For inter-satellite optical communications it is critical to estimate the direction of the incident laser beam precisely by measuring the centroid position of the incident beam spot. The presence of detector noise results in measurement error, which degrades the tracking performance of such systems. In this research, the measurement error of a CMOS detector is derived taking detector noise into consideration. It is shown that the measurement error depends on the pixel noise, the size of the tracking sub-window (number of pixels), the intensity of the incident laser beam, and the relative size of the beam spot. The influences of these factors are analyzed by numerical simulation. We hope the results obtained in this research will be helpful in the design of CMOS tracking detectors for satellite optical communication systems.
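
The centroid position itself is typically the intensity-weighted center of mass over the tracking sub-window; the sketch below illustrates how additive pixel noise perturbs it. The spot size, noise level, and window size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def centroid(frame):
    """Intensity-weighted center of mass of a pixel frame, as (x, y)."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

# Gaussian beam spot at a known sub-pixel position in a 32x32 sub-window
n = 32
true_x, true_y = 15.3, 16.7
gy, gx = np.indices((n, n))
spot = 1000.0 * np.exp(-((gx - true_x) ** 2 + (gy - true_y) ** 2)
                       / (2.0 * 3.0 ** 2))

# Additive detector noise perturbs the measured centroid
noisy = spot + rng.normal(0.0, 5.0, size=spot.shape)
x_clean, y_clean = centroid(spot)
x_noisy, y_noisy = centroid(noisy)
```

On the clean frame the estimator recovers the sub-pixel position almost exactly; on the noisy frame the error grows with the pixel noise and with the number of pixels in the sub-window, consistent with the dependencies listed in the abstract.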

  14. 2-(4,5-Dihydro-1H-imidazol-2-yl)­pyridine

    PubMed Central

    Kia, Reza; Fun, Hoong-Kun; Kargar, Hadi

    2009-01-01

    In the mol­ecule of the title compound, C8H9N3, a new imidazoline derivative, the six- and five-membered rings are slightly twisted away from each other, forming a dihedral angle of 7.96 (15)°. In the crystal structure, neighbouring mol­ecules are linked together by inter­molecular N—H⋯N hydrogen bonds into extended one-dimensional chains along the a axis. The pyridine N atom is in close proximity to a carbon-bound H atom of the imidazoline ring, with an H⋯N distance of 2.70 Å, which is slightly shorter than the sum of the van der Waals radii of these atoms (2.75 Å). The crystal structure is further stabilized by inter­molecular C—H⋯π and π–π inter­actions (centroid-to-centroid distance 3.853 Å). PMID:21582505

  15. Orphenadrinium picrate picric acid.

    PubMed

    Fun, Hoong-Kun; Hemamalini, Madhukar; Siddaraju, B P; Yathirajan, H S; Narayana, B

    2010-02-24

    The asymmetric unit of the title compound N,N-dimethyl-2-[(2-methylphenyl)phenylmethoxy]ethanaminium picrate picric acid, C(18)H(24)NO(+)·C(6)H(2)N(3)O(7)(-)·C(6)H(3)N(3)O(7), contains one orphenadrinium cation, one picrate anion and one picric acid molecule. In the orphenadrine cation, the two aromatic rings form a dihedral angle of 70.30 (7)°. There is an intramolecular O-H⋯O hydrogen bond in the picric acid molecule, which generates an S(6) ring motif. In the crystal structure, the orphenadrine cations, picrate anions and picric acid molecules are connected by strong intermolecular N-H⋯O hydrogen bonds, π⋯π interactions between the benzene rings of cations and anions [centroid-centroid distance = 3.5603 (9) Å] and weak C-H⋯O hydrogen bonds, forming a three-dimensional network.

  16. Simplex-centroid mixture formulation for optimised composting of kitchen waste.

    PubMed

    Abdullah, N; Chin, N L

    2010-11-01

    Composting is a good recycling method to fully utilise the organic wastes present in kitchen waste, owing to the high content of nutritious matter within the waste. In the present study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the desired initial moisture content and carbon-to-nitrogen (CN) ratio for an effective composting process. The best mixture was 48.5% V, 17.7% F and 33.7% N for blends with newspaper, while for blends with onion peels the mixture proportion was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fall within the acceptable limits of 50% to 65% moisture content and a CN ratio of 20-40, and were also validated experimentally. Copyright 2010 Elsevier Ltd. All rights reserved.
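
A simplex-centroid design for q components consists of the 2^q − 1 blends in which every non-empty subset of components is mixed in equal proportions. A minimal generator (component labels follow the abstract; the generator itself is generic):

```python
from fractions import Fraction
from itertools import combinations

def simplex_centroid(components):
    """All 2**q - 1 design points of a simplex-centroid mixture design:
    every non-empty subset of components blended in equal proportions."""
    points = []
    for r in range(1, len(components) + 1):
        for subset in combinations(components, r):
            share = Fraction(1, r)
            points.append({c: (share if c in subset else Fraction(0))
                           for c in components})
    return points

# Vegetable scraps, fish processing waste, newspaper (labels from the abstract)
design = simplex_centroid(["V", "F", "N"])
```

For three components this yields seven runs: three pure blends, three 50/50 binary blends, and the 1/3-1/3-1/3 ternary centroid. Optimised proportions such as those reported above are then obtained by modelling the measured responses over design points of this kind.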

  17. RADIO ASTROMETRY OF THE CLOSE ACTIVE BINARY HR 5110

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbuhl, E.; Mutel, R. L.; Lynch, C.

    2015-09-20

    The close active binary HR 5110 was observed at six epochs over 26 days using a global very long baseline interferometry array at 15.4 GHz. We used phase referencing to determine the position of the radio centroid at each epoch with an uncertainty significantly smaller than the component separation. After correcting for proper motion and parallax, we find that the centroid locations of all six epochs have barycenter separations consistent with an emission source located on the KIV secondary, and not in an interaction region between the stars or on the F primary. We used a homogeneous power-law gyrosynchrotron emission model to reproduce the observed flux densities and fractional circular polarization. The resulting ranges of mean magnetic field strength and relativistic electron densities are of the order of 10 G and 10{sup 5} cm{sup −3}, respectively, in the source region.

  18. Collective circular motion in synchronized and balanced formations with second-order rotational dynamics

    NASA Astrophysics Data System (ADS)

    Jain, Anoop; Ghose, Debasish

    2018-01-01

    This paper considers collective circular motion of multi-agent systems in which all the agents are required to traverse different circles or a common circle at a prescribed angular velocity. It is required to achieve these collective motions with the heading angles of the agents synchronized or balanced. In synchronization, the agents and their centroid have a common velocity direction, while in balancing, the movement of agents causes the location of the centroid to become stationary. The agents are initially considered to move at unit speed around individual circles at different angular velocities. It is assumed that the agents are subjected to limited communication constraints, and exchange relative information according to a time-invariant undirected graph. We present suitable feedback control laws for each of these motion coordination tasks by considering a second-order rotational dynamics of the agent. Simulations are given to illustrate the theoretical findings.

  19. voomDDA: discovery of diagnostic biomarkers and classification of RNA-seq data.

    PubMed

    Zararsiz, Gokmen; Goksuluk, Dincer; Klaus, Bernd; Korkmaz, Selcuk; Eldem, Vahap; Karabulut, Erdem; Ozturk, Ahmet

    2017-01-01

    RNA-Seq is a recent and efficient technique that uses the capabilities of next-generation sequencing technology for characterizing and quantifying transcriptomes. One important task using gene-expression data is to identify a small subset of genes that can be used to build diagnostic classifiers, particularly for cancer diseases. Microarray-based classifiers are not directly applicable to RNA-Seq data due to its discrete nature. Overdispersion is another problem that requires careful modeling of the mean-variance relationship of RNA-Seq data. In this study, we present voomDDA classifiers: variance modeling at the observational level (voom) extensions of the nearest shrunken centroids (NSC) and the diagonal discriminant classifiers. VoomNSC is one of these classifiers and brings the voom and NSC approaches together for the purpose of gene-expression-based classification. For this purpose, we propose weighted statistics and put these weighted statistics into the NSC algorithm. The voomNSC is a sparse classifier that models the mean-variance relationship using the voom method and incorporates voom's precision weights into the NSC classifier via weighted statistics. A comprehensive simulation study was designed, and four real datasets were used for performance assessment. The overall results indicate that voomNSC performs as the sparsest classifier. It also provides the most accurate results together with power-transformed Poisson linear discriminant analysis, rlog-transformed support vector machines, and random forests algorithms. In addition to prediction purposes, the voomNSC classifier can be used to identify potential diagnostic biomarkers for a condition of interest. Through this work, statistical learning methods proposed for microarrays can be reused for RNA-Seq data. An interactive web application is freely available at http://www.biosoft.hacettepe.edu.tr/voomDDA/.
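
A stripped-down sketch of the nearest-shrunken-centroids idea underlying voomNSC: class centroids are soft-thresholded toward the overall centroid, so uninformative genes drop out of the classifier. This toy version omits the within-class standardization, class priors, and voom precision weights of the real methods; all data and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_nsc(X, y, delta):
    """Shrink each class centroid toward the overall centroid by
    soft-thresholding the per-gene difference with threshold delta."""
    overall = X.mean(axis=0)
    classes = np.unique(y)
    shrunken = []
    for c in classes:
        diff = X[y == c].mean(axis=0) - overall
        diff = np.sign(diff) * np.maximum(np.abs(diff) - delta, 0.0)
        shrunken.append(overall + diff)
    return classes, np.array(shrunken)

def predict_nsc(X, classes, centroids):
    """Assign each sample to the class of its nearest shrunken centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy expression matrix: 100 samples, 50 genes, only the first 5 informative
X = rng.normal(size=(100, 50))
y = np.repeat([0, 1], 50)
X[y == 1, :5] += 2.0

classes, centroids = fit_nsc(X, y, delta=0.5)
acc = float((predict_nsc(X, classes, centroids) == y).mean())
```

Genes whose shrunken difference hits zero in every class end up with identical centroid entries across classes and therefore no influence on the prediction, which is what makes the classifier sparse.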

  20. Back-arc extension in the Andaman Sea: Tectonic and magmatic processes imaged by high-precision teleseismic double-difference earthquake relocation

    NASA Astrophysics Data System (ADS)

    Diehl, T.; Waldhauser, F.; Cochran, J. R.; Kamesh Raju, K. A.; Seeber, L.; Schaff, D.; Engdahl, E. R.

    2013-05-01

    The geometry, kinematics, and mode of back-arc extension along the Andaman Sea plate boundary are refined using a new set of significantly improved hypocenters, global centroid moment tensor (CMT) solutions, and high-resolution bathymetry. By applying cross-correlation and double-difference (DD) algorithms to regional and teleseismic waveforms and arrival times from International Seismological Centre and National Earthquake Information Center bulletins (1964-2009), we resolve the fine-scale structure and spatiotemporal behavior of active faults in the Andaman Sea. The new data reveal that back-arc extension is primarily accommodated at the Andaman Back-Arc Spreading Center (ABSC) at 10°, which hosted three major earthquake swarms in 1984, 2006, and 2009. Short-term spreading rates estimated from extensional moment tensors account for less than 10% of the long-term 3.0-3.8 cm/yr spreading rate, indicating that spreading by intrusion and the formation of new crust make up the difference. A spatiotemporal analysis of the swarms and Coulomb-stress modeling show that dike intrusions are the primary driver for brittle failure in the ABSC. While the spreading direction is close to ridge normal, it is oblique to the adjacent transforms. The resulting component of E-W extension across the transforms is expressed by deep basins on either side of the rift and a change to extensional faulting along the West Andaman fault system after the Mw = 9.2 Sumatra-Andaman earthquake of 2004. A possible skew in slip vectors of earthquakes in the eastern part of the ABSC indicates an en-echelon arrangement of extensional structures, suggesting that the present segment geometry is not in equilibrium with current plate-motion demands and that the ridge is undergoing readjustment.
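
    The comparison between seismically expressed and long-term spreading can be illustrated with a Kostrov-style bookkeeping calculation: summed seismic moment implies a total slip across the fault zone, which divided by the catalog duration gives an average opening rate. All numerical values below (summed moment, fault dimensions, shear modulus) are illustrative assumptions, not values from the study; only the 1964-2009 catalog span and the 3.0-3.8 cm/yr long-term rate come from the abstract.

```python
def seismic_extension_rate(m0_sum, mu, length, width, years):
    """Average opening rate (m/yr) implied by summed seismic moment.

    Total seismic slip across a fault zone is sum(M0) / (mu * fault area);
    dividing by the catalog duration gives an average rate.
    """
    slip_m = m0_sum / (mu * length * width)
    return slip_m / years

# Illustrative inputs (assumed, except the 45-yr catalog span 1964-2009):
rate_cm_yr = 100 * seismic_extension_rate(
    m0_sum=5e18,   # summed extensional seismic moment, N m (illustrative)
    mu=30e9,       # crustal shear modulus, Pa (typical value)
    length=100e3,  # ridge-segment length, m (illustrative)
    width=10e3,    # seismogenic width, m (illustrative)
    years=45,      # catalog duration, 1964-2009
)
```

    With these inputs the seismic rate comes out near 0.4 cm/yr, i.e. roughly a tenth of the 3.0-3.8 cm/yr long-term rate, which is the kind of deficit the abstract attributes to aseismic intrusion and the formation of new crust.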

Top