Sample records for haar werk vrouwenarbeid

  1. Discrete Haar transform and protein structure.

    PubMed

    Morosetti, S

    1997-12-01

    The discrete Haar transform of the sequence of the backbone dihedral angles (phi and psi) was performed over a set of X-ray protein structures of high resolution from the Brookhaven Protein Data Bank. Afterwards, the new dihedral angles were calculated by the inverse transform, using a growing number of Haar functions, from the lower to the higher degree. New structures were obtained using these dihedral angles, with standard values for bond lengths and angles, and with omega = 0 degrees. The reconstructed structures were compared with the experimental ones and analyzed by visual inspection and statistical analysis. When half of the Haar coefficients were used, none of the reconstructed structures had yet collapsed into a tertiary fold, but most of the secondary motifs were already realized. These results indicate a substantial separation of structural information in the space of the Haar transform, with the secondary structural information mainly present in the Haar coefficients of lower degree and the tertiary information in the higher-degree coefficients. Because of this separation, the representation of folded structures in the space of the Haar transform seems a promising candidate for addressing the problem of premature convergence in genetic algorithms.
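
    A minimal sketch of the partial-reconstruction experiment (assuming PyWavelets and NumPy, with synthetic angles standing in for the PDB-derived phi/psi values; this is not the authors' code):

    ```python
    # Sketch: discrete Haar transform of a dihedral-angle sequence, then inverse
    # transform using only the lower-degree (coarse) coefficients.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    angles = rng.uniform(-180.0, 180.0, size=128)   # placeholder phi (or psi) angles, degrees

    coeffs = pywt.wavedec(angles, "haar")           # full multilevel Haar transform

    # Keep the approximation and the coarser half of the detail levels,
    # zeroing the finer-scale details, then invert.
    n_detail = len(coeffs) - 1
    kept = [coeffs[0]]
    for i, d in enumerate(coeffs[1:], start=1):
        kept.append(d if i <= n_detail // 2 else np.zeros_like(d))

    reconstructed = pywt.waverec(kept, "haar")
    print("max reconstruction error (deg):", np.max(np.abs(reconstructed - angles)))
    ```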

  2. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p) provide good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
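
    A generic sketch of Haar-domain thresholding of Poisson counts is given below; it uses an arbitrary magnitude cut rather than the Bi-Haar filter bank and p-value-controlled hypothesis tests of the paper, and assumes PyWavelets and NumPy:

    ```python
    # Sketch: denoise a 1-D Poisson count signal by zeroing small Haar detail
    # coefficients. A real implementation would replace the magnitude cut with a
    # per-coefficient hypothesis test controlling the false positive rate.
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    intensity = 5.0 + 3.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 256))
    counts = rng.poisson(intensity).astype(float)    # observed Poisson data

    coeffs = pywt.wavedec(counts, "haar", level=4)
    approx, details = coeffs[0], coeffs[1:]
    details = [np.where(np.abs(d) > 3.0, d, 0.0) for d in details]

    estimate = pywt.waverec([approx] + details, "haar")
    print("residual RMS:", np.sqrt(np.mean((estimate - intensity) ** 2)))
    ```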

  3. Obscenity Detection Using Haar-Like Features and Gentle Adaboost Classifier

    PubMed Central

    Mustafa, Rashed; Min, Yang; Zhu, Dingju

    2014-01-01

    A large area of exposed skin in an image is often taken as an indicator of obscenity. Relying on this fact alone can flag many false positives containing skin-like objects and can miss images with little exposed skin that nonetheless show erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image content. The nipple is treated as an erotogenic organ for identifying pornographic content in images. In this research a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features are used to ensure detection accuracy. A skin filter applied prior to detection makes the system more robust. The experiments showed that the haar-cascade classifier performs well in terms of accuracy, whereas the train-cascade classifier is preferable when detection time matters. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier. PMID:25003153

  4. Obscenity detection using haar-like features and Gentle Adaboost classifier.

    PubMed

    Mustafa, Rashed; Min, Yang; Zhu, Dingju

    2014-01-01

    A large area of exposed skin in an image is often taken as an indicator of obscenity. Relying on this fact alone can flag many false positives containing skin-like objects and can miss images with little exposed skin that nonetheless show erotogenic human body parts. This paper presents a novel method for detecting nipples in pornographic image content. The nipple is treated as an erotogenic organ for identifying pornographic content in images. In this research a Gentle Adaboost (GAB) haar-cascade classifier and haar-like features are used to ensure detection accuracy. A skin filter applied prior to detection makes the system more robust. The experiments showed that the haar-cascade classifier performs well in terms of accuracy, whereas the train-cascade classifier is preferable when detection time matters. To validate the results, we used 1198 positive samples containing nipple objects and 1995 negative images. The detection rates for the haar-cascade and train-cascade classifiers are 0.9875 and 0.8429, respectively. The detection time is 0.162 seconds for the haar-cascade classifier and 0.127 seconds for the train-cascade classifier.
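
    A minimal OpenCV sketch of the Haar-cascade detection step is shown below; the cascade file and image names are hypothetical placeholders, since the trained cascade from the paper is not distributed with OpenCV, and the paper's skin filter would run before this stage:

    ```python
    # Sketch of running a trained Haar cascade with OpenCV.
    # "nipple_cascade.xml" and "test_image.jpg" are hypothetical placeholders.
    import cv2

    cascade = cv2.CascadeClassifier("nipple_cascade.xml")   # hypothetical trained model
    image = cv2.imread("test_image.jpg")                    # hypothetical input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    detections = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24)
    )
    for (x, y, w, h) in detections:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("detections.jpg", image)
    ```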

  5. Preprocessing of PHERMEX flash radiographic images with Haar and adaptive filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brolley, J.E.

    1978-11-01

    Work on image preparation has continued with the application of high-sequency boosting via Haar filtering. This is useful in developing line or edge structures. Widrow LMS adaptive filtering has also been shown to be useful in developing edge structure in special problems. Shadow effects can be obtained with the latter which may be useful for some problems. Combined Haar and adaptive filtering is illustrated for a PHERMEX image.

  6. The convergence of double Fourier-Haar series over homothetic copies of sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oniani, G. G., E-mail: oniani@atsu.edu.ge

    The paper is concerned with the convergence of double Fourier-Haar series with partial sums taken over homothetic copies of a given bounded set W ⊂ R_+^2 containing the intersection of some neighbourhood of the origin with R_+^2. It is proved that for a set W from a fairly broad class (in particular, for convex W) there are two alternatives: either the Fourier-Haar series of an arbitrary function f ∈ L([0,1]^2) converges almost everywhere, or L ln^+ L([0,1]^2) is the best integral class in which the double Fourier-Haar series converges almost everywhere. Furthermore, a characteristic property is obtained which distinguishes which of the two alternatives is realized for a given W. Bibliography: 12 titles.

  7. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds instead of in the tens Hyper spectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such Hyper spectral data provides a challenge to the current technique for analyzing data. Conventional classification methods may not be useful without dimension reduction pre-processing. So dimension reduction has become a significant part of Hyper spectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using Wavelet Decomposition could be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture the polynomial trends while Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets are compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction has more separate classes and yields better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the performance of classification of Daubechies wavelets is better as compared to Haar wavelet while Daubechies takes more time compare to Haar wavelet. The experimental results demonstrate the classification system consistently provides over 84% classification accuracy.
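
    A rough sketch of this kind of wavelet-based spectral reduction (assuming PyWavelets, with a synthetic per-pixel spectrum rather than the study's hyperspectral data): keep only the approximation coefficients of a multilevel DWT and compare the Haar and Daubechies (db4) bases.

    ```python
    # Sketch: reduce the spectral dimension of one pixel's spectrum by keeping only
    # the approximation coefficients of a 3-level DWT, for Haar vs. db4.
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    spectrum = rng.random(256)                 # one pixel, 256 synthetic spectral bands

    for wavelet in ("haar", "db4"):
        coeffs = pywt.wavedec(spectrum, wavelet, level=3)
        reduced = coeffs[0]                    # approximation = reduced feature vector
        print(wavelet, "reduced length:", len(reduced))
    ```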

  8. Testing of Haar-Like Feature in Region of Interest Detection for Automated Target Recognition (ATR) System

    NASA Technical Reports Server (NTRS)

    Zhang, Yuhan; Lu, Dr. Thomas

    2010-01-01

    The objectives of this project were to develop a ROI (Region of Interest) detector using Haar-like feature similar to the face detection in Intel's OpenCV library, implement it in Matlab code, and test the performance of the new ROI detector against the existing ROI detector that uses Optimal Trade-off Maximum Average Correlation Height filter (OTMACH). The ROI detector included 3 parts: 1, Automated Haar-like feature selection in finding a small set of the most relevant Haar-like features for detecting ROIs that contained a target. 2, Having the small set of Haar-like features from the last step, a neural network needed to be trained to recognize ROIs with targets by taking the Haar-like features as inputs. 3, using the trained neural network from the last step, a filtering method needed to be developed to process the neural network responses into a small set of regions of interests. This needed to be coded in Matlab. All the 3 parts needed to be coded in Matlab. The parameters in the detector needed to be trained by machine learning and tested with specific datasets. Since OpenCV library and Haar-like feature were not available in Matlab, the Haar-like feature calculation needed to be implemented in Matlab. The codes for Adaptive Boosting and max/min filters in Matlab could to be found from the Internet but needed to be integrated to serve the purpose of this project. The performance of the new detector was tested by comparing the accuracy and the speed of the new detector against the existing OTMACH detector. The speed was referred as the average speed to find the regions of interests in an image. The accuracy was measured by the number of false positives (false alarms) at the same detection rate between the two detectors.
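
    The integral-image trick that makes Haar-like features cheap to evaluate can be illustrated with a small NumPy sketch (not the project's Matlab code): any rectangle sum reduces to four lookups in a cumulative-sum image, and a two-rectangle Haar-like feature is the difference of two such sums.

    ```python
    # Sketch of Haar-like feature evaluation via an integral image; the image is random.
    import numpy as np

    rng = np.random.default_rng(3)
    img = rng.random((64, 64))

    # Integral image with a zero row/column prepended so rectangle sums need no bounds checks.
    ii = np.zeros((65, 65))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)

    def rect_sum(r, c, h, w):
        """Sum of img[r:r+h, c:c+w] from four integral-image lookups."""
        return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

    def two_rect_haar(r, c, h, w):
        """Horizontal two-rectangle Haar-like feature: left half minus right half."""
        return rect_sum(r, c, h, w // 2) - rect_sum(r, c + w // 2, h, w // 2)

    print(two_rect_haar(10, 10, 8, 16))
    ```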

  9. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated by the practical application of vehicle recognition, achieving the expected high accuracy which is comparable to previous work.

  10. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in solving this problem is played by the classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive medium- and high-spatial-resolution information from spacecraft and to conduct hyperspectral measurements; spacecraft have tens or hundreds of spectral channels. To process the images, the apparatus of discrete orthogonal transforms, namely wavelet transforms, is used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. Methods: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Novelty: the processing of archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations are determined.

  11. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, which is projected onto the color target image using structured illumination with red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization processes are added during data acquisition. During data collection, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. The monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, using the inverse Haar wavelet transform. A color fusion algorithm is then applied to the three monochrome grayscale images to obtain the final color image. Following this imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images are of good quality. The Haar wavelet reconstruction method used here reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.

  12. Alcoholism detection in magnetic resonance imaging by Haar wavelet transform and back propagation neural network

    NASA Astrophysics Data System (ADS)

    Yu, Yali; Wang, Mengxia; Lima, Dimas

    2018-04-01

    In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform Haar wavelet transform and principal component analysis. Finally, we use back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than db4 wavelet and sym3 wavelet.
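
    The pipeline described above (Haar wavelet transform, PCA, then a back-propagation network) can be sketched roughly as follows, assuming PyWavelets and scikit-learn, with random arrays standing in for the contrast-equalized brain slices and synthetic labels:

    ```python
    # Rough sketch of a Haar-wavelet + PCA + backpropagation-network classifier.
    # Random arrays and labels replace the MRI data of the study.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    slices = rng.random((40, 64, 64))          # 40 fake brain slices
    labels = rng.integers(0, 2, size=40)       # 0 = control, 1 = alcoholism (synthetic)

    def haar_features(image):
        # Single-level 2-D Haar transform; flatten the approximation subband.
        cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
        return cA.ravel()

    X = np.stack([haar_features(s) for s in slices])
    X = PCA(n_components=10).fit_transform(X)

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```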

  13. On orthogonal projectors induced by compact groups and Haar measures

    NASA Astrophysics Data System (ADS)

    Niezgoda, Marek

    2008-02-01

    We study the difference of two orthogonal projectors induced by compact groups of linear operators acting on a vector space. An upper bound for the difference is derived using the Haar measures of the groups. Particular attention is paid to finite groups. Some applications are given for complex matrices and unitarily invariant norms. Majorization inequalities of Fan and Hoffmann and of Causey are rediscovered.

  14. Parametric instability analysis of truncated conical shells using the Haar wavelet method

    NASA Astrophysics Data System (ADS)

    Dai, Qiyi; Cao, Qingjie

    2018-05-01

    In this paper, the Haar wavelet method is employed to analyze the parametric instability of truncated conical shells under static and time-dependent periodic axial loads. The present work is based on the Love first-approximation theory for classical thin shells. The displacement field is expressed as a Haar wavelet series in the axial direction and trigonometric functions in the circumferential direction. The partial differential equations are then reduced to a system of coupled Mathieu-type ordinary differential equations describing the dynamic instability behavior of the shell. Using Bolotin's method, the first-order and second-order approximations of the principal instability regions are determined. The correctness of the present method is examined by comparing the results with those in the literature, and very good agreement is observed. The difference between the first-order and second-order approximations of the principal instability regions for tensile and compressive loads is also investigated. Finally, numerical results are presented to bring out the influences of various parameters, such as static load factors, boundary conditions and shell geometrical characteristics, on the domains of parametric instability of conical shells.

  15. 75 FR 57664 - Airworthiness Directives; GROB-WERKE Model G120A Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ... Airworthiness Directives; GROB-WERKE Model G120A Airplanes AGENCY: Federal Aviation Administration (FAA), DOT.... The authority citation for part 39 continues to read as follows: Authority: 49 U.S.C. 106(g), 40113..., PART B, dated May 18, 2010. (g) You may at any time complete GROB Aircraft AG Repair Instruction No. RI...

  16. ON THE UNIQUENESS OF HAAR SERIES CONVERGENT IN THE METRICS OF L_p\lbrack0,\,1\rbrack, 0<p<1, AND IN MEASURE

    NASA Astrophysics Data System (ADS)

    Talalyan, A. A.

    1986-02-01

    It is established that if the partial sums S_n(x) of a Haar series \sum a_n\chi_n(x) converge to f(x)\in L_p\lbrack0,\,1\rbrack, 0<p<1, at the rate \int_0^1\vert S_n-f\vert^p dx=o(1/n^{1/p}), then f(x) is A-integrable and a_n=(A)\int_0^1 f(x)\chi_n(x)dx, for n=1,\,2,\,\dots. Analogous theorems are proved also for the case where Haar series converge in the metric of L_p\lbrack0,\,1\rbrack, 0<p<1, over some subsequences of partial sums. The sharpness of these theorems is also proved. Bibliography: 10 titles.

  17. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, this paper proposes a new algorithm that focuses on improving the speed of LiveWire. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Secondly, the LiveWire shortest path is computed with a control-point-set direction search that exploits the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward-projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform and of the optimal-path search based on control-point-set direction search: the former decomposes and reconstructs the image quickly and is more consistent with its texture features, while the latter reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively, improving both the execution efficiency and the robustness of the algorithm.

  18. Application of the Haar Wavelet to the Analysis of Plasma and Atmospheric Fluctuations

    NASA Astrophysics Data System (ADS)

    Maslov, S. A.; Kharchevsky, A. A.; Smirnov, V. A.

    2017-12-01

    The parameters of turbulence measured by means of a Doppler reflectometer at the plasma periphery in an L-2M stellarator and in atmospheric vortices (typhoons and tornadoes) are investigated using the wavelet methods with involvement of the Haar function. The periods of time taken for the transition (a bound of parameters) to occur in the L-2M stellarator plasma and in atmospheric processes are estimated. It is shown that high-and low-frequency oscillations of certain parameters, in particular, pressure, that occur in atmospheric vortices decay or increase at different moments of time, whereas the density fluctuation amplitudes that occur in plasma at different frequencies vary in a synchronous manner.

  19. SU-F-BRB-12: A Novel Haar Wavelet Based Approach to Deliver Non-Coplanar Intensity Modulated Radiotherapy Using Sparse Orthogonal Collimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Ruan, D; Low, D

    2015-06-15

    Purpose: Existing efforts to replace complex multileaf collimator (MLC) by simple jaws for intensity modulated radiation therapy (IMRT) resulted in unacceptable compromise in plan quality and delivery efficiency. We introduce a novel fluence map segmentation method based on compressed sensing for plan delivery using a simplified sparse orthogonal collimator (SOC) on the 4π non-coplanar radiotherapy platform. Methods: 4π plans with varying prescription doses were first created by automatically selecting and optimizing 20 non-coplanar beams for 2 GBM, 2 head & neck, and 2 lung patients. To create deliverable 4π plans using SOC, which are two pairs of orthogonal collimators with 1 to 4 leaves in each collimator bank, a Haar Fluence Optimization (HFO) method was used to regulate the number of Haar wavelet coefficients while maximizing the dose fidelity to the ideal prescription. The plans were directly stratified utilizing the optimized Haar wavelet rectangular basis. A matching number of deliverable segments were stratified for the MLC-based plans. Results: Compared to the MLC-based 4π plans, the SOC-based 4π plans increased the average PTV dose homogeneity from 0.811 to 0.913. PTV D98 and D99 were improved by 3.53% and 5.60% of the corresponding prescription doses. The average mean and maximal OAR doses slightly increased by 0.57% and 2.57% of the prescription doses. The average number of segments ranged between 5 and 30 per beam. The collimator travel time to create the segments decreased with increasing leaf numbers in the SOC. The two and four leaf designs were 1.71 and 1.93 times more efficient, on average, than the single leaf design. Conclusion: The innovative dose domain optimization based on compressed sensing enables uncompromised 4π non-coplanar IMRT dose delivery using simple rectangular segments that are deliverable using a sparse orthogonal collimator, which only requires 8 to 16 leaves yet is unlimited in modulation resolution. This work

  20. ON THE BASIS PROPERTY OF THE HAAR SYSTEM IN THE SPACE \\mathscr{L}^{p(t)}(\\lbrack0,\\,1\\rbrack) AND THE PRINCIPLE OF LOCALIZATION IN THE MEAN

    NASA Astrophysics Data System (ADS)

    Sharapudinov, I. I.

    1987-02-01

    Let p=p(t) be a measurable function defined on \lbrack0,\,1\rbrack. If p(t) is essentially bounded on \lbrack0,\,1\rbrack, denote by \mathscr{L}^{p(t)}(\lbrack0,\,1\rbrack) the set of measurable functions f defined on \lbrack0,\,1\rbrack for which \int_0^1\vert f(t)\vert^{p(t)}dt<\infty. The space \mathscr{L}^{p(t)}(\lbrack0,\,1\rbrack) with p(t)\geqslant 1 is a normed space with norm \displaystyle \vert\vert f\vert\vert _p=\inf\bigg\{\alpha>0:\,\int_0^1\bigg\vert\frac{f(t)}{\alpha}\bigg\vert^{p(t)}dt\leqslant1\bigg\}. This paper examines the question of whether the Haar system is a basis in \mathscr{L}^{p(t)}(\lbrack0,\,1\rbrack). Conditions that are in a certain sense definitive on the function p(t) in order that the Haar system be a basis of \mathscr{L}^{p(t)}(\lbrack0,\,1\rbrack) are obtained. The concept of a localization principle in the mean is introduced, and its connection with the space \mathscr{L}^{p(t)}(\lbrack0,\,1\rbrack) is exhibited. Bibliography: 2 titles.

  1. Secure annotation for medical images based on reversible watermarking in the Integer Fibonacci-Haar transform domain

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Neri, A.

    2011-03-01

    The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice is inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform (the Integer Fibonacci-Haar transform), a secure cryptographic scheme, and a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image, ii) the encryption of annotated images without causing loss of the embedded information, and iii) recovery of the original image after mark removal, owing to the complete reversibility of the process. Experimental results show the effectiveness of the proposed scheme.

  2. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate the information from different imaging modalities to get a composite image which is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of protein and genome expression. The fusion method of GFP and phase contrast images based on complex shearlet transform (CST) is proposed in this paper. Firstly the GFP image is converted to IHS model and its intensity component is obtained. Secondly the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency subbands and the high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.

  3. Approximation of functions in variable-exponent Lebesgue and Sobolev spaces by finite Fourier-Haar series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, I I

    2014-02-28

    The paper deals with the space L^{p(x)} consisting of classes of real measurable functions f(x) on [0,1] with finite integral ∫_0^1 |f(x)|^{p(x)} dx. If 1 ≤ p(x) ≤ \bar p < ∞, then the space L^{p(x)} can be made into a Banach space with the norm ‖f‖_{p(⋅)} = inf{α > 0 : ∫_0^1 |f(x)/α|^{p(x)} dx ≤ 1}. The inequality ‖f − Q_n(f)‖_{p(⋅)} ≤ c(p) Ω(f, 1/n)_{p(⋅)}, which is an analogue of the first Jackson theorem, is shown to hold for the finite Fourier-Haar series Q_n(f), provided that the variable exponent p(x) satisfies the condition |p(x) − p(y)| ln(1/|x − y|) ≤ c. Here, Ω(f, δ)_{p(⋅)} is the modulus of continuity in L^{p(x)} defined in terms of Steklov functions. If the function f(x) lies in the Sobolev space W^1_{p(⋅)} with variable exponent p(x), it is shown that ‖f − Q_n(f)‖_{p(⋅)} ≤ (c(p)/n) ‖f′‖_{p(⋅)}. Methods for estimating the deviation |f(x) − Q_n(f,x)| for f(x) ∈ W^1_{p(⋅)} at a given point x ∈ [0,1] are also examined. The value of sup_{f ∈ W^1_p(1)} |f(x) − Q_n(f,x)| is calculated in the case when p(x) ≡ p = const, where W^1_p(1) = {f ∈ W^1_p : ‖f′‖_{p(⋅)} ≤ 1}. Bibliography: 17 titles.

  4. 75 FR 47548 - Brass Sheet and Strip from Germany: Notice of Rescission of Antidumping Duty Administrative Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... Wieland-Werke AG, the respondent and German manufacturer of brass sheet and strip, we are now [[Page 47549... received a request from Wieland-Werke AG, a German producer and exporter, that the Department conduct an...

  5. SU-F-J-27: Segmentation of Prostate CBCT Images with Implanted Calypso Transponders Using Double Haar Wavelet Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Saleh, Z; Tang, X

    Purpose: Segmentation of prostate CBCT images is an essential step towards real-time adaptive radiotherapy. It is challenging for Calypso patients, as more artifacts are generated by the beacon transponders. We herein propose a novel wavelet-based segmentation algorithm for the rectum, bladder, and prostate of CBCT images with implanted Calypso transponders. Methods: Five hypofractionated prostate patients with daily CBCT were studied. Each patient had 3 Calypso transponder beacons implanted, and the patients were set up and treated with the Calypso tracking system. Two sets of CBCT images from each patient were studied. The structures (i.e. rectum, bladder, and prostate) were contoured by a trained expert, and these served as ground truth. For a given CBCT, the moving window-based Double Haar transformation is applied first to obtain the wavelet coefficients. Based on a user-defined point in the object of interest, a cluster-algorithm-based adaptive thresholding is applied to the low-frequency components of the wavelet coefficients, and a Lee-filter-theory-based adaptive thresholding is applied to the high-frequency components. In the next step, the wavelet reconstruction is applied to the thresholded wavelet coefficients. A binary/segmented image of the object of interest is therefore obtained. DICE, sensitivity, inclusiveness and ΔV were used to evaluate the segmentation result. Results: Considering all patients, the bladder has DICE, sensitivity, inclusiveness, and ΔV ranges of [0.81–0.95], [0.76–0.99], [0.83–0.94], [0.02–0.21]. For the prostate, the ranges are [0.77–0.93], [0.84–0.97], [0.68–0.92], [0.1–0.46]. For the rectum, the ranges are [0.72–0.93], [0.57–0.99], [0.73–0.98], [0.03–0.42]. Conclusion: The proposed algorithm appeared effective in segmenting prostate CBCT images in the presence of the Calypso artifacts. However, it is not robust in two scenarios: 1) a rectum with a significant amount of gas; 2) a prostate with very low contrast

  6. Probability density functions for CP-violating rephasing invariants

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Giasson, Nicolas; Marleau, Luc

    2018-05-01

    The implications of the anarchy principle on CP violation in the lepton sector are investigated. A systematic method is introduced to compute the probability density functions for the CP-violating rephasing invariants of the PMNS matrix from the Haar measure relevant to the anarchy principle. Contrary to the CKM matrix, which is hierarchical, it is shown that the Haar measure, and hence the anarchy principle, are very likely to lead to the observed PMNS matrix. Predictions on the CP-violating Dirac rephasing invariant |j_D| and Majorana rephasing invariant |j_1| are also obtained. They correspond to ⟨|j_D|⟩_Haar = π/105 ≈ 0.030 and ⟨|j_1|⟩_Haar = 1/(6π) ≈ 0.053 respectively, in agreement with the experimental hint from T2K of |j_D^exp| ≈ 0.032 ± 0.005 (or ≈ 0.033 ± 0.003) for the normal (or inverted) hierarchy.

  7. Gesammelte Werke / Collected Works

    NASA Astrophysics Data System (ADS)

    Schwarzschild, Karl; Voigt, Hans-Heinrich

    The well-known astronomer Karl Schwarzschild (1873-1916) is regarded as the founder of astrophysics and as an exceptionally talented researcher whose interests spanned a remarkably broad spectrum. His work on celestial mechanics, electrodynamics, and relativity theory demonstrates his great abilities as a mathematician and physicist who significantly influenced the science of his times. His investigations of photographic photometry, optics, and spectroscopy display his strengths as an observer who knew his instruments. But above all Schwarzschild pursued questions of astrophysics, addressing in particular stellar atmospheres, comets, and the structure and dynamics of stellar systems. The host of scientific works that he authored in his short life is now collected in the form of this three-volume complete works; it is supplemented by biographical material, notes from some of today's experts, and an essay by the Nobel Laureate S. Chandrasekhar.

  8. [Recognition of landscape characteristic scale based on two-dimension wavelet analysis].

    PubMed

    Gao, Yan-Ni; Chen, Wei; He, Xing-Yuan; Li, Xiao-Yu

    2010-06-01

    Three wavelet bases, i.e., Haar, Daubechies, and Symlet, were chosen to assess the validity of two-dimensional wavelet analysis for recognizing the characteristic scales of the urban, peri-urban, and rural landscapes of Shenyang. Because the transform scale of the two-dimensional wavelet must be an integer power of 2, some characteristic scales cannot be accurately recognized. Therefore, the pixel resolution of the images was resampled to 3, 3.5, 4, and 4.5 m to densify the scales in the analysis. It was shown that two-dimensional wavelet analysis works effectively in detecting characteristic scale. Haar, Daubechies, and Symlet were the optimal wavelet bases for the peri-urban, urban, and rural landscapes, respectively. Both the Haar basis and the Symlet basis performed well in recognizing the fine characteristic scale of the rural landscape and in detecting the boundary of the peri-urban landscape. The Daubechies basis and the Symlet basis could also be used to detect the boundaries of the urban landscape and the rural landscape, respectively.

  9. Forward collision warning based on kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Pu, Jinchuan; Liu, Jun; Zhao, Yong

    2017-07-01

    A vehicle detection and tracking system is one of the indispensable tools for reducing the occurrence of traffic accidents. The nearest vehicle is the one most likely to cause harm, so this paper concentrates on the nearest vehicle in the region of interest (ROI). For such a system, high accuracy, real-time operation and intelligence are the basic requirements. In this paper, we set up a system that combines the KCF tracking algorithm with a Haar-AdaBoost detection algorithm. The KCF algorithm reduces computation time and increases speed through cyclic shifts and diagonalization, satisfying the real-time requirement. At the same time, Haar features offer simple operations and high detection speed. The combination of these two algorithms gives an obvious improvement in the system's running rate compared with previous work. The detection result of the Haar-AdaBoost classifier provides the initial value for the KCF algorithm, which removes the KCF algorithm's drawback of requiring manual vehicle marking in the initialization phase and makes the system more intelligent. Haar detection and KCF tracking with Histogram of Oriented Gradients (HOG) features ensure the accuracy of the system. We evaluate the performance of the framework on a self-collected dataset. The experimental results demonstrate that the proposed method is robust and real-time; the algorithm adapts effectively to illumination variation, and even at night it meets the detection and tracking requirements, which is an improvement over previous work.

  10. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a highly distinctive pattern that can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used; one such method is the wavelet, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets was performed and a comparative analysis was conducted, from which conclusions were drawn. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, tests are conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; moreover, for coefficient cutting with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate, or number of significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).

  11. An uncertainty principle for unimodular quantum groups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crann, Jason; Université Lille 1 - Sciences et Technologies, UFR de Mathématiques, Laboratoire de Mathématiques Paul Painlevé - UMR CNRS 8524, 59655 Villeneuve d'Ascq Cédex; Kalantar, Mehrdad, E-mail: jason-crann@carleton.ca, E-mail: mkalanta@math.carleton.ca

    2014-08-15

    We present a generalization of Hirschman's entropic uncertainty principle for locally compact Abelian groups to unimodular locally compact quantum groups. As a corollary, we strengthen a well-known uncertainty principle for compact groups, and generalize the relation to compact quantum groups of Kac type. We also establish the complementarity of finite-dimensional quantum group algebras. In the non-unimodular setting, we obtain an uncertainty relation for arbitrary locally compact groups using the relative entropy with respect to the Haar weight as the measure of uncertainty. We also show that when restricted to q-traces of discrete quantum groups, the relative entropy with respect to the Haar weight reduces to the canonical entropy of the random walk generated by the state.

  12. Method and system for determining precursors of health abnormalities from processing medical records

    DOEpatents

    None, None

    2013-06-25

    Medical reports are converted to document vectors in computing apparatus and sampled by applying a maximum variation sampling function including a fitness function to the document vectors to reduce a number of medical records being processed and to increase the diversity of the medical records being processed. Linguistic phrases are extracted from the medical records and converted to s-grams. A Haar wavelet function is applied to the s-grams over the preselected time interval; and the coefficient results of the Haar wavelet function are examined for patterns representing the likelihood of health abnormalities. This confirms certain s-grams as precursors of the health abnormality and a parameter can be calculated in relation to the occurrence of such a health abnormality.

  13. Alternative Fuels Data Center

    Science.gov Websites

    Sonoma Clean Power (SCP) customers are eligible to receive a free JuiceNet-enabled EVSE from eMotorWerks; customers are also eligible to receive a free JuicePlug (smart grid adapter) to convert an existing EVSE to a JuiceNet-enabled unit.

  14. 76 FR 42681 - Brass Sheet and Strip From Germany: Notice of Rescission of Antidumping Duty Administrative Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Germany: Notice of Rescission of Antidumping Duty Administrative Review AGENCY: Import Administration... antidumping duty order on brass sheet and strip from Germany. The review covers one producer/exporter of brass sheet and strip from Germany, Wieland-Werke AG (``Wieland''). Based on a timely withdrawal of the...

  15. A modified multiscale peak alignment method combined with trilinear decomposition to study the volatile/heat-labile components in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes by HS-SPME-GC/MS.

    PubMed

    He, Min; Yan, Pan; Yang, Zhi-Yu; Zhang, Zhi-Min; Yang, Tian-Biao; Hong, Liang

    2018-03-15

    Head Space/Solid Phase Micro-Extraction (HS-SPME) coupled with Gas Chromatography/Mass Spectrometry (GC/MS) was used to determine the volatile/heat-labile components in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes. To handle co-eluting peaks across k samples, a trilinear structure was reconstructed to obtain the second-order advantage. Correcting the retention time (RT) shift across samples with multi-channel detection signals is vital for maintaining the trilinear structure, so a modified multiscale peak alignment (mMSPA) method is proposed in this paper. The peak position and peak width of the representative ion profile are first detected by mMSPA using the Continuous Wavelet Transform with the Haar wavelet as the mother wavelet (Haar CWT). Then, the raw shift is estimated by Fast Fourier Transform (FFT) cross-correlation. To obtain the optimal shift, Haar CWT is used again to detect subtle deviations, which are incorporated into the calculation. Here, to ensure that there is no alteration of peak shape, the alignment is performed in local domains of the data matrices, and all data points in the peak zone are moved via linear interpolation in the non-peak parts. Finally, the chemical components of interest in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes were analyzed by HS-SPME-GC/MS and mMSPA-alternating trilinear decomposition (ATLD) resolution. As a result, the concentration variation between the herbs and their pharmaceutical products can provide a scientific basis for establishing quality standards for traditional Chinese medicines. Copyright © 2018 Elsevier B.V. All rights reserved.
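
    The FFT cross-correlation step that mMSPA uses to estimate the raw retention-time shift between two ion profiles can be illustrated with a small NumPy sketch (synthetic Gaussian peaks, not GC/MS data):

    ```python
    # Sketch: estimate the shift between two chromatographic profiles by circular
    # cross-correlation computed with the FFT; the peaks are synthetic.
    import numpy as np

    t = np.arange(512)

    def gaussian_peak(center, width=5.0):
        return np.exp(-0.5 * ((t - center) / width) ** 2)

    reference = gaussian_peak(200)
    sample = gaussian_peak(212)              # same peak, shifted by 12 points

    xcorr = np.fft.ifft(np.fft.fft(sample) * np.conj(np.fft.fft(reference))).real
    lag = int(np.argmax(xcorr))
    if lag > len(t) // 2:                    # map to a signed shift
        lag -= len(t)
    print("estimated shift:", lag)           # expected: 12
    ```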

  16. A study of stationarity in time series by using wavelet transform

    NASA Astrophysics Data System (ADS)

    Dghais, Amel Abdoullah Ahmed; Ismail, Mohd Tahir

    2014-07-01

    In this work the core objective is to apply discrete wavelet transform (DWT) functions, namely the Haar, Daubechies, Symmlet, Coiflet and discrete approximation of the Meyer wavelets, to non-stationary financial time series data from the US stock market (DJIA30). The data consist of 2048 daily closing-index values from December 17, 2004 until October 23, 2012. The unit root test shows that the data are non-stationary in level. To study the stationarity of a time series, the autocorrelation function (ACF) is used. The results indicate that the Haar function yields the least noisy series compared with the Daubechies, Symmlet, Coiflet and discrete approximation of the Meyer wavelets. In addition, the DWT decomposition of the original data is less noisy than the DWT decomposition of the return time series.

  17. Rapportage POEMA-2 (Functie Faalanalyse, Database, Elektronica en Afdichtingen) (Report of POEMA-2 (Function Failure Analysis, Database, Electronics and Sealings))

    DTIC Science & Technology

    2006-12-01

    Fragments of a component listing (translated from Dutch): shock booster/transfer charge, material unknown, unknown explosive; ignition charge/flame booster, black powder; proximity electronics, unknown; battery, chromic acid; signal-processing unit, electronics; mechanical short-circuit switch, unknown, metal; arming assembly, unknown... representation of the electronics in the Medea proximity fuze. Because the fuze is fitted with a test adapter, the operation of the electronics can be...

  18. Konkordanz zu Schillers aesthetischen und philosophischen Schriften (Concordance of Schiller's Aesthetic and Philosophical Writings).

    ERIC Educational Resources Information Center

    Sanford, Gerlinde Ulm

    This document provides a computer-based concordance of the vocabulary used in Friedrich von Schiller's "Aesthetic and Philosophical Writings" as they appear in Volumes 20 and 21 of Schiller's "Werke," 1967 edition, edited by Benno von Wiese. The first section includes the entire text, each sentence numbered for research…

  19. Psychosynthesis Workbook

    ERIC Educational Resources Information Center

    Editor

    1975-01-01

    The theme for this issue is "gaining the freedom to be our true selves." The issue includes a "Who-Am-I" exercise, an overview by Betsie Carter-Haar on identification and integration of the self, and practical exercises for self-development. (HMV)

  20. Pattern recognition of concrete surface cracks and defects using integrated image processing algorithms

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Hortinela, Carlos C.; Garcia, Ramon G.; Baylon, Sunnycille; Ignacio, Alexander Joshua; Rivera, Marco Antonio; Sebastian, Jaimie

    2017-06-01

    Pattern recognition of concrete surface crack defects is very important in determining the stability of structures such as buildings, roads and bridges. Surface cracks are one of the subjects of inspection, diagnosis, and maintenance, as well as life prediction, for the safety of structures. Traditionally, determining defects and cracks on concrete surfaces is done manually by inspection; moreover, any internal defects in the concrete would require destructive testing for detection. The researchers created an automated surface crack detection system for concrete using image processing techniques including the Hough transform, LoG weighting, dilation, grayscale conversion, Canny edge detection and the Haar wavelet transform. An automatic surface crack detection robot is designed to capture the concrete surface by a sectoring method. Surface crack classification was done with a Haar-trained cascade object detector that uses both positive and negative samples, which proved that it is possible to identify surface crack defects effectively.

  1. JPRS Report, East Europe

    DTIC Science & Technology

    1990-04-27

    Garbled OCR fragments from intermixed captions: (12.5 t) were supplied by VEB Klement-Gottwald-Werk in Schwerin; the second shipbuilding consultation; planning... Industry; Y.V. Koksanov; Figure 10, the "Walter Ulbricht" full container ship (photos: A. Prehn/D. See-...); on 25 September, the minister (on the left). JPRS-EER-90

  2. NVAP-M Data and Information

    Atmospheric Science Data Center

    2016-04-27

    ... Vonder Haar, Science and Technology Corp. The NASA MEaSUREs program began in 2008 and has the goal of creating stable, ... observations." Geophys. Res. Lett., 39, L16802, doi:10.1029/2012GL052094. The heritage NASA Water Vapor Project ...

  3. Schools for Cities: Urban Strategies. NEA Series on Design.

    ERIC Educational Resources Information Center

    Haar, Sharon, Ed.

    This monograph presents papers from the 2000 Mayors' Institute on City Design and the public forum that followed it. Essays include: "Schools for Cities: Urban Strategies" (Sharon Haar); "Reenvisioning Schools; The Mayors' Questions" (Leah Ray); "Why Johnny Can't Walk to School" (Constance E. Beaumont); "Lessons…

  4. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)

  5. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.

  6. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to address the low efficiency and low accuracy of multi-view face recognition, to build a multi-view face recognition system, and to find a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the Adaboost algorithm. For face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigate a suitable monitoring scheme. The method works well for multi-view face recognition; it has been simulated and tested, showing satisfactory experimental results.

  7. 78 FR 8446 - Airworthiness Directives; GROB-WERKE Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Jersey Avenue SE., Washington, DC 20590. FOR FURTHER INFORMATION CONTACT: Taylor Martin, Aerospace Engineer, FAA, Small Airplane Directorate, 901 Locust, Room 301, Kansas City, Missouri 64106; telephone...

  8. 78 FR 2910 - Airworthiness Directives; GROB-WERKE Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-15

    ... condition on an aviation product. The MCAI describes the unsafe condition as cracks in the elevator trim tab... for the specified products. The MCAI states: On several Grob G 115 aeroplanes, elevator trim tab arms... the rear edge of the trim tab arm. This condition, if not detected and corrected, could lead to...

  9. 78 FR 23112 - Airworthiness Directives; Grob-Werke Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-18

    ... aviation product. The MCAI describes the unsafe condition as cracks in the elevator trim tab arms on... MCAI states: On several Grob G 115 aeroplanes, elevator trim tab arms Part Number (P/N) 115E-3758 have been found cracked, from a rear mounting hole (either L/H or R/H) to the rear edge of the trim tab arm...

  10. 78 FR 21082 - Airworthiness Directives; GROB-WERKE Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-09

    ... of proposed rulemaking (NPRM). SUMMARY: We propose to adopt a new airworthiness directive (AD) for... cable routing causing electrical shorting behind the left-hand (LH) cockpit instrument panel. We are... Operations, M-30, West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue SE., Washington, DC 20590...

  11. Change and Continuity in Librarianship: Approaching the Twenty-First Century. Proceedings of the 40th Military Librarians Workshop, 20-22 November 1996, Annapolis, Maryland,

    DTIC Science & Technology

    1996-11-01

    speakers Walt Crawford (keynote), speaking on "Millennial Librarianship," and Dr. Keith Swigger, Dean of the Graduate School of Library and Information... 1 --Richard Hume Werking; Millennial Librarianship: Maintaining the Mix and Avoiding the Hype, 2 --Walt Crawford... sustained us both through many months and many drafts. Millennial Librarianship: Maintaining the Mix and Avoiding the Hype, by Walt Crawford, Senior

  12. A methodology for the analysis of differential coexpression across the human lifespan.

    PubMed

    Gillis, Jesse; Pavlidis, Paul

    2009-09-22

    Differential coexpression is a change in coexpression between genes that may reflect 'rewiring' of transcriptional networks. It has previously been hypothesized that such changes might be occurring over time in the lifespan of an organism. While both coexpression and differential expression of genes have been previously studied in life stage change or aging, differential coexpression has not. Generalizing differential coexpression analysis to many time points presents a methodological challenge. Here we introduce a method for analyzing changes in coexpression across multiple ordered groups (e.g., over time) and extensively test its validity and usefulness. Our method is based on the use of the Haar basis set to efficiently represent changes in coexpression at multiple time scales, and thus represents a principled and generalizable extension of the idea of differential coexpression to life stage data. We used published microarray studies categorized by age to test the methodology. We validated the methodology by testing our ability to reconstruct Gene Ontology (GO) categories using our measure of differential coexpression and compared this result to using coexpression alone. Our method allows significant improvement in characterizing these groups of genes. Further, we examine the statistical properties of our measure of differential coexpression and establish that the results are significant both statistically and by an improvement in semantic similarity. In addition, we found that our method finds more significant changes in gene relationships compared to several other methods of expressing temporal relationships between genes, such as coexpression over time. Differential coexpression over age generates significant and biologically relevant information about the genes producing it. Our Haar basis methodology for determining age-related differential coexpression performs better than other tested methods. The Haar basis set also lends itself to ready interpretation
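
    A toy sketch of the Haar-basis idea (purely illustrative, assuming PyWavelets; not the published pipeline): correlate two genes within each ordered age group, then take the discrete Haar transform of the correlation-versus-age vector, so that each coefficient summarizes a change in coexpression at one time scale.

    ```python
    # Toy sketch: per-age-group correlation of two genes, followed by a discrete
    # Haar transform of the correlation profile. Expression values are synthetic.
    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    n_groups, samples_per_group = 8, 20

    corr_by_age = []
    for g in range(n_groups):
        gene_a = rng.normal(size=samples_per_group)
        coupling = 0.9 if g >= n_groups // 2 else 0.0   # coexpressed only in older groups
        gene_b = coupling * gene_a + rng.normal(size=samples_per_group)
        corr_by_age.append(np.corrcoef(gene_a, gene_b)[0, 1])

    # Coarse Haar coefficients capture sustained changes (e.g. young vs. old);
    # fine ones capture changes between adjacent age groups.
    coeffs = pywt.wavedec(np.asarray(corr_by_age), "haar")
    print([np.round(c, 2) for c in coeffs])
    ```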

  13. Sobolev-orthogonal systems of functions associated with an orthogonal system

    NASA Astrophysics Data System (ADS)

    Sharapudinov, I. I.

    2018-02-01

For every system of functions \{\varphi_k(x)\} which is orthonormal on (a,b) with weight \rho(x) and every positive integer r, we construct a new associated system of functions \{\varphi_{r,k}(x)\}_{k=0}^\infty which is orthonormal with respect to a Sobolev-type inner product of the form \langle f,g\rangle = \sum_{\nu=0}^{r-1} f^{(\nu)}(a)g^{(\nu)}(a) + \int_a^b f^{(r)}(t)g^{(r)}(t)\rho(t)\,dt. We study the convergence of Fourier series in the systems \{\varphi_{r,k}(x)\}_{k=0}^\infty. In the important particular cases of such systems generated by the Haar functions and the Chebyshev polynomials T_n(x)=\cos(n\arccos x), we obtain explicit representations for the \varphi_{r,k}(x) that can be used to study their asymptotic properties as k\to\infty and the approximation properties of Fourier sums in the system \{\varphi_{r,k}(x)\}_{k=0}^\infty. Special attention is paid to the study of approximation properties of Fourier series in systems of type \{\varphi_{r,k}(x)\}_{k=0}^\infty generated by Haar functions and Chebyshev polynomials.

  14. The Validity of Selection and Classification Procedures for Predicting Job Performance.

    DTIC Science & Technology

    1987-04-01

technical or policy issues. They communicate the results of special analyses, interim reports or phases of a task, ad hoc or quick reaction work. Papers are review ...51 I. Alternative Selection Procedures ................. 56 J. Meta-Analyses of Validities ............. 58 K. Meta-Analytic Comparisons of...Aptitude Test Battery GM General Maintenance GS General Science GVN Cognitive Ability HS&T Health, Social and Technology K Motor Coordination KFM

  15. Ubi Materia, Ibi Geometria

    DTIC Science & Technology

    2000-09-29

of the birth of new physics and astronomy, and as a contribution to obscure rhetoric in speculative quantum physics texts. In fact, not only...Copernican system has to be valid (Mysterium Cosmographicum). (One might, however, with justification doubt that the system presented by Copernicus in his...Kepleri astronomi Opera Omnia, Vol. I. Edidit Christian Frisch. Frankofurti a.M.-Erlangae, Heyder & Zimmer 1858-1871. (Johannes Kepler, Gesammelte Werke

  16. Teacher Educator Identity Emerging within a Teacher Educator Collective

    ERIC Educational Resources Information Center

    Pinnegar, Stefinee; Murphy, M. Shaun

    2011-01-01

Role theory is based in a conception of a social world wherein various roles are either available or made available and those participating in such worlds shape their identity according to the roles that are made available to them. In positioning theory, Harré and van Langenhove (1998) suggest that teachers are always in the process of…

  17. U.S. (ARRADCOM) Test Results for NATO Round-Robin Test on High Explosives

    DTIC Science & Technology

    1981-05-01

Round-Robin Test. Manufacturer (Hersteller): Fa. Dynamit Nobel AG (W-Germany), Werk Leverkusen-Schlebusch; sample symbol (Probenbezeichnung): S - 1...0,050 % 0,004 % When 0.002 N KMnO4 solution is added to an extract obtained by boiling with water, there is no decolorization within 1 h (no...Manufacturer (Hersteller): Fa. Soc. Nationale des Poudres et Explosifs (SNPE), Poudrerie de Sorgues, France; sample symbol (Probenbezeichnung:

  18. Texture Analysis of Recurrence Plots Based on Wavelets and PSO for Laryngeal Pathologies Detection.

    PubMed

    Souza, Taciana A; Vieira, Vinícius J D; Correia, Suzete E N; Costa, Silvana L N C; de A Costa, Washington C; Souza, Micael A

    2015-01-01

    This paper deals with the discrimination between healthy and pathological speech signals using recurrence plots and wavelet transform with texture features. Approximation and detail coefficients are obtained from the recurrence plots using Haar wavelet transform, considering one decomposition level. The considered laryngeal pathologies are: paralysis, Reinke's edema and nodules. Accuracy rates above 86% were obtained by means of the employed method.

  19. Performance measures for transform data coding.

    NASA Technical Reports Server (NTRS)

    Pearl, J.; Andrews, H. C.; Pratt, W. K.

    1972-01-01

    This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.

  20. Parallel and pipeline computation of fast unitary transforms

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1975-01-01

    The letter discusses the parallel and pipeline organization of fast-unitary-transform algorithms such as the fast Fourier transform, and points out the efficiency of a combined parallel-pipeline processor of a transform such as the Haar transform, in which (2 to the n-th power) -1 hardware 'butterflies' generate a transform of order 2 to the n-th power every computation cycle.

  1. A note on parallel and pipeline computation of fast unitary transforms

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1974-01-01

The parallel and pipeline organization of fast unitary transform algorithms such as the Fast Fourier Transform is discussed. The efficiency of a combined parallel-pipeline processor for a transform such as the Haar transform is pointed out, in which (2 to the nth power) minus 1 hardware butterflies generate a transform of order 2 to the nth power every computation cycle.
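
    Both abstracts rest on the fact that a full Haar transform of 2^n points needs exactly (2^n) - 1 two-input "butterfly" operations. The following minimal NumPy sketch (this editor's illustration, not code from either paper) computes the orthonormal Haar transform level by level and counts the butterflies to confirm that total.

        import numpy as np

        def fast_haar(x):
            """Level-by-level orthonormal Haar transform; len(x) must be a power of two."""
            out = np.asarray(x, dtype=float).copy()
            n = len(out)
            butterflies = 0
            while n > 1:
                half = n // 2
                a, b = out[0:n:2], out[1:n:2]
                s = (a + b) / np.sqrt(2.0)       # each butterfly produces one sum...
                d = (a - b) / np.sqrt(2.0)       # ...and one difference
                out[:half], out[half:n] = s, d
                butterflies += half
                n = half
            return out, butterflies

        x = np.random.default_rng(1).normal(size=2 ** 10)
        y, count = fast_haar(x)
        print(count == len(x) - 1)                                 # (2^n) - 1 butterflies in total
        print(np.isclose(np.linalg.norm(x), np.linalg.norm(y)))    # orthonormal: energy preserved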

  2. Discrete wavelet approach to multifractality

    NASA Astrophysics Data System (ADS)

    Isaacson, Susana I.; Gabbanelli, Susana C.; Busch, Jorge R.

    2000-12-01

The use of wavelet techniques for multifractal analysis generalizes the box counting approach, and in addition provides information on eventual deviations from multifractal behavior. By the introduction of a wavelet partition function Wq and its corresponding free energy β(q), the discrepancies between β(q) and the multifractal free energy τ(q) are shown to be indicative of these deviations. We study with Daubechies wavelets (D4) some 1D examples previously treated with Haar wavelets, and we apply the same ideas to some 2D Monte Carlo configurations that simulate a solution under the action of an attractive potential. In this last case, we study the influence on the multifractal spectra and partition functions of four physical parameters: the intensity of the pairwise potential, the temperature, the range of the model potential, and the concentration of the solution. The wavelet partition function Wq carries more information about the cluster statistics than the multifractal partition function Zq, and the location of its peaks contributes to the determination of characteristic scales of the measure. In our experience, the information provided by Daubechies wavelets is slightly more accurate than the one obtained with Haar wavelets.
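
    As a rough illustration of the wavelet partition function idea (not the authors' Daubechies/Haar comparison), the sketch below builds a random multiplicative cascade, computes a partition function Z_j(q) as the sum of |detail coefficient|^q at each dyadic scale j with PyWavelets, and reads off scaling exponents from log-log slopes. The cascade construction, the symbol Z_j(q) and all parameters are this editor's assumptions.

        import numpy as np
        import pywt

        def wavelet_partition(signal, q_values, wavelet='haar'):
            """Z_j(q) = sum_k |d_{j,k}|^q over the detail coefficients at each scale j."""
            details = pywt.wavedec(signal, wavelet)[1:]          # coarsest scale first
            return np.array([[np.sum(np.abs(d) ** q) for q in q_values] for d in details])

        # A random multiplicative (binomial-type) cascade as a simple multifractal test measure.
        rng = np.random.default_rng(2)
        measure = np.ones(1)
        for _ in range(12):
            w = rng.uniform(0.2, 0.8, size=measure.size)
            children = np.empty(2 * measure.size)
            children[0::2], children[1::2] = measure * w, measure * (1.0 - w)
            measure = children

        q_values = [0.5, 1.0, 2.0, 3.0]
        Z = wavelet_partition(measure, q_values)
        j = np.arange(1, Z.shape[0] + 1)                          # dyadic scale index
        slopes = [np.polyfit(j, np.log2(Z[:, i]), 1)[0] for i in range(len(q_values))]
        print('log2 Z(q) scaling slopes for q =', q_values, ':', np.round(slopes, 2))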

  3. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method is assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  4. Translations on Eastern Europe, Political, Sociological, and Military Affairs, Number 1435

    DTIC Science & Technology

    1977-08-23

4. K. Marx/F. Engels, "Werke" (Works), Vol 4, Berlin, 1959, p 479. 5. Cf. PRAVDA, Moscow, 9 October 1976. 6. Ibid. 45 7... live there, i.a., in the 35,000 newly built apartments. The new district is divided into three viable housing areas, with the main center being...biggest tenement houses of the world.4) In addition to new construction of housing, great attention must therefore be devoted to the

  5. Theoretical Aspects of Target Classification. Lecture Series of the Electromagnetic Wave Propagation Panel and the Consultant and Exchange Programme Held in Rome, Italy on 29-30 June 1987; Neuiberg, Germany on 2-3 July 1987 and Noresund, Norway on 6-7 July 1987.

    DTIC Science & Technology

    1987-06-01

F1116 81031 Oberpfaffenhofen, Federal Republic of Germany; Wilhelmshöher Allee 73, D-3500 Kassel, Federal Republic of Germany. CONTENTS Page LIST OF...Report, University of Kassel, 1984 (in German). 45. H. Shirai and L.B. Felsen, "Modified GTD for Generating Complex Resonances for Flat Strips and Disks...Thompr~son. litJ. t'himnti) Plenum Press, New York 1986. ACKNOWLEDGEMENTS This work has been financially supported by the Stiftung Volkswagenwerk

  6. Biological Effects of Acoustic Cavitation

    DTIC Science & Technology

    1985-06-15

call attention to four important ones: Ter Haar et al. [25-26] have irradiated live guinea pig legs with therapeutic ultrasound while examining the...eggs and larvae at various stages in their development with pulsed ultrasound. They have determined that when gas-containing trachea developed in...the organisms, they were extremely susceptible to the ultrasound and large fractions could be killed. Hemmingsen et al. [30-31] have observed bubble

  7. Excitation energy spectrum in helium II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, P.; Chan, C.K.

    1979-11-01

    We obtain the roton part of the excitation energy spectrum in He II qualitatively. We point out that the distinct difference between this calculation and that of Parry and Ter Haar is that we do not use the Born approximation in the evaluation of t-matrix elements. We found that in addition to the contribution due to the hard-core part, the attractive potential helps to form the roton dip.

  8. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  9. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    DTIC Science & Technology

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  10. Computerized scheme for vertebra detection in CT scout image

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Chen, Qiang; Zhou, Hanxun; Zhang, Guodong; Cong, Lin; Li, Qiang

    2016-03-01

Our purpose is to develop a vertebra detection scheme for automated scan planning, which would assist radiological technologists in their routine work for the imaging of vertebrae. Because the orientations of the vertebrae vary, and Haar-like features only represent the subject in the vertical, horizontal, or diagonal directions, we rotated the CT scout image seven times to make the vertebrae roughly horizontal in at least one of the rotated images. Then, we employed the Adaboost learning algorithm to construct a strong classifier for vertebra detection by use of Haar-like features, and combined the detection results in the overlapping region according to the number of times they were detected. Finally, most of the false positives were removed by use of the contextual relationship between them. The detection scheme was evaluated on a database with 76 CT scout images. Our detection scheme reported 1.65 false positives per image at a sensitivity of 94.3% for the initial detection of vertebral candidates, and the performance was improved to 0.95 false positives per image at a sensitivity of 98.6% after the further steps of false positive reduction. The proposed scheme achieved a high performance for the detection of vertebrae with different orientations.
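
    The rotate-then-detect strategy is straightforward to mock up with OpenCV's stock Haar-cascade machinery. The snippet below is a generic sketch, not the authors' vertebra detector: no vertebra cascade ships with OpenCV, so the bundled frontal-face model stands in, and the seven rotation angles and detection parameters are assumed for illustration.

        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

        def detect_with_rotations(gray, n_rotations=7, step_deg=22.5):
            """Run the cascade on several rotated copies of the image and pool the hits."""
            h, w = gray.shape
            hits = []
            for i in range(n_rotations):
                angle = i * step_deg
                M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
                rotated = cv2.warpAffine(gray, M, (w, h))
                for (x, y, bw, bh) in cascade.detectMultiScale(rotated, scaleFactor=1.1, minNeighbors=3):
                    hits.append((angle, int(x), int(y), int(bw), int(bh)))
            return hits

        img = cv2.imread('scout.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input file
        if img is not None:
            print(len(detect_with_rotations(img)), 'raw candidate detections before merging')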

  11. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
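
    For orientation, the heart of any Haar-Fisz-type method is simple: take Haar differences, rescale each one by the square root of the local mean, and invert. The sketch below is a plain, non-data-driven, Poisson-flavoured version of that idea written from the general description above; the data-driven variance estimation that defines DDHFm is not reproduced here, and the input length is assumed to be a power of two.

        import numpy as np

        def haar_fisz(v):
            """Simplified Haar-Fisz-style variance stabilizer for non-negative counts."""
            v = np.asarray(v, dtype=float)
            J = int(np.log2(len(v)))
            s, d = [None] * (J + 1), [None] * (J + 1)
            s[J] = v.copy()
            for j in range(J - 1, -1, -1):                 # analysis: local means and differences
                s[j] = 0.5 * (s[j + 1][0::2] + s[j + 1][1::2])
                d[j] = 0.5 * (s[j + 1][0::2] - s[j + 1][1::2])
            u = s[0].copy()
            for j in range(J):                             # synthesis with rescaled details
                f = np.divide(d[j], np.sqrt(s[j]), out=np.zeros_like(d[j]), where=s[j] > 0)
                nxt = np.empty(2 * len(u))
                nxt[0::2], nxt[1::2] = u + f, u - f
                u = nxt
            return u

        counts = np.random.default_rng(3).poisson(lam=np.linspace(2, 40, 64))
        print(np.round(haar_fisz(counts)[:8], 2))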

  12. Fractal properties and denoising of lidar signals from cirrus clouds

    NASA Astrophysics Data System (ADS)

    van den Heuvel, J. C.; Driesenaar, M. L.; Lerou, R. J. L.

    2000-02-01

    Airborne lidar signals of cirrus clouds are analyzed to determine the cloud structure. Climate modeling and numerical weather prediction benefit from accurate modeling of cirrus clouds. Airborne lidar measurements of the European Lidar in Space Technology Experiment (ELITE) campaign were analyzed by combining shots to obtain the backscatter at constant altitude. The signal at high altitude was analyzed for horizontal structure of cirrus clouds. The power spectrum and the structure function show straight lines on a double logarithmic plot. This behavior is characteristic for a Brownian fractal. Wavelet analysis using the Haar wavelet confirms the fractal aspects. It is shown that the horizontal structure of cirrus can be described by a fractal with a dimension of 1.8 over length scales that vary 4 orders of magnitude. We use the fractal properties in a new denoising method. Denoising is required for future lidar measurements from space that have a low signal to noise ratio. Our wavelet denoising is based on the Haar wavelet and uses the statistical fractal properties of cirrus clouds in a method based on the maximum a posteriori (MAP) probability. This denoising based on wavelets is tested on airborne lidar signals from ELITE using added Gaussian noise. Superior results with respect to averaging are obtained.

  13. User Manual PIRATE Model, Release Beta 1.0. (Handleiding PIRATE model, versie Beta 1.0)

    DTIC Science & Technology

    1998-01-01

Limitations of PIRATE 40 9. Conclusions and recommendations 41 10. References 42 11. Signature 43 TNO report FEL-97-A285 Introduction In maritime...Contents Introduction 7 1.1 Purpose of PIRATE 7 1.2 How PIRATE works 8 1.3 Structure of the manual 8 2. Operating PIRATE 9 2.1 System requirements 10 2.2 Installation 10 2.3 Operating the user interface 11 2.4 Using the figures in other programs 15 3. Radars, radars in PIRATE

  14. A Summary of the Naval Postgraduate School Research Program

    DTIC Science & Technology

    1988-08-30

Tech. (accepted). F. P. Kelly, C.-F. Shih, D. L. Reinke, and T. H. Vonder Haar, "Metric Statistical Comparison of Objective Cloud Detectors," Er...February 5, 1988, Anaheim, CA, American Meteorological Society, Boston, MA. 211 Publications: C.-F. Shih, M. Wentzel, and T. H. Vonder Haar, (cont... Shih, "Estimation of Meteorological Parameters Over Mesoscale Regions from Satellite and In Situ Data." Preprints, Third Conference on Satellite

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.

    The classical Nadaraya-Watson estimator is shown to solve a generic sensor fusion problem where the underlying sensor error densities are not known but a sample is available. By employing Haar kernels this estimator is shown to yield finite sample guarantees and also to be efficiently computable. Two simulation examples, and a robotics example involving the detection of a door using arrays of ultrasonic and infrared sensors, are presented to illustrate the performance.
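
    A Nadaraya-Watson estimator with a Haar (box) kernel is just a windowed local average, which is part of why it is cheap to compute. The sketch below is a generic one-dimensional illustration with synthetic data, not the sensor-fusion setup of the report; the bandwidth and sample sizes are arbitrary.

        import numpy as np

        def nw_haar(x_query, x_sample, y_sample, bandwidth):
            """Nadaraya-Watson estimate with a box kernel: average the y whose x is within +/- bandwidth."""
            x_query = np.atleast_1d(np.asarray(x_query, dtype=float))
            est = np.empty_like(x_query)
            for i, x0 in enumerate(x_query):
                w = (np.abs(x_sample - x0) <= bandwidth).astype(float)
                est[i] = np.dot(w, y_sample) / w.sum() if w.sum() > 0 else np.nan
            return est

        rng = np.random.default_rng(4)
        x = np.sort(rng.uniform(0, 1, 200))                      # hypothetical noisy sensor readings
        y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=200)
        print(np.round(nw_haar([0.25, 0.5, 0.75], x, y, bandwidth=0.05), 2))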

  16. Realization of Combined Diagnosis/Treatment System By Ultrasound Strain Measurement-Based Shear Modulus Reconstruction/Imaging Technique Examples With Application on The New Type Interstitial RF Electromagnetic Wave Thermal Therapy

    DTIC Science & Technology

    2001-10-25

Righetti, J. Ophir, and J. Hazle, "The feasibility of elastographic visualization of HIFU-induced thermal lesions in soft tissues," Ultrasound in Med...Review article: High intensity focused ultrasound - potential for cancer treatment," Br. J. Radiol., vol. 68, pp. 1296-1303, 1995. [17] Watkin NA, G. R. Ter Haar, S. B. Morris, C. R. J. Woodhouse, "The urological applications of focused ultrasound surgery," Br. J. Urol., vol. 75 (suppl. 1), pp

  17. Robust Face Detection from Still Images

    DTIC Science & Technology

    2014-01-01

    significant change in false acceptance rates. Keywords— face detection; illumination; skin color variation; Haar-like features; OpenCV I. INTRODUCTION... OpenCV and an algorithm which used histogram equalization. The test is performed against 17 subjects under 576 viewing conditions from the extended Yale...original OpenCV algorithm proved the least accurate, having a hit rate of only 75.6%. It also had the lowest FAR but only by a slight margin at 25.2

  18. High Performance Computing for Medical Image Interpretation

    DTIC Science & Technology

    1993-10-01

programme for some key companies in the health care industries (Dumay et al., 1993). With the modules developed for the "Smart Surgeon" an anatomical model...Interpretation System" (HIPMI 2S) gives FEL-TNO the opportunity to offer its expertise to the Dutch and European civilian medical industry ...Spin-off 23 3 HIGH PERFORMANCE COMPUTING 24 3.1 Introduction 24 3.2 Parallel processing 24 3.3 Artificial Neural Networks 25 3.4 European Industry

  19. Automated real time peg and tool detection for the FLS trainer box.

    PubMed

    Nemani, Arun; Sankaranarayanan, Ganesh

    2012-01-01

    This study proposes a method that effectively tracks trocar tool and peg positions in real time to allow real time assessment of the peg transfer task of the Fundamentals of Laparoscopic Surgery (FLS). By utilizing custom code along with OpenCV libraries, tool and peg positions can be accurately tracked without altering the original setup conditions of the FLS trainer box. This is achieved via a series of image filtration sequences, thresholding functions, and Haar training methods.

  20. Major East German plants enter EEC refining network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aalund, L.R.

    1990-12-24

    The reunification of Germany has brought some 22 million tons/year or 435,000 b/d of crude oil processing capacity into the European Economic Community from the former German Democratic Republic or East Germany. Most of this - 16.2 million tons or 323,000 b/d - comes from two refineries, PCK AG Schwedt and Leuna-Werke AG. Both have entered a period that will test their survival, at least as independent enterprises in their present configurations. PCK has 218,000 b/d of capacity with a mixture of western and East German technology.

  1. Locally Compact Quantum Groups. A von Neumann Algebra Approach

    NASA Astrophysics Data System (ADS)

    Van Daele, Alfons

    2014-08-01

In this paper, we give an alternative approach to the theory of locally compact quantum groups, as developed by Kustermans and Vaes. We start with a von Neumann algebra and a comultiplication on this von Neumann algebra. We assume that there exist faithful left and right Haar weights. Then we develop the theory within this von Neumann algebra setting. In [Math. Scand. 92 (2003), 68-92] locally compact quantum groups are also studied in the von Neumann algebraic context. This approach is independent of the original C^*-algebraic approach in the sense that the earlier results are not used. However, this paper is not really independent because for many proofs, the reader is referred to the original paper where the C^*-version is developed. In this paper, we give a completely self-contained approach. Moreover, at various points, we do things differently. We have a different treatment of the antipode. It is similar to the original treatment in [Ann. Sci. École Norm. Sup. (4) 33 (2000), 837-934]. But together with the fact that we work in the von Neumann algebra framework, it allows us to use an idea from [Rev. Roumaine Math. Pures Appl. 21 (1976), 1411-1449] to obtain the uniqueness of the Haar weights in an early stage. We take advantage of this fact when deriving the other main results in the theory. We also give a slightly different approach to duality. Finally, we collect, in a systematic way, several important formulas. In an appendix, we indicate very briefly how the C^*-approach and the von Neumann algebra approach eventually yield the same objects. The passage from the von Neumann algebra setting to the C^*-algebra setting is more or less standard. For the other direction, we use a new method. It is based on the observation that the Haar weights on the C^*-algebra extend to weights on the double dual with central support and that all these supports are the same. Of course, we get the von Neumann algebra by cutting down the double dual with this unique

  2. Measurement of entanglement entropy in the two-dimensional Potts model using wavelet analysis.

    PubMed

    Tomita, Yusuke

    2018-05-01

    A method is introduced to measure the entanglement entropy using a wavelet analysis. Using this method, the two-dimensional Haar wavelet transform of a configuration of Fortuin-Kasteleyn (FK) clusters is performed. The configuration represents a direct snapshot of spin-spin correlations since spin degrees of freedom are traced out in FK representation. A snapshot of FK clusters loses image information at each coarse-graining process by the wavelet transform. It is shown that the loss of image information measures the entanglement entropy in the Potts model.
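
    A minimal way to see the "information lost per coarse-graining step" idea is to Haar-transform a binary snapshot and look at how much of its energy sits in the discarded detail sub-bands. The sketch below does that with PyWavelets on a fabricated correlated binary image; it is an energy-based stand-in of this editor's making, not the entanglement-entropy estimator of the paper.

        import numpy as np
        import pywt

        rng = np.random.default_rng(5)
        # Hypothetical stand-in for a Fortuin-Kasteleyn cluster snapshot: a correlated
        # binary image produced by thresholding circularly smoothed noise.
        field = rng.normal(size=(64, 64))
        kernel = np.ones((4, 4)) / 16.0
        smooth = np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel, s=field.shape)))
        snapshot = (smooth > 0).astype(float)

        # One coarse-graining step: keep only the approximation, discard the details.
        cA, (cH, cV, cD) = pywt.dwt2(snapshot, 'haar')
        detail_energy = sum(np.sum(c ** 2) for c in (cH, cV, cD))
        total_energy = detail_energy + np.sum(cA ** 2)
        print('fraction of image energy lost at this level: %.3f' % (detail_energy / total_energy))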

  3. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients--the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimizes the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  4. The brain MRI classification problem from wavelets perspective

    NASA Astrophysics Data System (ADS)

    Bendib, Mohamed M.; Merouani, Hayet F.; Diaba, Fatma

    2015-02-01

    Haar and Daubechies 4 (DB4) are the most used wavelets for brain MRI (Magnetic Resonance Imaging) classification. The former is simple and fast to compute while the latter is more complex and offers a better resolution. This paper explores the potential of both of them in performing Normal versus Pathological discrimination on the one hand, and Multiclassification on the other hand. The Whole Brain Atlas is used as a validation database, and the Random Forest (RF) algorithm is employed as a learning approach. The achieved results are discussed and statistically compared.
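
    A bare-bones version of the comparison described above (wavelet sub-band energies fed to a Random Forest) can be put together with PyWavelets and scikit-learn. Everything below is synthetic and illustrative: the "images", the feature choice and the parameters are this editor's assumptions, not the Whole Brain Atlas pipeline.

        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def wavelet_features(image, wavelet='haar', level=2):
            """Mean energy of each sub-band of a 2-level 2D DWT, used as a compact feature vector."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            feats = [np.mean(coeffs[0] ** 2)]
            for detail_level in coeffs[1:]:
                feats.extend(np.mean(band ** 2) for band in detail_level)
            return np.array(feats)

        # Fabricated data: "normal" images are smooth, "pathological" ones carry extra texture.
        rng = np.random.default_rng(6)
        X, y = [], []
        for label in (0, 1):
            for _ in range(40):
                img = rng.normal(size=(32, 32))
                img = np.cumsum(np.cumsum(img, axis=0), axis=1) / 32.0   # smooth background
                if label == 1:
                    img += 0.5 * rng.normal(size=(32, 32))               # high-frequency "pathology"
                X.append(wavelet_features(img))
                y.append(label)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print('CV accuracy:', cross_val_score(clf, np.array(X), y, cv=5).mean())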

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.

Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.

  6. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  7. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
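
    "Drawing the basis of the Hamiltonian from the Haar measure" amounts to sampling a Haar-random unitary, which in practice is done with a QR decomposition of a complex Gaussian matrix (Mezzadri's recipe). The sketch below shows that standard construction; it is background for the two records above, not code from them.

        import numpy as np

        def haar_unitary(n, rng=None):
            """Sample an n x n unitary from the Haar measure via QR of a complex Ginibre matrix."""
            rng = np.random.default_rng() if rng is None else rng
            z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
            q, r = np.linalg.qr(z)
            phases = np.diag(r) / np.abs(np.diag(r))
            return q * phases                     # fix column phases so the law is exactly Haar

        U = haar_unitary(4, np.random.default_rng(7))
        print(np.allclose(U.conj().T @ U, np.eye(4)))   # unitarity check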

  8. Automated detection of new impact sites on Martian surface from HiRISE images

    NASA Astrophysics Data System (ADS)

    Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu

    2017-10-01

In this study, an automated method for Martian new impact site detection from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within dark areas using a cascade classifier which combines local binary pattern features and Haar-like features trained by an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in the flat regions are 93.0% and 91.5%, respectively.

  9. An efficient numerical scheme for the study of equal width equation

    NASA Astrophysics Data System (ADS)

    Ghafoor, Abdul; Haq, Sirajul

    2018-06-01

    In this work a new numerical scheme is proposed in which Haar wavelet method is coupled with finite difference scheme for the solution of a nonlinear partial differential equation. The scheme transforms the partial differential equation to a system of algebraic equations which can be solved easily. The technique is applied to equal width equation in order to study the behaviour of one, two, three solitary waves, undular bore and soliton collision. For efficiency and accuracy of the scheme, L2 and L∞ norms and invariants are computed. The results obtained are compared with already existing results in literature.

  10. Chaos and random matrices in supersymmetric SYK

    NASA Astrophysics Data System (ADS)

    Hunter-Jones, Nicholas; Liu, Junyu

    2018-05-01

    We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.

  11. Forbidden regimes in the distribution of bipartite quantum correlations due to multiparty entanglement

    NASA Astrophysics Data System (ADS)

    Kumar, Asutosh; Dhar, Himadri Shekhar; Prabhu, R.; Sen(De), Aditi; Sen, Ujjwal

    2017-05-01

    Monogamy is a nonclassical property that limits the distribution of quantum correlation among subparts of a multiparty system. We show that monogamy scores for different quantum correlation measures are bounded above by functions of genuine multipartite entanglement for a large majority of pure multiqubit states. The bound is universal for all three-qubit pure states. We derive necessary conditions to characterize the states that violate the bound, which can also be observed by numerical simulation for a small set of states, generated Haar uniformly. The results indicate that genuine multipartite entanglement restricts the distribution of bipartite quantum correlations in a multiparty system.
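
    Generating states "Haar uniformly", as in the numerical check mentioned above, only requires normalizing a vector of i.i.d. complex Gaussians. The sketch below samples Haar-random three-qubit pure states and computes a bipartite entanglement entropy from the Schmidt coefficients; it is a generic illustration of this editor's choosing, not the monogamy-score computation of the paper.

        import numpy as np

        def haar_random_pure_state(dim, rng):
            """A Haar-uniform pure state: a normalized vector of i.i.d. complex Gaussians."""
            v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
            return v / np.linalg.norm(v)

        def entanglement_entropy(psi, dim_a, dim_b):
            """Von Neumann entropy (in bits) of the reduced state of subsystem A."""
            s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
            p = s ** 2
            p = p[p > 1e-12]
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(8)
        samples = [entanglement_entropy(haar_random_pure_state(8, rng), 2, 4) for _ in range(1000)]
        print('mean A:BC entanglement over Haar-random 3-qubit states: %.3f bits' % np.mean(samples))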

  12. Enhancing the performance of cooperative face detector by NFGS

    NASA Astrophysics Data System (ADS)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization and speed has been an active area of research for the last decade. Haar based face detection and SIFT or EBGM based face recognition systems are fairly reliable in this regard, but their features are extracted in terms of gray textures. When the input is a high resolution online video with a fairly large viewing area, a Haar detector needs to search for faces everywhere (say 352×250 pixels) and all the time (e.g., at a 30 FPS capture rate). In the current paper we propose to address both of the aforementioned issues by a neuro-visually inspired method of figure-ground segregation (NFGS) [5], which produces a two-dimensional binary array from the gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of environment such as illumination. The proposed algorithm triggers the face detector only when a new entity appears in the viewing area. To address detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (False Rejection) and FA (False Acceptance) rates of the face detection system.

  13. Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles

    PubMed Central

    Gökçe, Fatih; Üçoluk, Göktürk; Şahin, Erol; Kalkan, Sinan

    2015-01-01

    Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense and avoid purposes on mUAVs or on other aerial vehicles and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing earlier simple stages are processed at the preceding more complex stages. We also integrate a distance estimation method with our system utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that are collected in a systematic way and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032×778 resolution) and 150 ms outdoors (1280×720 resolution) per frame, with a detection rate of 0.96 F-score. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis yields that the cascaded classifiers using HOG train and run faster than the other algorithms. PMID:26393599

  14. Option pricing from wavelet-filtered financial series

    NASA Astrophysics Data System (ADS)

    de Almeida, V. T. X.; Moriconi, L.

    2012-10-01

    We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
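
    The compression step described above (keep only the coarsest Haar coefficients of a long series and zero the rest) is a few lines with PyWavelets. The sketch below applies it to a fabricated heavy-tailed log-price path rather than FTSE100 data, and simply reports how few coefficients survive; the number of retained levels is an arbitrary choice.

        import numpy as np
        import pywt

        rng = np.random.default_rng(9)
        log_price = np.cumsum(0.001 * rng.standard_t(df=3, size=4096))   # hypothetical heavy-tailed returns

        # Full-depth Haar decomposition, then keep only the few coarsest coefficients.
        coeffs = pywt.wavedec(log_price, 'haar')
        kept = 0
        for i, c in enumerate(coeffs):
            if i > 2:                      # drop everything finer than the two coarsest detail levels
                coeffs[i] = np.zeros_like(c)
            else:
                kept += len(c)
        filtered = pywt.waverec(coeffs, 'haar')
        print('coefficients kept: %d of %d (%.2f%%)' % (kept, len(log_price), 100.0 * kept / len(log_price)))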

  15. Invariant measures on multimode quantum Gaussian states

    NASA Astrophysics Data System (ADS)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-12-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  16. Ship detection in panchromatic images: a new method and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Wang, Mengfei; Meng, Gang

    2016-03-01

In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. Firstly, AdaBoost (Adaptive Boosting) classifiers trained with Haar features are utilized for coarse detection of ship targets. Then LSD (Line Segment Detector) is adopted to extract the line features in target slices for fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2 m show that the proposed algorithm can achieve a high detection rate and a low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on a DSP (Digital Signal Processor).

  17. Bayesian X-ray computed tomography using a three-level hierarchical prior model

    NASA Astrophysics Data System (ADS)

    Wang, Li; Mohammad-Djafari, Ali; Gac, Nicolas

    2017-06-01

In recent decades, X-ray Computed Tomography (CT) image reconstruction has been extensively developed in both the medical and industrial domains. In this paper, we propose using the Bayesian inference approach with a new hierarchical prior model. In the proposed model, a generalised Student-t distribution is used to enforce the Haar transformation of images to be sparse. Comparisons with some state-of-the-art methods are presented. It is shown that by using the proposed model, the sparsity of the sparse representation of images is enforced, so that the edges of images are preserved. Simulation results are also provided to demonstrate the effectiveness of the new hierarchical model for reconstruction with fewer projections.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.B.; Clothiaux, E.

Because of Earth's gravitational field, its atmosphere is strongly anisotropic with respect to the vertical; the effect of the Earth's rotation on synoptic wind patterns also causes a more subtle form of anisotropy in the horizontal plane. The authors survey various approaches to statistically robust anisotropy from a wavelet perspective and present a new one adapted to strongly non-isotropic fields that are sampled on a rectangular grid with a large aspect ratio. This novel technique uses an anisotropic version of Multi-Resolution Analysis (MRA) in image analysis; the authors form a tensor product of the standard dyadic Haar basis, where the dividing ratio is λ_z = 2, and a nonstandard triadic counterpart, where the dividing ratio is λ_x = 3. The natural support of the field is therefore 2^n pixels (vertically) by 3^n pixels (horizontally), where n is the number of levels in the MRA. The natural triadic basis includes the French top-hat wavelet, which resonates with bumps in the field, whereas the Haar wavelet responds to ramps or steps. The complete 2D basis has one scaling function and five wavelets. The resulting anisotropic MRA is designed for application to the liquid water content (LWC) field in boundary-layer clouds, as the prevailing wind advects them by a vertically pointing mm-radar system. Spatial correlations are notoriously long-range in cloud structure, and the authors use the wavelet coefficients from the new MRA to characterize these correlations in a multifractal analysis scheme. In the present study, the MRA is used (in synthesis mode) to generate fields that mimic cloud structure quite realistically, although only a few parameters are used to control the randomness of the LWC's wavelet coefficients.

  19. Eye/head tracking technology to improve HCI with iPad applications.

    PubMed

    Lopez-Basterretxea, Asier; Mendez-Zorrilla, Amaia; Garcia-Zapirain, Begoña

    2015-01-22

In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this functional nature/capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. There are many techniques currently used to detect facial features, such as eyes or even the face itself. Open source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms to be applied, such as Haar Cascade using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of limited resources (processing capacity) of devices, such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests are performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar Cascade had a significant effect by detecting faces in 100% of cases, unlike eyes and the pupil where interference (light and shade) evidenced less effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging and these systems may continue to be developed if extended and updated in the future.

  20. Eye/Head Tracking Technology to Improve HCI with iPad Applications

    PubMed Central

    Lopez-Basterretxea, Asier; Mendez-Zorrilla, Amaia; Garcia-Zapirain, Begoña

    2015-01-01

In order to improve human computer interaction (HCI) for people with special needs, this paper presents an alternative form of interaction, which uses the iPad's front camera and eye/head tracking technology. With this functional nature/capability operating in the background, the user can control already developed or new applications for the iPad by moving their eyes and/or head. There are many techniques currently used to detect facial features, such as eyes or even the face itself. Open source libraries exist for this purpose, such as OpenCV, which enable very reliable and accurate detection algorithms to be applied, such as Haar Cascade using very high-level programming. All processing is undertaken in real time, and it is therefore important to pay close attention to the use of limited resources (processing capacity) of devices, such as the iPad. The system was validated in tests involving 22 users of different ages and characteristics (people with dark and light-colored eyes and with/without glasses). These tests are performed to assess user/device interaction and to ascertain whether it works properly. The system obtained an accuracy of between 60% and 100% in the three test exercises taken into consideration. The results showed that the Haar Cascade had a significant effect by detecting faces in 100% of cases, unlike eyes and the pupil where interference (light and shade) evidenced less effectiveness. In addition to ascertaining the effectiveness of the system via these exercises, the demo application has also helped to show that user constraints need not affect the enjoyment and use of a particular type of technology. In short, the results obtained are encouraging and these systems may continue to be developed if extended and updated in the future. PMID:25621603
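
    The detection stage described in both records follows the standard OpenCV Haar-cascade pattern, sketched below for a desktop webcam (the iPad/iOS capture pipeline is platform specific and is not reproduced). The cascade file names are the ones bundled with opencv-python; camera index 0 and the detection parameters are assumptions.

        import cv2

        face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
        eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

        cap = cv2.VideoCapture(0)                          # front camera; index 0 is an assumption
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
                roi = gray[y:y + h, x:x + w]               # search for eyes only inside the face
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                    cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
            cv2.imshow('tracking', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
        cv2.destroyAllWindows()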

  1. Short-term data forecasting based on wavelet transformation and chaos theory

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Cunbin; Zhang, Liang

    2017-09-01

A sketch of wavelet transformation and its application was given. Given the characteristics of the time series, the Haar wavelet was used for data reduction. After this processing, the effect of “data nail” on forecasting was reduced. Chaos theory was also introduced, and a new chaotic time series forecasting flow based on wavelet transformation was proposed. The largest Lyapunov exponent, estimated from small data sets, was larger than zero, which verified that the data still behaved chaotically. Based on this, chaotic time series methods could be used to forecast short-term behavior. Finally, an example analysis of prices from a real electricity market showed that the forecasting method increased forecasting precision effectively and steadily.

  2. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on a Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.

  3. UV Spectrophotometric Simultaneous Determination of Paracetamol and Ibuprofen in Combined Tablets by Derivative and Wavelet Transforms

    PubMed Central

    Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Minh Thi Nguyen, Hue

    2014-01-01

    The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12–32 mg/L) and paracetamol (20–40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy. PMID:24949492

  4. UV spectrophotometric simultaneous determination of paracetamol and ibuprofen in combined tablets by derivative and wavelet transforms.

    PubMed

    Hoang, Vu Dang; Ly, Dong Thi Ha; Tho, Nguyen Huu; Nguyen, Hue Minh Thi

    2014-01-01

    The application of first-order derivative and wavelet transforms to UV spectra and ratio spectra was proposed for the simultaneous determination of ibuprofen and paracetamol in their combined tablets. A new hybrid approach on the combined use of first-order derivative and wavelet transforms to spectra was also discussed. In this application, DWT (sym6 and haar), CWT (mexh), and FWT were optimized to give the highest spectral recoveries. Calibration graphs in the linear concentration ranges of ibuprofen (12-32 mg/L) and paracetamol (20-40 mg/L) were obtained by measuring the amplitudes of the transformed signals. Our proposed spectrophotometric methods were statistically compared to HPLC in terms of precision and accuracy.

  5. Measurements of the PVT properties of water to 25 kbars and 1600°C from synthetic fluid inclusions in corundum

    NASA Astrophysics Data System (ADS)

    Brodholt, John P.; Wood, Bernard J.

    1994-05-01

We have performed experiments to determine the density of water at pressures and temperatures of 9.5 to 25 kbar and 930 to 1600°C, respectively. The experimental method involved growing inclusions of fluid (ρ < 1.0 gm/cm3) in synthetic precracked corundum. Density was determined by measuring the homogenization temperature along the liquid-vapour equilibrium curve. Comparison with the many current equations of state for water indicates that, in the P-V-T range of the experiments, the equations of KERRICK and JACOBS (1981) and BRODHOLT and WOOD (1993) provide the best fits. In contrast, the steam tables of HAAR et al. (1984) systematically underestimate the molar volumes at high temperatures.

  6. Finger tips detection for two handed gesture recognition

    NASA Astrophysics Data System (ADS)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. At first, the face is removed from the image by using a Haar classifier and subsequently, the regions corresponding to the gesturing hands are isolated by a region labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications like sign language recognition and human-robot interaction.

  7. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13

  8. Cumulative Effective Hölder Exponent Based Indicator for Real-Time Fetal Heartbeat Analysis during Labour

    NASA Astrophysics Data System (ADS)

    Struzik, Zbigniew R.; van Wijngaarden, Willem J.

We introduce a special purpose cumulative indicator, capturing in real time the cumulative deviation from the reference level of the exponent h (local roughness, Hölder exponent) of the fetal heartbeat during labour. We verify that the indicator applied to the variability component of the heartbeat coincides with the fetal outcome as determined by blood samples. The variability component is obtained from running real time decomposition of fetal heartbeat into independent components using an adaptation of an oversampled Haar wavelet transform. The particular filters used and resolutions applied are motivated by obstetrical insight/practice. The methodology described has the potential for real-time monitoring of the fetus during labour and for the prediction of the fetal outcome, alerting the attending staff in the case of (threatening) hypoxia.

  9. Compressed multi-block local binary pattern for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

Both robustness and real-time performance are very important for the application of object tracking in a real environment. Trackers based on deep learning have difficulty satisfying the real-time requirement of tracking, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector is extracted based on the multi-block local binary pattern feature and compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
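
    The compression step itself (projecting a long appearance-feature vector through a sparse random Gaussian measurement matrix) is tiny to write down. The sketch below is a generic version with made-up dimensions and sparsity, not the tracker's actual feature extraction.

        import numpy as np

        rng = np.random.default_rng(10)
        feature_dim, compressed_dim = 4096, 64          # hypothetical sizes

        # Sparse random Gaussian measurement matrix: most entries are zero, the rest Gaussian.
        mask = rng.random((compressed_dim, feature_dim)) < 0.05
        phi = np.where(mask, rng.normal(size=(compressed_dim, feature_dim)), 0.0)

        def compress(feature_vector):
            """Project a high-dimensional appearance feature down to a short measurement vector."""
            return phi @ feature_vector

        x = rng.normal(size=feature_dim)                # stand-in for an MB-LBP feature vector
        print(compress(x).shape)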

  10. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and then face detection is implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in low frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with different types of histogram based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm by using the publicly available ORL database and facial images captured by an Android tablet.

  11. Structural health monitoring approach for detecting ice accretion on bridge cable using the Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Andre, Julia; Kiremidjian, Anne; Liao, Yizheng; Georgakis, Christos; Rajagopal, Ram

    2016-04-01

Ice accretion on the cables of bridge structures poses serious risk to the structure as well as to vehicular traffic when the ice falls onto the road. Detection of ice formation, quantification of the amount of ice accumulated, and prediction of icefalls will increase the safety and serviceability of the structure. In this paper, an ice accretion detection algorithm is presented based on the Continuous Wavelet Transform (CWT). In the proposed algorithm, the acceleration signals obtained from bridge cables are transformed using the wavelet method. The damage sensitive features (DSFs) are defined as a function of the wavelet energy at specific wavelet scales. It is found that as ice accretes on the cables, the mass of the cable increases, thus changing the wavelet energies. Hence, the DSFs can be used to track the change in cable mass. To validate the proposed algorithm, we use the data collected from a laboratory experiment conducted at the Technical University of Denmark (DTU). In this experiment, a cable was placed in a wind tunnel while ice volume grew progressively. Several accelerometers were installed at various locations along the test cable to collect vibration signals.
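
    The damage-sensitive feature described above is essentially a vector of wavelet energies per scale, compared against a baseline record. The sketch below uses a discrete Haar decomposition from PyWavelets as a simplified stand-in for the paper's continuous-transform energies; the signals, the added-mass frequency shift and the log-ratio feature are fabricated for illustration only.

        import numpy as np
        import pywt

        def scale_energies(signal, wavelet='haar', level=5):
            """Energy of the wavelet detail coefficients at each scale of an acceleration record."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs[1:]])   # skip the approximation band

        rng = np.random.default_rng(11)
        t = np.linspace(0, 10, 4096)
        baseline = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.normal(size=t.size)   # bare cable
        iced = np.sin(2 * np.pi * 2.6 * t) + 0.1 * rng.normal(size=t.size)       # added mass lowers the frequency
        dsf = np.log(scale_energies(iced) / scale_energies(baseline))            # hypothetical damage-sensitive feature
        print(np.round(dsf, 2))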

  12. [Summary: from "psychical treatment" to psychoanalysis. Remarks on the misdating of an early text of Freud and publication of a previously unnoticed addition to it].

    PubMed

    Fichtner, Gerhard

    2007-01-01

    Freud's early article, "Psychical (or mental) treatment," first appeared in a health textbook for educated lay people. It was included in his Gesammelte Werke with a publication date of 1905. Subsequently, this date was questioned because the text dealt mainly with hypnosis and suggestion, so James Strachey, among others, erroneously changed it to 1890. This error is corrected in the present paper. Until now, no one had noticed that a second edition of the textbook, which appeared in 1918-19, contained an amended version of Freud's original article to which he added a summary of psychoanalytic theory and practice. The first edition was published in 1905-06; Freud's contribution, however, must have been written at a much earlier date. Its presumed date of composition is discussed. Freud's addition to the original text is reprinted in an appendix for the first time.

  13. A method for compression of intra-cortically-recorded neural signals dedicated to implantable brain-machine interfaces.

    PubMed

    Shaeri, Mohammad Ali; Sodagar, Amir M

    2015-05-01

    This paper proposes an efficient data compression technique dedicated to implantable intra-cortical neural recording devices. The proposed technique benefits from processing neural signals in the Discrete Haar Wavelet Transform space, a new spike extraction approach, and a novel data framing scheme to telemeter the recorded neural information to the outside world. Based on the proposed technique, a 64-channel neural signal processor was designed and prototyped as a part of a wireless implantable extra-cellular neural recording microsystem. Designed in a 0.13-μm standard CMOS process, the 64-channel neural signal processor reported in this paper occupies ∼0.206 mm² of silicon area, and consumes 94.18 μW when operating under a 1.2-V supply voltage at a master clock frequency of 1.28 MHz.
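
    The framing scheme is specific to the chip, but the underlying idea of Haar-domain compression can be sketched on a host computer as follows; the keep ratio, decomposition level, and test signal are assumptions, not the paper's parameters.

      import numpy as np
      import pywt

      def haar_compress(signal, keep_ratio=0.1, level=4):
          """Haar wavelet transform, keep only the largest coefficients, zero the rest, reconstruct."""
          coeffs = pywt.wavedec(signal, "haar", level=level)
          flat, slices = pywt.coeffs_to_array(coeffs)
          k = max(1, int(keep_ratio * flat.size))
          threshold = np.sort(np.abs(flat))[-k]
          flat[np.abs(flat) < threshold] = 0.0
          return pywt.waverec(pywt.array_to_coeffs(flat, slices, output_format="wavedec"), "haar")

      x = np.sin(np.linspace(0, 20, 1024)) + 0.1 * np.random.default_rng(1).standard_normal(1024)
      x_hat = haar_compress(x, keep_ratio=0.1)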

  14. Identical synchronization of chaotic secure communication systems with channel induced coherence resonance

    NASA Astrophysics Data System (ADS)

    Sepantaie, Marc M.; Namazi, Nader M.; Sepantaie, Amir M.

    2016-05-01

    This paper is devoted to the synchronization and detection of random binary data exposed to the inherent channel variations of Free Space Optical (FSO) communication systems. This task is achieved by utilizing the identical synchronization methodology of the Lorenz chaotic communication system and its synergetic interaction with the adversities imposed by the FSO channel. Moreover, the Lorenz system has been analyzed and shown to induce Stochastic Resonance (SR) once exposed to Additive White Gaussian Noise (AWGN). In particular, the resiliency of the Lorenz chaotic system in the face of channel adversities is credited for the success of the proposed communication system. Furthermore, this paper advocates the use of the Haar wavelet transform for enhanced detection capability of the proposed chaotic communication system, which utilizes the Chaotic Parameter Modulation (CPM) technique as the means of transmission.

  15. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian blob detector and Haar response calculation, which are the most computationally intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network with dedicated hardware for SURF descriptors, the architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and up to 118 fps for VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it can easily be migrated to a VLSI implementation.
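
    The Haar response calculation that the cellular network accelerates can be illustrated in software with an integral image (summed-area table); the window position and size below are arbitrary, and only the x-direction response is shown.

      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero row/column, so any box sum needs only four lookups."""
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
          ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
          return ii

      def box_sum(ii, r, c, h, w):
          return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

      def haar_response_x(ii, r, c, size):
          """SURF-style Haar response in x: right half minus left half of a size x size window."""
          half = size // 2
          return box_sum(ii, r, c + half, size, half) - box_sum(ii, r, c, size, half)

      img = np.random.default_rng(0).random((480, 640))
      ii = integral_image(img)
      resp = haar_response_x(ii, r=100, c=200, size=8)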

  16. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

    For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2, and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks Time-of-Flight (TOF). The non-commercial systems include a compact double focusing sector (CDFMS) developed at the University of Minnesota, and a quadrupole ion trap (UF-IT) developed at the University of Florida.

  17. 75 FR 31734 - Airworthiness Directives; GROB-WERKE (Type Certificate Previously Held by BURKHART GROB Luft- und...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-04

    ... manufacturer has received a report of a failed canopy jettison test, during a regular maintenance check. The... jettison test, during a regular maintenance check. The investigation revealed that a cable shroud of the... Information Grob Aircraft AG has issued Service Bulletin No. MSB1078-164, dated July 21, 2009. The actions...

  18. 75 FR 59077 - Airworthiness Directives; GROB-WERKE (Type Certificate Previously Held by BURKHART GROB Luft- und...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-27

    ... product. The MCAI describes the unsafe condition as: The manufacturer has received a report of a failed... manufacturer has received a report of a failed canopy jettison test, during a regular maintenance check. The... take about 3 work-hours and require parts costing $68, for a cost of $323 per product. We have no way...

  19. 75 FR 12466 - Airworthiness Directives; GROB-WERKE (Type Certificate Previously Held by BURKHART GROB Luft- und...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-16

    ... failed canopy jettison test, during a regular maintenance check. The investigation revealed that a cable... received by the closing date and may amend this proposed AD because of those comments. We will post all... provide. We will also post a report summarizing each substantive verbal contact we receive about this...

  20. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; for this purpose, a detection method is presented. The implemented method uses Haar cascades to detect faces, and integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared regarding their robustness. Moreover, the performance of the method on tampered images is presented.
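
    A hedged sketch of spread-spectrum embedding in the DCT domain, in the spirit of the approach described; the coefficient selection, chip sequence, and embedding strength alpha are assumptions, not the paper's exact scheme.

      import numpy as np
      from scipy.fft import dctn, idctn

      def embed_spread_spectrum(block, payload_bits, alpha=2.0, seed=7):
          """Add a pseudo-random +/-1 chip sequence, modulated by the payload bits, to higher-frequency DCT coefficients."""
          rng = np.random.default_rng(seed)
          coeffs = dctn(block.astype(float), norm="ortho")
          rows, cols = np.triu_indices(block.shape[0], k=2)        # crude mid/high-frequency index set (assumed)
          chips = rng.choice([-1.0, 1.0], size=rows.size)
          bits = np.resize(np.where(np.asarray(payload_bits) > 0, 1.0, -1.0), rows.size)
          coeffs[rows, cols] += alpha * chips * bits
          return idctn(coeffs, norm="ortho")

      block = np.random.default_rng(0).integers(0, 256, (8, 8))
      marked = embed_spread_spectrum(block, payload_bits=[1, 0, 1, 1])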

  1. Drowsy driver mobile application: Development of a novel scleral-area detection method.

    PubMed

    Mohammad, Faisal; Mahadas, Kausalendra; Hung, George K

    2017-10-01

    A reliable and practical app for mobile devices was developed to detect driver drowsiness. It consisted of two main components: a Haar cascade classifier, provided by a computer vision framework called OpenCV, for face/eye detection; and a dedicated JAVA software code for image processing that was applied over a masked region circumscribing the eye. A binary threshold was performed over the masked region to provide a quantitative measure of the number of white pixels in the sclera, which represented the state of eye opening. A continuously low white-pixel count would indicate drowsiness, thereby triggering an alarm to alert the driver. This system was successfully implemented on: (1) a static face image, (2) two subjects under laboratory conditions, and (3) a subject in a vehicle environment.
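
    A minimal sketch of the white-pixel (sclera) measure and the alarm rule, assuming an already-cropped eye region; the threshold values and window length are illustrative only.

      import numpy as np

      def eye_openness(eye_region, white_threshold=200):
          """Fraction of pixels in the masked eye region brighter than the threshold (proxy for exposed sclera)."""
          return (eye_region >= white_threshold).mean()

      def is_drowsy(openness_history, min_open=0.05, window=30):
          """Raise an alarm if the white-pixel fraction stays low for `window` consecutive frames."""
          recent = openness_history[-window:]
          return len(recent) == window and all(v < min_open for v in recent)

      frame_eye = np.random.default_rng(3).integers(0, 256, (40, 80))   # stand-in for a cropped eye patch
      openness = eye_openness(frame_eye)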

  2. Adaboost multi-view face detection based on YCgCr skin color model

    NASA Astrophysics Data System (ADS)

    Lan, Qi; Xu, Zhiyong

    2016-09-01

    The traditional Adaboost face detection algorithm uses Haar-like features to train face classifiers, whose detection error rate is low in face regions. Under complex backgrounds, however, the classifiers easily produce false detections in background regions whose gray-level distribution is similar to that of faces, so the error rate of the traditional Adaboost algorithm is high. As one of the most important features of a face, skin has good clustering properties in the YCgCr color space, so non-face areas can be quickly excluded with a skin color model. Therefore, combining the advantages of the Adaboost algorithm and skin color detection, this paper proposes an Adaboost face detection method based on the YCgCr skin color model. Experiments show that, compared with the traditional algorithm, the proposed method improves detection accuracy significantly and reduces false detections.

  3. Research on fusion algorithm of polarization image in tetrolet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes, i.e., shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. Firstly, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and directions using the tetrolet transform. For the low-frequency coefficients, average fusion is used. For the directional high-frequency coefficients, the better coefficients are selected for fusion by a region spectrum entropy algorithm, according to the differences in edge distribution among the high-frequency sub-band images. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method detects image features more effectively and that the fused image has a better subjective visual effect.

  4. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression

    PubMed Central

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-01-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143

  5. Development of an integrated sensor module for a non-invasive respiratory monitoring system

    NASA Astrophysics Data System (ADS)

    Kang, Seok-Won; Chang, Keun-Shik

    2013-09-01

    A respiratory monitoring system has been developed for analyzing the carbon dioxide (CO2) and oxygen (O2) concentrations in expired air using gas sensors. The data can be used to estimate medical conditions, including the diffusion capability of the lung membrane, oxygen uptake, and carbon dioxide output. For this purpose, a 3-way valve driven by a servomotor was developed, which operates synchronously with human respiratory signals. In particular, the breath analysis system includes an integrated sensor module for valve control, data acquisition through the O2 and CO2 sensors, and respiratory rate monitoring, as well as software dedicated to the analysis of respiratory gases. In addition, an approximation technique for the experimental data based on Haar wavelet decomposition is explored to remove noise and to reduce the data file size for long-term monitoring.

  6. From Psychical treatment to psychoanalysis: considerations on the misdating of an early Freud text and on a hitherto overlooked addition to it (here reproduced).

    PubMed

    Fichtner, Gerhard

    2008-08-01

    Freud's early paper Psychical (or mental) treatment, first published in a family reference book for educated lay persons, was reproduced in the Gesammelte Werke with a stated publication date of 1905. This date was subsequently called into question owing to certain parts of the subject-matter (the use of hypnosis and suggestion in 'mental treatment'), and the contribution was erroneously assigned, for instance by James Strachey, to the year 1890. This error is corrected in the present paper. Furthermore, the existence of a second edition of this reference book, which contains an addition to Freud's text and appeared in 1918-19, has previously gone unnoticed. The first edition had been published in 1905-6. Freud's contribution must, however, have been written at an appreciably earlier date. The probable time of its genesis is discussed. Freud's new text is reproduced (in English translation) for the first time in an appendix to this paper.

  7. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
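
    The object of study can be sampled numerically as follows, assuming SciPy's Haar-distributed unitary sampler; the dimensions N and n below are arbitrary.

      import numpy as np
      from scipy.stats import unitary_group

      def subblock_eigenvalues(N, n, seed=0):
          """Eigenvalues of the top N x N sub-block of a Haar-distributed matrix from U(N + n)."""
          U = unitary_group.rvs(N + n, random_state=seed)
          return np.linalg.eigvals(U[:N, :N])

      eigs = subblock_eigenvalues(N=20, n=30)
      assert np.all(np.abs(eigs) < 1.0)   # the sub-block eigenvalues lie inside the unit disk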

  8. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data with a Haar wavelet decomposition. The scheme is based on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  9. Shareability of correlations in multiqubit states: Optimization of nonlocal monogamy inequalities

    NASA Astrophysics Data System (ADS)

    Batle, J.; Naseri, M.; Ghoranneviss, M.; Farouk, A.; Alkhambashi, M.; Elhoseny, M.

    2017-03-01

    It is a well-known fact that both quantum entanglement and nonlocality (implied by the violation of Bell inequalities) constitute quantum correlations that cannot be arbitrarily shared among subsystems. They are both monogamous, albeit in a different fashion. In the present contribution we focus on nonlocality monogamy relations such as the Toner-Verstraete, the Seevinck, and a derived monogamy inequality for three parties and compare them with multipartite nonlocality measures for the whole set of pure states distributed according to the Haar measure. In this numerical endeavor, we also see that, although monogamy relations for nonlocality cannot exist for more than three parties, in practice the exploration of the whole set of states for different numbers of qubits will return effective bounds on the maximum value of all bipartite Bell violations among subsystems. Hence, we shed light on the effective nonlocality monogamy bounds in the multiqubit case.

  10. Chaos and complexity by design

    DOE PAGES

    Roberts, Daniel A.; Yoshida, Beni

    2017-04-20

    We study the relationship between quantum chaos and pseudorandomness by developing probes of unitary design. A natural probe of randomness is the "frame potential," which is minimized by unitary k-designs and measures the 2-norm distance between the Haar random unitary ensemble and another ensemble. A natural probe of quantum chaos is out-of-time-order (OTO) four-point correlation functions. We also show that the norm squared of a generalization of out-of-time-order 2k-point correlators is proportional to the kth frame potential, providing a quantitative connection between chaos and pseudorandomness. In addition, we prove that these 2k-point correlators for Pauli operators completely determine the k-fold channel of an ensemble of unitary operators. Finally, we use a counting argument to obtain a lower bound on the quantum circuit complexity in terms of the frame potential. This provides a direct link between chaos, complexity, and randomness.
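
    A small Monte Carlo sketch of the frame potential described above, estimated from sampled pairs of unitaries; the dimension and sample count are assumptions, and for the Haar ensemble the first frame potential should come out close to 1.

      import numpy as np
      from scipy.stats import unitary_group

      def frame_potential(unitaries, k=1):
          """Monte Carlo estimate of F_k = E |tr(U^dagger V)|^(2k) over independent pairs from the ensemble."""
          vals = []
          for i in range(0, len(unitaries) - 1, 2):
              U, V = unitaries[i], unitaries[i + 1]
              vals.append(np.abs(np.trace(U.conj().T @ V)) ** (2 * k))
          return float(np.mean(vals))

      d, samples = 8, 200
      haar = [unitary_group.rvs(d) for _ in range(samples)]
      print(frame_potential(haar, k=1))   # should be close to k! = 1 for a Haar ensemble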

  11. MATLAB for laser speckle contrast analysis (LASCA): a practice-based approach

    NASA Astrophysics Data System (ADS)

    Postnikov, Eugene B.; Tsoy, Maria O.; Postnov, Dmitry E.

    2018-04-01

    Laser Speckle Contrast Analysis (LASCA) is one of the most powerful modern methods for revealing blood dynamics. The experimental design and theory for this method are well established, and the computational recipe is often regarded as trivial. However, the achieved performance and spatial resolution may differ considerably between implementations. We provide a mini-review of known approaches to spatial laser speckle contrast processing and their realization in MATLAB code, with an explicit correspondence to the mathematical representation, and a discussion of available implementations. We also present an algorithm based on the 2D Haar wavelet transform, likewise supplied with program code. This new method makes it possible to introduce horizontal, vertical, and diagonal speckle contrasts, and it may be used for processing highly anisotropic images of vascular trees. We provide a comparative analysis of the accuracy of vascular pattern detection and of the processing times, with special attention to the details of the MATLAB procedures used.
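
    A hedged sketch (in Python rather than the paper's MATLAB) of directional contrasts from a single-level 2D Haar transform; the specific contrast definition used here, std/mean of the detail magnitudes per orientation, is an illustration and not necessarily the paper's formula.

      import numpy as np
      import pywt

      def directional_speckle_contrast(raw_speckle):
          """Per-orientation contrast from the horizontal, vertical, and diagonal 2D Haar detail bands."""
          cA, (cH, cV, cD) = pywt.dwt2(raw_speckle.astype(float), "haar")
          def contrast(band):
              mag = np.abs(band)
              return mag.std() / (mag.mean() + 1e-12)
          return {"horizontal": contrast(cH), "vertical": contrast(cV), "diagonal": contrast(cD)}

      speckle = np.random.default_rng(5).random((256, 256))   # stand-in for a raw speckle image
      print(directional_speckle_contrast(speckle))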

  12. Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles

    NASA Astrophysics Data System (ADS)

    Hannel, Mark D.; Abdulali, Aidan; O'Brien, Michael; Grier, David G.

    2018-06-01

    Holograms of colloidal particles can be analyzed with the Lorenz-Mie theory of light scattering to measure individual particles' three-dimensional positions with nanometer precision while simultaneously estimating their sizes and refractive indexes. Extracting this wealth of information begins by detecting and localizing features of interest within individual holograms. Conventionally approached with heuristic algorithms, this image analysis problem can be solved faster and more generally with machine-learning techniques. We demonstrate that two popular machine-learning algorithms, cascade classifiers and deep convolutional neural networks (CNN), can solve the feature-localization problem orders of magnitude faster than current state-of-the-art techniques. Our CNN implementation localizes holographic features precisely enough to bootstrap more detailed analyses based on the Lorenz-Mie theory of light scattering. The wavelet-based Haar cascade proves to be less precise, but is so computationally efficient that it creates new opportunities for applications that emphasize speed and low cost. We demonstrate its use as a real-time targeting system for holographic optical trapping.

  13. Noninformative prior in the quantum statistical model of pure states

    NASA Astrophysics Data System (ADS)

    Tanaka, Fuyuhiko

    2012-06-01

    In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any choice. We adopt a game-theoretic approach that is applicable to classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that there exist noninformative priors for a general class of a pure-states model. Theoretically, it gives one of the ways that we represent ignorance on the given quantum system with partial information. Practically, our method proposes a default distribution on the model in order to use the Bayesian technique in the quantum-state tomography with a small sample.

  14. Stego on FPGA: An IWT Approach

    PubMed Central

    Ramalingam, Balakrishnan

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar IWT was used to separate 8 × 8 pixel blocks into the subbands LL, LH, HL, and HH, and the encrypted secret data is hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. Either the Moore or the Hilbert SFC was chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lowest mean square error (MSE) and the highest peak signal-to-noise ratio (PSNR). The scan pattern chosen for each block is recorded, and this record serves as the secret key. Our system took 1.6 µs for embedding the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794
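
    The subband separation step can be sketched with a reversible integer Haar lifting transform as below; the subband naming convention and data types are assumptions, and the SFC scanning and embedding are not shown.

      import numpy as np

      def haar_iwt_rows(m):
          """One integer Haar lifting step along the second axis: reversible differences (d) and averages (s)."""
          a, b = m[:, 0::2].astype(np.int64), m[:, 1::2].astype(np.int64)
          d = a - b
          s = b + (d >> 1)          # equals floor((a + b) / 2); together with d this is invertible
          return s, d

      def haar_iwt_block(block):
          """Single-level 2D integer Haar transform of an 8 x 8 block into four subbands."""
          s, d = haar_iwt_rows(block)                     # transform the rows
          LL, LH = (x.T for x in haar_iwt_rows(s.T))      # then the columns of the approximation half
          HL, HH = (x.T for x in haar_iwt_rows(d.T))      # and the columns of the detail half
          return LL, LH, HL, HH                           # subband names follow one common convention (assumed)

      block = np.random.default_rng(9).integers(0, 256, (8, 8))
      LL, LH, HL, HH = haar_iwt_block(block)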

  15. Traffic signs recognition for driving assistance

    NASA Astrophysics Data System (ADS)

    Sai Sangram Reddy, Yatham; Karthik, Devareddy; Rana, Nikunj; Jasmine Pemeena Priyadarsini, M.; Rajini, G. K.; Naseera, Shaik

    2017-11-01

    In the current circumstances with the innovative headway, we must be able to provide assistance to the driving in recognising the traffic signs on the roads. At present time, many reviews are being directed moving in the direction of the usage of a keen Traffic Systems. One field of this exploration is driving support systems, and many reviews are being directed to create frameworks which distinguish and perceive street signs in front of the vehicle, and afterward utilize the data to advise the driver or to even control the vehicle by implementing this system on self-driving vehicles. In this paper we propose a method to detect the traffic sign board in a frame using HAAR cascading and then identifying the sign on it. The output may be either given out in voice or can be displayed as per the driver’s convenience. Each of the Traffic Sign is recognised using a database of images of symbols used to train the KNN classifier using open CV libraries.

  16. A Novel Image Steganography Technique for Secured Online Transaction Using DWT and Visual Cryptography

    NASA Astrophysics Data System (ADS)

    Anitha Devi, M. D.; ShivaKumar, K. B.

    2017-08-01

    The online payment ecosystem is a prime target for cyber fraud, so end-to-end encryption is needed to maintain the integrity of secret information related to transactions carried out online. With access to payment-related sensitive information, which enables many money transactions every day, the payment infrastructure is a major target for hackers. The proposed system presents an approach for secure online fund-transfer transactions based on a unique combination of visual cryptography and a Haar-based discrete wavelet transform steganography technique. This combination of data hiding techniques reduces the amount of information shared between the consumer and the online merchant needed for a successful online transaction, while providing enhanced security for the customer's account details, thereby increasing customer confidence and preventing "identity theft" and "phishing". To evaluate the effectiveness of the proposed algorithm, root mean square error and peak signal-to-noise ratio are used as evaluation parameters.
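
    The two evaluation parameters can be computed as in the sketch below; the synthetic cover and stego images are placeholders for the actual carrier and embedded images.

      import numpy as np

      def rmse(cover, stego):
          return float(np.sqrt(np.mean((cover.astype(float) - stego.astype(float)) ** 2)))

      def psnr(cover, stego, peak=255.0):
          """Peak signal-to-noise ratio in dB; higher means the stego image is closer to the cover."""
          e = rmse(cover, stego)
          return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

      rng = np.random.default_rng(0)
      cover = rng.integers(0, 256, (128, 128))
      stego = np.clip(cover + rng.integers(-2, 3, cover.shape), 0, 255)
      print(rmse(cover, stego), psnr(cover, stego))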

  17. Automated corresponding point candidate selection for image registration using wavelet transformation neural network with rotation invariant inputs and context information about neighboring candidates

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei

    2003-03-01

    An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) use of the RIN-net for corresponding point candidate selection; 2) use of multiresolution analysis with the Haar wavelet transform to improve selection accuracy and noise tolerance; 3) use of context information about corresponding point candidates for screening the selected candidates. Here, 'RIN-net' means a back-propagation-trained feed-forward 3-layer artificial neural network that takes rotation invariants as input data. In our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than the conventional method.

  18. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    PubMed

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., log-sigmoid and Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in close agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is conducted to investigate the stability and reliability of the presented scheme.

  19. A hierarchical preconditioner for the electric field integral equation on unstructured meshes based on primal and dual Haar bases

    NASA Astrophysics Data System (ADS)

    Adrian, S. B.; Andriulli, F. P.; Eibert, T. F.

    2017-02-01

    A new hierarchical basis preconditioner for the electric field integral equation (EFIE) operator is introduced. In contrast to existing hierarchical basis preconditioners, it works on arbitrary meshes and preconditions both the vector and the scalar potential within the EFIE operator. This is obtained by taking into account that the vector and the scalar potential discretized with loop-star basis functions are related to the hypersingular and the single layer operator (i.e., the well known integral operators from acoustics). For the single layer operator discretized with piecewise constant functions, a hierarchical preconditioner can easily be constructed. Thus the strategy we propose in this work for preconditioning the EFIE is the transformation of the scalar and the vector potential into operators equivalent to the single layer operator and to its inverse. More specifically, when the scalar potential is discretized with star functions as source and testing functions, the resulting matrix is a single layer operator discretized with piecewise constant functions and multiplied left and right with two additional graph Laplacian matrices. By inverting these graph Laplacian matrices, the discretized single layer operator is obtained, which can be preconditioned with the hierarchical basis. Dually, when the vector potential is discretized with loop functions, the resulting matrix can be interpreted as a hypersingular operator discretized with piecewise linear functions. By leveraging on a scalar Calderón identity, we can interpret this operator as spectrally equivalent to the inverse single layer operator. Then we use a linear-in-complexity, closed-form inverse of the dual hierarchical basis to precondition the hypersingular operator. The numerical results show the effectiveness of the proposed preconditioner and the practical impact of theoretical developments in real case scenarios.

  20. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.

  1. New machine-learning algorithms for prediction of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Mandal, Indrajit; Sairam, N.

    2014-03-01

    This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), to prevent delay and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy, and other measurable parameters. The robust methods applied to PD diagnosis include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as the classifier and Haar wavelets as the projection filter, is used for relevant feature selection and ranking. The highest accuracy, obtained by linear logistic regression and sparse multinomial logistic regression, is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All experiments are conducted at 95% and 99% confidence levels, and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system and experimentally shows the best results with supportive statistical inference.

  2. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (the discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract the emotional content of audio signals, giving a high accuracy rate in the classification of emotional states without the need for other classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger, and sadness, plus neutrality, for a total of seven states to identify.

  3. Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Yang, Xu

    2017-12-01

    Registration of simultaneous polarization images is the prerequisite for subsequent image fusion operations. However, for all-weather operation the exposure time of the polarization camera must be kept fixed, and polarization images acquired under low illumination are sometimes so dark that the SURF algorithm cannot extract feature points and the registration cannot be completed. This paper therefore proposes an improved SURF algorithm. Firstly, a luminance operator is used to raise the overall brightness of the low-illumination image; the integral image is then created, the Hessian matrix is used to extract the points of interest and obtain the main direction of the feature points, and the Haar wavelet responses in the X and Y directions are calculated to obtain the SURF descriptor information. The RANSAC function is then used for precise matching; it eliminates wrong matches and improves the accuracy rate. Finally, the brightness of the polarization image is restored after registration, so the polarization content itself is not affected. Results show that the improved SURF algorithm works well under low illumination conditions.

  4. Iris double recognition based on modified evolutionary neural network

    NASA Astrophysics Data System (ADS)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    Aiming at multi-category iris recognition under illumination and noise interference, this paper proposes an iris double recognition method based on a modified evolutionary neural network. Histogram equalization and a Laplacian of Gaussian operator are used to preprocess the iris to suppress illumination and noise interference, and a Haar wavelet converts the iris features into a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the iris type. If the iris cannot be identified as a distinct type, a secondary recognition is needed. The connection weights of the back-propagation (BP) neural network are trained adaptively using the modified evolutionary neural network, which combines particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris databases under different conditions show that, under illumination and noise interference, the correct recognition rate of this algorithm is higher, the ROC curve is closer to the coordinate axes, the training and recognition time is shorter, and the stability and robustness are better.
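
    A minimal sketch of the Hamming-distance matching step on binary iris codes; the code length, the number of flipped bits, and the classification threshold are assumptions.

      import numpy as np

      def hamming_distance(code_a, code_b):
          """Fraction of disagreeing bits between two binary iris codes."""
          return float(np.count_nonzero(code_a != code_b)) / code_a.size

      rng = np.random.default_rng(0)
      template = rng.integers(0, 2, 2048)
      test = template.copy()
      test[rng.choice(2048, 100, replace=False)] ^= 1    # flip 100 bits to mimic acquisition noise

      THRESHOLD = 0.32                                   # assumed classification threshold
      same_class = hamming_distance(test, template) <= THRESHOLD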

  5. Online anomaly detection in wireless body area networks for reliable healthcare monitoring.

    PubMed

    Salem, Osman; Liu, Yaning; Mehaoua, Ahmed; Boutaba, Raouf

    2014-09-01

    In this paper, we propose a lightweight approach for the online detection of faulty measurements by analyzing the data collected from medical wireless body area networks. The proposed framework performs sequential data analysis using a smart phone as a base station and takes into account the constrained resources of the smart phone, such as processing power and storage capacity. The main objective is to raise alarms only when patients enter an emergency situation, and to discard false alarms triggered by faulty measurements or ill-behaved sensors. The proposed approach is based on Haar wavelet decomposition, non-seasonal Holt-Winters forecasting, and the Hampel filter for spatial and temporal analysis. Our objective is to reduce false alarms resulting from unreliable measurements and to reduce unnecessary healthcare intervention. We apply the proposed approach to a real physiological dataset. Our experimental results prove the effectiveness of our approach in achieving good detection accuracy with a low false alarm rate. The simplicity and processing speed of the proposed framework make it useful and efficient for real-time diagnosis.
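
    A hedged sketch of a Hampel-style outlier flagging step of the kind used to discard faulty measurements; the window length, threshold, and test series are assumptions.

      import numpy as np

      def hampel_flags(x, window=5, n_sigmas=3.0):
          """Flag samples that deviate from the local median by more than n_sigmas robust std (MAD-based)."""
          x = np.asarray(x, dtype=float)
          flags = np.zeros(x.size, dtype=bool)
          for i in range(x.size):
              lo, hi = max(0, i - window), min(x.size, i + window + 1)
              med = np.median(x[lo:hi])
              mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))
              flags[i] = mad > 0 and abs(x[i] - med) > n_sigmas * mad
          return flags

      hr = np.r_[np.full(50, 72.0), 180.0, np.full(50, 73.0)]   # heart-rate series with one faulty reading
      print(np.where(hampel_flags(hr))[0])                      # index of the flagged measurement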

  6. Directional filtering for block recovery using wavelet features

    NASA Astrophysics Data System (ADS)

    Hyun, Seung H.; Eom, Il K.; Kim, Yoo S.

    2005-07-01

    When images compressed with block-based compression techniques are transmitted over a noisy channel, unexpected block losses occur. Conventional methods that do not consider edge directions can cause blocky blurring artifacts. In this paper, we present a post-processing block recovery scheme using Haar wavelet features. The adaptive selection of neighboring blocks is performed based on the energy of wavelet subbands (EWS) and the difference between DC values (DDC). The lost blocks are recovered by linear interpolation in the spatial domain using the selected blocks. Using only EWS performs well for horizontal and vertical edges, but not as well for diagonal edges; conversely, using only DDC performs well for diagonal edges, except for line- or roof-type edge profiles. Therefore, we combine EWS and DDC for better results. The proposed directional recovery method is effective for strong edges because it exploits the neighboring blocks adaptively according to the edge and directional information in the image. The proposed method outperforms previous methods that used only fixed blocks.

  7. Generalized Entanglement Entropies of Quantum Designs.

    PubMed

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-30

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.

  8. Multi-scale streamflow variability responses to precipitation over the headwater catchments in southern China

    NASA Astrophysics Data System (ADS)

    Niu, Jun; Chen, Ji; Wang, Keyi; Sivakumar, Bellie

    2017-08-01

    This paper examines the multi-scale streamflow variability responses to precipitation over 16 headwater catchments in the Pearl River basin, South China. The long-term daily streamflow data (1952-2000), obtained using a macro-scale hydrological model, the Variable Infiltration Capacity (VIC) model, and a routing scheme, are studied. Temporal features of streamflow variability at 10 different timescales, ranging from 6 days to 8.4 years, are revealed with the Haar wavelet transform. Principal component analysis (PCA) is performed to categorize the headwater catchments by the coherent modes of their multi-scale wavelet spectra. The results indicate that three distinct modes, with different variability distributions at small timescales and seasonal scales, can explain 95% of the streamflow variability. A large majority of the catchments (12 out of 16) exhibit consistent mode features of multi-scale variability throughout the three sub-periods (1952-1968, 1969-1984, and 1985-2000). The multi-scale streamflow variability responses to precipitation are found to be associated with the regional flood and drought tendency over the headwater catchments in southern China.

  9. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    PubMed Central

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701

  10. Generalized Entanglement Entropies of Quantum Designs

    NASA Astrophysics Data System (ADS)

    Liu, Zi-Wen; Lloyd, Seth; Zhu, Elton Yechao; Zhu, Huangjun

    2018-03-01

    The entanglement properties of random quantum states or dynamics are important to the study of a broad spectrum of disciplines of physics, ranging from quantum information to high energy and many-body physics. This Letter investigates the interplay between the degrees of entanglement and randomness in pure states and unitary channels. We reveal strong connections between designs (distributions of states or unitaries that match certain moments of the uniform Haar measure) and generalized entropies (entropic functions that depend on certain powers of the density operator), by showing that Rényi entanglement entropies averaged over designs of the same order are almost maximal. This strengthens the celebrated Page's theorem. Moreover, we find that designs of an order that is logarithmic in the dimension maximize all Rényi entanglement entropies and so are completely random in terms of the entanglement spectrum. Our results relate the behaviors of Rényi entanglement entropies to the complexity of scrambling and quantum chaos in terms of the degree of randomness, and suggest a generalization of the fast scrambling conjecture.

  11. Wavelet analysis of poorly-focused ultrasonic signal of pressure tube inspection in nuclear industry

    NASA Astrophysics Data System (ADS)

    Zhao, Huan; Gachagan, Anthony; Dobie, Gordon; Lardner, Timothy

    2018-04-01

    Pressure tube fabrication and installation challenges, combined with natural sagging over time, can produce probe alignment issues for pressure tube inspection of the primary circuit of CANDU reactors. The ability to extract accurate defect depth information from poorly focused ultrasonic signals would reduce additional inspection procedures, leading to significant time and cost savings. Currently, the defect depth measurement protocol is simply to calculate the time difference between the peaks of the echo signals from the tube surface and from the defect, using a single-element probe focused at the back-wall depth. When alignment issues are present, incorrect focusing results in interference within the returning echo signal. This paper proposes a novel wavelet analysis method that employs the Haar wavelet to decompose the original poorly focused A-scan signal and reconstruct detailed information based on a selected high-frequency component range within the bandwidth of the transducer. Compared to the original signal, the wavelet analysis method provides additional characteristic defect information and an improved estimate of defect depth, with errors of less than 5%.

  12. Arzt und Hobby-Astronom in stürmischen Zeiten Der Büchernachlass des Doktor Johannes Häringshauser, Viertelsmedicus in Mistelbach (1630-1641) in der Melker Stiftsbibliothek.

    NASA Astrophysics Data System (ADS)

    Davison, Giles; Glaßner, Gottfried

    2009-06-01

    While searching for astronomical literature, Giles Davison came across the name "Doctor Johannes Häringshauser" in the Melker Stiftsbibliothek as the owner of rare and interesting astronomical works by, among others, Johannes Regiomontan, Georg von Peuerbach, Michael Mästlin, Johannes Kepler, and Daniel Sennert. Further investigations carried out in 2007-2009 showed that this was the father of the Melk conventual and librarian Sigismund Häringshauser (1631-1698); the father worked as a district physician in Mistelbach, Lower Austria, from 1630 to 1641. He was born in 1603 as the son of the apothecary Johannes Häringshauser, who came from Magdeburg, and died in 1642 in Mistelbach. Johannes Häringshauser Sen. held a series of important offices in the Viennese city government from 1613 to 1640 and died in 1647. The period of study of Dr. Johannes Häringshauser Jun. in Padua (1624-1626) probably awakened his interest in astronomy, which is reflected in his private library, which later passed into the holdings of the Melker Stiftsbibliothek. Most of the 10 titles attributable to astronomy and astrology were acquired by him in 1636 and 1637.

  13. Hazard characterization and identification of a former ammunition site using microarrays, bioassays, and chemical analysis.

    PubMed

    Eisentraeger, Adolf; Reifferscheid, Georg; Dardenne, Freddy; Blust, Ronny; Schofer, Andrea

    2007-04-01

    More than 100,000 tons of 2,4,6-trinitrotoluene were produced at the former ammunition site Werk Tanne in Clausthal-Zellerfeld, Germany. The production of explosives and the subsequent destruction of the site by the Allies in approximately 1944 caused severe pollution in this area. Four soil samples and three water samples were taken from this site and characterized by applying chemical-analytical methods and several bioassays. Ecotoxicological test systems, such as the algal growth inhibition assay with Desmodesmus subspicatus, and genotoxicity tests, such as the umu and NM2009 tests, were performed. Also applied were the Ames test, according to International Organization for Standardization 16240, and an Ames fluctuation test. The toxic mode of action was examined using bacterial gene profiling assays with a battery of Escherichia coli strains and with the human liver cell line hepG2 using the PIQOR Toxicology cDNA microarray. Additionally, the molecular mechanism of 2,4,6-trinitrotoluene in hepG2 cells was analyzed. The present assessment indicates a danger of pollutant leaching along the soil-groundwater path. A possible impact on human health is discussed, because the groundwater in this area serves as drinking water.

  14. Reverse osmosis followed by activated carbon filtration for efficient removal of organic micropollutants from river bank filtrate.

    PubMed

    Kegel, F Schoonenberg; Rietman, B M; Verliefde, A R D

    2010-01-01

    Drinking water utilities in Europe are faced with a growing presence of organic micropollutants in their water sources. The aim of this research was to assess the robustness of a drinking water treatment plant equipped with reverse osmosis (RO) and subsequent activated carbon filtration for the removal of these pollutants. The total removal efficiency of 47 organic micropollutants was investigated. Results indicated that the removal of most organic micropollutants was high for all membranes tested. Some micropollutants were less efficiently removed (e.g. the small and polar NDMA and glyphosate, and the more hydrophobic ethylbenzene and naphthalene). Very high removal efficiencies for almost all organic micropollutants were observed for the subsequent activated carbon filter, fed with the permeate stream of the RO element, except for the very small and polar NDMA and 1,4-dioxane. RO and subsequent activated carbon filtration are complementary, and their combined application results in the removal of a large part of these emerging organic micropollutants. Based on these experiments it can be concluded that the robustness of the proposed treatment scheme for the drinking water treatment plant Engelse Werk is sufficiently guaranteed.

  15. Vertical Drop Testing and Analysis of the Wasp Helicopter Skid Gear

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fuchs, Yvonne T.

    2007-01-01

    This report describes an experimental program to assess the impact performance of a skid gear for use on the Wasp kit-built helicopter, which is marketed by HeloWerks, Inc. of Hampton, Virginia. In total, five vertical drop tests were performed. The test article consisted of a skid gear mounted beneath a steel plate. A seating platform was attached to the upper surface of the steel plate, and two 95th percentile Hybrid III male Anthropomorphic Test Devices (ATDs) were seated on the platform and secured using a four-point restraint system. The test article also included ballast weights to ensure the correct position of the Center-of-Gravity (CG). Twenty-six channels of acceleration data were collected per test at 50,000 samples per second. The five drop tests were conducted on two different gear configurations. The details of these test programs are presented, as well as an occupant injury assessment. Finally, a finite element model of the skid gear test article was developed for execution in LS-DYNA, an explicit nonlinear transient dynamic code, for predicting the skid gear and occupant dynamic responses due to impact.

  16. Michael Gottlieb Hansch (1683 - 1749), Ulrich Junius (1670 - 1726) und der Versuch einer Edition der Werke und Briefe Johannes Keplers.

    NASA Astrophysics Data System (ADS)

    Döring, D.

    At the beginning of the 18th century U. Junius tried unsuccessfully to collect and publish the most important manuscripts of Johannes Kepler. M. G. Hansch pursued this plan until the end of his life. The result was only one volume with unpublished letters which appeared in 1718.

  17. 75 FR 23194 - Airworthiness Directives; GROB-WERKE GMBH & CO KG Models G102 ASTIR CS and G102 STANDARD ASTIR...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-03

    ... & CO KG Models G102 ASTIR CS and G102 STANDARD ASTIR III Gliders AGENCY: Federal Aviation.... Affected ADs (b) None. Applicability (c) This AD applies to Models G102 ASTIR CS and G102 STANDARD ASTIR..., Standards Office, FAA, has the authority to approve AMOCs for this AD, if requested using the procedures...

  18. 75 FR 47182 - Airworthiness Directives; GROB-WERKE GMBH & CO KG Models G102 ASTIR CS and G102 STANDARD ASTIR...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ... fuselage tail which could cause a displacement of the sailplane centre of gravity and consequently may lead... tanks could run down into the fuselage and fuselage tail which could cause a displacement of the... tanks could run down into the fuselage and fuselage tail which could cause a displacement of the...

  19. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended to color, including the effects of chromatic aberration. Comparisons are made between the execution times and visual difference maps for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.

  20. Detection of molecular particles in live cells via machine learning.

    PubMed

    Jiang, Shan; Zhou, Xiaobo; Kirchhausen, Tom; Wong, Stephen T C

    2007-08-01

    Clathrin-coated pits play an important role in removing proteins and lipids from the plasma membrane and transporting them to the endosomal compartment. It is, however, still unclear whether there exist "hot spots" for the formation of clathrin-coated pits or whether the pits and arrays are formed randomly on the plasma membrane. To answer this question, many hundreds of individual pits first need to be detected accurately and separated in live-cell microscope movies to capture and monitor how pits and vesicles are formed. Because of the noisy background and the low contrast of the live-cell movies, existing image analysis methods, such as single thresholding, edge detection, and morphological operations, cannot be used. Thus, this paper proposes a machine learning method, based on Haar features, to detect the particles' positions. Results show that this method can successfully detect most of the particles in the images. In order to obtain accurate boundaries for these particles, several post-processing methods are applied, and signal-to-noise ratio analysis is also performed to rule out weak spots. Copyright 2007 International Society for Analytical Cytology.

  1. Multi-vehicle detection with identity awareness using cascade Adaboost and Adaptive Kalman filter for driver assistant system.

    PubMed

    Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng

    2017-01-01

    Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using cascade Adaboost and an Adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which refines the vehicle hypotheses in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusts the measurement and process noise covariances through on-line stochastic modelling to compensate for changes in dynamics. The data association assigns detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, a temporal-context-based track management scheme was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that vehicle detection performance was greatly improved, with higher accuracy and robustness.
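
    As an illustration of the detect-then-track pipeline described above, the sketch below pairs an OpenCV Haar cascade with a constant-velocity Kalman filter per track. It is only a minimal sketch: OpenCV's stock frontal-face cascade stands in for the authors' trained vehicle classifier, plain nearest-neighbour matching stands in for GNN association, and the input file name, gating threshold and noise covariances are illustrative assumptions.

      # Minimal sketch: Haar-cascade detections feeding per-track constant-velocity
      # Kalman filters. The cascade XML, video name and all tuning values are
      # illustrative assumptions, not the authors' trained vehicle detector.
      import cv2
      import numpy as np

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # stand-in model

      def make_kf(cx, cy):
          """Constant-velocity Kalman filter over (cx, cy, vx, vy), measuring (cx, cy)."""
          kf = cv2.KalmanFilter(4, 2)
          kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                          [0, 1, 0, 1],
                                          [0, 0, 1, 0],
                                          [0, 0, 0, 1]], np.float32)
          kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                           [0, 1, 0, 0]], np.float32)
          kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
          kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
          kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
          return kf

      cap = cv2.VideoCapture("road.mp4")   # hypothetical input clip
      tracks, next_id = [], 0              # list of (kalman_filter, track_id)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
          centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]
          for kf, _tid in tracks:
              pred = kf.predict()          # predicted state before association
              if centers:
                  d = [np.hypot(cx - pred[0, 0], cy - pred[1, 0]) for cx, cy in centers]
                  j = int(np.argmin(d))
                  if d[j] < 50:            # assumed gating threshold in pixels
                      cx, cy = centers.pop(j)
                      kf.correct(np.array([[cx], [cy]], np.float32))
          for cx, cy in centers:           # unmatched detections start new tracks
              tracks.append((make_kf(cx, cy), next_id))
              next_id += 1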

  2. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like features combined with LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting the dictionary from the samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database and on the Japanese JAFFE and CMU CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which improves computing speed and achieves a satisfying recognition result.
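
    Since the record above leans on (stagewise) orthogonal matching pursuit, the sketch below shows the plain OMP baseline in NumPy: greedily pick the best-matching atom, refit on the accumulated support, repeat. The random dictionary and sparsity level are illustrative assumptions; this is the generic solver, not the MSRR pipeline with LC-K-SVD and the dynamic regularization factor.

      # Minimal orthogonal matching pursuit (OMP) sketch: the greedy baseline that
      # stOMP accelerates by selecting several atoms per iteration. The dictionary
      # and sparsity level are illustrative, not part of the MSRR method itself.
      import numpy as np

      def omp(D, y, k):
          """Recover a k-sparse code x with y ~= D @ x; columns of D are unit-norm atoms."""
          residual = y.copy()
          support = []
          x = np.zeros(D.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(D.T @ residual)))    # best-matching atom
              support.append(j)
              coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ coef           # refit on the support
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      D = rng.standard_normal((64, 256))
      D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
      x_true = np.zeros(256)
      x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
      y = D @ x_true
      x_hat = omp(D, y, k=5)
      print("reconstruction error:", np.linalg.norm(D @ x_hat - y))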

  3. From tiger to panda: animal head detection.

    PubMed

    Zhang, Weiwei; Sun, Jian; Tang, Xiaoou

    2011-06-01

    Robust object detection has many important applications in real-world online photo processing. For example, both Google image search and MSN live image search have integrated human face detectors to retrieve face or portrait photos. Inspired by the success of this face filtering approach, in this paper we focus on another popular online photo category--animals, one of the top five categories in the MSN live image search query log. As a first attempt, we focus on the problem of animal head detection for a set of relatively large land animals that are popular on the internet, such as cat, tiger, panda, fox, and cheetah. First, we propose a new set of gradient-oriented features, Haar of Oriented Gradients (HOOG), to effectively capture the shape and texture features of animal heads. Then, we propose two detection algorithms, namely Bruteforce detection and Deformable detection, to effectively exploit the shape and texture features simultaneously. Experimental results on 14,379 well-labeled animal images validate the superiority of the proposed approach. Additionally, we apply the animal head detector to improve image search results through text-based online photo search result filtering.

  4. Exploratory study and application of the angular wavelet analysis for assessing the spatial distribution of breakdown spots in Pt/HfO2/Pt structures

    NASA Astrophysics Data System (ADS)

    Muñoz-Gorriz, J.; Monaghan, S.; Cherkaoui, K.; Suñé, J.; Hurley, P. K.; Miranda, E.

    2017-12-01

    The angular wavelet analysis is applied for assessing the spatial distribution of breakdown spots in Pt/HfO2/Pt capacitors with areas ranging from 10⁴ to 10⁵ μm². The breakdown spot lateral sizes are in the range from 1 to 3 μm, and they appear distributed on the top metal electrode as a point pattern. The spots are generated by ramped and constant voltage stresses and are the consequence of microexplosions caused by the formation of shorts spanning the dielectric film. This kind of pattern was analyzed in the past using the conventional spatial analysis tools such as intensity plots, distance histograms, pair correlation function, and nearest neighbours. Here, we show that the wavelet analysis offers an alternative and complementary method for testing whether or not the failure site distribution departs from a complete spatial randomness process in the angular domain. The effect of using different wavelet functions, such as the Haar, Sine, French top hat, Mexican hat, and Morlet, as well as the roles played by the process intensity, the location of the voltage probe, and the aspect ratio of the device, are all discussed.

  5. A Modified Sparse Representation Method for Facial Expression Recognition

    PubMed Central

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we carry on research on a facial expression recognition method, which is based on modified sparse representation recognition (MSRR) method. On the first stage, we use Haar-like+LPP to extract feature and reduce dimension. On the second stage, we adopt LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting directly the dictionary from samples, and add block dictionary training into the training process. On the third stage, stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to iteration process to suppress noises and enhance accuracy. We verify the proposed method from the aspect of training samples, dimension, feature extraction and dimension reduction methods and noises in self-built database and Japan's JAFFE and CMU's CK database. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The result of simulation experiment has shown that the coefficient of MSRR method contains classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result. PMID:26880878

  6. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  7. Real-time people and vehicle detection from UAV imagery

    NASA Astrophysics Data System (ADS)

    Gaszczak, Anna; Breckon, Toby P.; Han, Jiwan

    2011-01-01

    A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally, we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.

  8. Analysis of dual-frequency MEMS antenna using H-MRTD method

    NASA Astrophysics Data System (ADS)

    Yu, Wenge; Zhong, Xianxin; Chen, Yu; Wu, Zhengzhong

    2004-10-01

    Micro/nano and Micro-Electro-Mechanical System (MEMS) technologies can be applied in the Radio Frequency (RF) field to manufacture miniature microstrip antennas. A novel MEMS dual-band patch antenna designed using slot-loaded and short-circuited size-reduction techniques is presented in this paper. By controlling the short-plane width, the two resonant frequencies, f10 and f30, can be significantly reduced, and the frequency ratio (f30/f10) is tunable in the range 1.7~2.3. The Haar-wavelet-based multiresolution time domain (H-MRTD) method, with a compactly supported scaling function applied to a full three-dimensional (3-D) wave on Yee's staggered cell, is used to model and analyze the antenna for the first time. For the practical model, a uniaxial perfectly matched layer (UPML) absorbing boundary condition was developed, and the mathematical formulae were extended to inhomogeneous media. Numerical simulation results are compared with those of the conventional 3-D finite-difference time-domain (FDTD) method and with measurements. It has been demonstrated that, with this technique, space discretization with only a few cells per wavelength gives accurate results, leading to a reduction of both memory requirement and computation time.

  9. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  10. Response of Autonomic Nervous System to Body Positions:

    NASA Astrophysics Data System (ADS)

    Xu, Aiguo; Gonnella, G.; Federici, A.; Stramaglia, S.; Simone, F.; Zenzola, A.; Santostasi, R.

    Two mathematical methods, the Fourier and wavelet transforms, were used to study the short-term cardiovascular control system. Six-minute time series derived from the electrocardiogram and arterial blood pressure were analyzed in the supine position (SUP), during the first (HD1) and second (HD2) parts of a 90° head-down tilt, and during recovery (REC). The wavelet transform was performed using the Haar function of period T = 2^j (j = 1, 2, ..., 6) to obtain wavelet coefficients. Power spectral components were analyzed within three bands, VLF (0.003-0.04), LF (0.04-0.15) and HF (0.15-0.4), with frequency in units of cycles/interval. The wavelet transform demonstrated a higher discrimination among all analyzed periods than the Fourier transform. For the Fourier analysis, the LF of the R-R intervals and the VLF of the systolic blood pressure show the more evident differences between body positions. For the wavelet analysis, the systolic blood pressures show much more evident differences than the R-R intervals. This study suggests a difference in the response of the vessels and the heart to different body positions. The partial dissociation between the VLF and LF results is a physiologically relevant finding of this work.
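
    For reference, a multilevel Haar decomposition of the kind used above can be written in a few lines of NumPy: each level halves the series and yields detail coefficients at scale T = 2^j. The sketch below is purely illustrative and runs on a synthetic R-R series, not the data of this study.

      # Minimal multilevel Haar decomposition: detail coefficients at scales
      # T = 2**j for j = 1..6. The R-R series is synthetic, for illustration only.
      import numpy as np

      def haar_details(x, levels=6):
          """Return {j: detail coefficients at scale 2**j} for a 1-D signal."""
          approx = np.asarray(x, dtype=float)
          details = {}
          for j in range(1, levels + 1):
              approx = approx[: len(approx) // 2 * 2]       # trim to even length
              pairs = approx.reshape(-1, 2)
              details[j] = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
              approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
          return details

      rr = 0.8 + 0.05 * np.random.default_rng(1).standard_normal(512)  # fake R-R intervals (s)
      for j, d in haar_details(rr).items():
          print(f"scale 2^{j}: {d.size} coefficients, mean power = {np.mean(d**2):.2e}")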

  11. Multi-vehicle detection with identity awareness using cascade Adaboost and Adaptive Kalman filter for driver assistant system

    PubMed Central

    Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng

    2017-01-01

    Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we presented an improved multi-vehicle detection and tracking method using cascade Adaboost and Adaptive Kalman filter(AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which could refine the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariance through on-line stochastic modelling to compensate the dynamics changes. The data association correctly assigned different detections with tracks using global nearest neighbour(GNN) algorithm while considering the local validation. During tracking, a temporal context based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing the sparse false alarms and compensating the temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved with higher accuracy and robustness. PMID:28296902

  12. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, each new state vector and input vector is described by an overall detail plus an approximation, which allows the WKF to simultaneously estimate and decompose the state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which each Kalman filter is replaced by a WKF. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each weighted by its possibility determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of the wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.

  13. Applying wavelet transforms to analyse aircraft-measured turbulence and turbulent fluxes in the atmospheric boundary layer over eastern Siberia

    NASA Astrophysics Data System (ADS)

    Strunin, M. A.; Hiyama, T.

    2004-11-01

    The wavelet spectral method was applied to aircraft-based measurements of atmospheric turbulence obtained during joint Russian-Japanese research on the atmospheric boundary layer near Yakutsk (eastern Siberia) in April-June 2000. Practical ways to apply Fourier and wavelet methods for aircraft-based turbulence data are described. Comparisons between Fourier and wavelet transform results are shown and they demonstrate, in conjunction with theoretical and experimental restrictions, that the Fourier transform method is not useful for studying non-homogeneous turbulence. The wavelet method is free from many disadvantages of Fourier analysis and can yield more informative results. Comparison of Fourier and Morlet wavelet spectra showed good agreement at high frequencies (small scales). The quality of the wavelet transform and corresponding software was estimated by comparing the original data with restored data constructed with an inverse wavelet transform. A Haar wavelet basis was inappropriate for the turbulence data; the mother wavelet function recommended in this study is the Morlet wavelet. Good agreement was also shown between variances and covariances estimated with different mathematical techniques, i.e. through non-orthogonal wavelet spectra and through eddy correlation methods.

  14. Lower bounds on the violation of the monogamy inequality for quantum correlation measures

    NASA Astrophysics Data System (ADS)

    Kumar, Asutosh; Dhar, Himadri Shekhar

    2016-06-01

    In multiparty quantum systems, the monogamy inequality proposes an upper bound on the distribution of bipartite quantum correlation between a single party and each of the remaining parties in the system, in terms of the amount of quantum correlation shared by that party with the rest of the system taken as a whole. However, it is well known that not all quantum correlation measures universally satisfy the monogamy inequality. In this work, we aim at determining the nontrivial value by which the monogamy inequality can be violated by a quantum correlation measure. Using an information-theoretic complementarity relation between the normalized purity and quantum correlation in any given multiparty state, we obtain a nontrivial lower bound on the negative monogamy score for the quantum correlation measure. In particular, for the three-qubit states the lower bound is equal to the negative von Neumann entropy of the single qubit reduced density matrix. We analytically examine the tightness of the derived lower bound for certain n -qubit quantum states. Further, we report numerical results of the same for monogamy violating correlation measures using Haar uniformly generated three-qubit states.
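
    The Haar-uniform sampling referred to above is easy to reproduce numerically: a normalized complex Gaussian vector is a Haar-random pure state. The sketch below samples three-qubit states and computes the von Neumann entropy of a single-qubit reduced density matrix, the quantity whose negative the abstract identifies as the three-qubit lower bound; it is an illustrative sampler, not the paper's analysis of monogamy scores.

      # Minimal sketch: Haar-uniform three-qubit pure states and the von Neumann
      # entropy of one qubit's reduced density matrix. Illustrative sampling only.
      import numpy as np

      rng = np.random.default_rng(42)

      def haar_random_state(dim):
          """Haar-uniform pure state: a normalised complex Gaussian vector."""
          v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
          return v / np.linalg.norm(v)

      def single_qubit_entropy(psi):
          """Von Neumann entropy (in bits) of qubit A for a 3-qubit state psi."""
          m = psi.reshape(2, 4)                             # split A | BC
          rho_a = np.einsum("ij,kj->ik", m, m.conj())       # partial trace over BC
          evals = np.linalg.eigvalsh(rho_a)
          evals = evals[evals > 1e-12]
          return float(-np.sum(evals * np.log2(evals)))

      entropies = [single_qubit_entropy(haar_random_state(8)) for _ in range(10000)]
      print("mean S(rho_A) over Haar-random 3-qubit states:", np.mean(entropies))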

  15. An improved real time image detection system for elephant intrusion along the forest border areas.

    PubMed

    Sugumar, S J; Jayaparvathy, R

    2014-01-01

    Human-elephant conflict is a major problem leading to crop damage, human death and injuries caused by elephants, and elephants being killed by humans. In this paper, we propose an automated unsupervised elephant image detection system (EIDS) as a solution to human-elephant conflict in the context of elephant conservation. The elephant's image is captured in the forest border areas and is sent to a base station via an RF network. The received image is decomposed using the Haar wavelet to obtain multilevel wavelet coefficients, with which we perform image feature extraction and similarity matching between the elephant query image and the database images using image vision algorithms. A GSM message is sent to the forest officials indicating that an elephant has been detected at the forest border and is approaching human habitat. We propose an optimized distance metric to improve the image retrieval time from the database and compare it with the popular Euclidean and Manhattan distance methods. The proposed optimized distance metric retrieves more images with a shorter retrieval time than the other distance metrics, which makes it more efficient and reliable.
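
    A toy version of the retrieval step is sketched below: level-1 2-D Haar approximation coefficients (2x2 block averages) serve as the image feature, and a synthetic database is ranked by Euclidean and Manhattan distances. The images and sizes are made up for illustration; the paper's optimized distance metric is not reproduced here.

      # Minimal retrieval sketch: level-1 2-D Haar approximation (2x2 block means)
      # as the feature, ranking a synthetic database by Euclidean vs Manhattan
      # distance. This does not reproduce the EIDS optimised metric.
      import numpy as np

      def haar2_approx(img):
          """Level-1 2-D Haar approximation (2x2 block averages), flattened."""
          img = np.asarray(img, dtype=float)
          h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
          return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).ravel()

      rng = np.random.default_rng(7)
      database = rng.integers(0, 256, size=(50, 64, 64)).astype(float)   # 50 fake images
      query = database[17] + rng.normal(0, 10, size=(64, 64))            # noisy copy of #17

      feats = np.array([haar2_approx(im) for im in database])
      q = haar2_approx(query)
      print("best match (Euclidean):", int(np.argmin(np.linalg.norm(feats - q, axis=1))))
      print("best match (Manhattan):", int(np.argmin(np.abs(feats - q).sum(axis=1))))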

  16. A predictive multi-linear regression model for organic micropollutants, based on a laboratory-scale column study simulating the river bank filtration process.

    PubMed

    Bertelkamp, C; Verliefde, A R D; Reynisson, J; Singhal, N; Cabo, A J; de Jonge, M; van der Hoek, J P

    2016-03-05

    This study investigated relationships between OMP biodegradation rates and the functional groups present in the chemical structure of a mixture of 31 OMPs. OMP biodegradation rates were determined from lab-scale columns filled with soil from RBF site Engelse Werk of the drinking water company Vitens in The Netherlands. A statistically significant relationship was found between OMP biodegradation rates and the functional groups of the molecular structures of OMPs in the mixture. The OMP biodegradation rate increased in the presence of carboxylic acids, hydroxyl groups, and carbonyl groups, but decreased in the presence of ethers, halogens, aliphatic ethers, methyl groups and ring structures in the chemical structure of the OMPs. The predictive model obtained from the lab-scale soil column experiment gave an accurate qualitative prediction of biodegradability for approximately 70% of the OMPs monitored in the field (80% excluding the glymes). The model was found to be less reliable for the more persistent OMPs (OMPs with predicted biodegradation rates lower than or around the standard error = 0.77 d⁻¹) and OMPs containing amide or amine groups. These OMPs should be carefully monitored in the field to determine their removal during RBF. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. How scaling fluctuation analyses can transform our view of the climate

    NASA Astrophysics Data System (ADS)

    Lovejoy, Shaun; Schertzer, Daniel

    2013-04-01

    There exists a bewildering diversity of proxy climate data including tree rings, ice cores, lake varves, boreholes, pollen, foraminifera, corals and speleothems. Their quantitative use raises numerous questions of interpretation and calibration. Even in classical cases - such as the isotope signal in ice cores - the usual assumption of linear dependence on ambient temperature is only a first approximation. In other cases - such as speleothems - the isotope signals arise from multiple causes (which are not always understood) and this hinders their widespread use. We argue that traditional interpretations and calibrations - based on essentially deterministic comparisons between instrumental data, model outputs and proxies (albeit with the help of uncertainty analyses) - have been both overly ambitious and have simultaneously underexploited the data. Overly ambitious, because comparisons typically involve series at different temporal resolutions and from different geographical locations - one does not expect agreement in a deterministic sense, and with respect to climate models one only expects statistical correspondences. Underexploited, because comparisons are done at unique temporal and/or spatial resolutions, whereas the fluctuations they describe provide information over wide ranges of scale. A convenient method of overcoming these difficulties is the use of fluctuation analysis systematically applied over the full range of available scales to determine the scaling properties. The new transformative element presented here is to define fluctuations ΔT in a series T(t) at scale Δt not by differences (ΔT(Δt) = T(t+Δt) - T(t)) but rather by the difference in the means over the first and second halves of the lag Δt. This seemingly minor change - technically from "poor man's" to "Haar" wavelets - turns out to make a huge difference since, for example, it is adequate for analysing temperatures from seconds to hundreds of millions of years yet
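
    The fluctuation definition described above is easy to state in code: at lag Δt, the Haar fluctuation is the mean over the second half of the interval minus the mean over the first half, whereas the "poor man's" fluctuation is the plain difference T(t+Δt) - T(t). The sketch below compares the two root-mean-square fluctuations on a synthetic random-walk series; the data and lags are illustrative only.

      # Haar fluctuation at lag dt: mean of the second half of the interval minus
      # the mean of the first half, versus the plain difference T(t+dt) - T(t).
      # The series below is a synthetic random walk, purely for illustration.
      import numpy as np

      def rms_haar_fluctuation(series, dt):
          """RMS Haar fluctuation at an even lag dt."""
          half = dt // 2
          f = []
          for t in range(0, len(series) - dt, half):
              first = np.mean(series[t:t + half])
              second = np.mean(series[t + half:t + dt])
              f.append(second - first)
          return float(np.sqrt(np.mean(np.square(f))))

      def rms_diff_fluctuation(series, dt):
          """RMS difference ('poor man's wavelet') fluctuation at lag dt."""
          d = series[dt:] - series[:-dt]
          return float(np.sqrt(np.mean(d ** 2)))

      temp = np.cumsum(np.random.default_rng(3).standard_normal(4096))   # synthetic series
      for dt in (4, 16, 64, 256):
          print(dt, rms_haar_fluctuation(temp, dt), rms_diff_fluctuation(temp, dt))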

  18. 76 FR 1342 - Airworthiness Directives; GROB-WERKE GMBH & CO KG Models G102 ASTIR CS, G102 CLUB ASTIR III, G102...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-10

    ... Stobbe, and the Soaring Society of America (SSA) cite lack of rationale to mandate the unconditional..., February 26, 1979); and (3) Will not have a significant economic impact, positive or negative, on a...

  19. Zur Rolle von Plansprachen im terminologiewissenschaftlichen Werk von Eugen Wuster (The Role of Planned Languages in Eugen Wuster's Work on Terminology Science).

    ERIC Educational Resources Information Center

    Blanke, Detlev

    1998-01-01

    Discusses the relationship between planned languages and specialized technical languages, with particular reference to Esperanto, and analyzes its significance for several aspects of Eugen Wuster's (the founder of terminology science) work. (Author/VWL)

  20. An on-board pedestrian detection and warning system with features of side pedestrian

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Wong, ChupChung; Chan, KwokPo; Xu, Jiayao; Wang, Xin'an

    2012-01-01

    Automotive Active Safety (AAS) is the main branch of intelligent automobile research, and pedestrian detection is the key problem of AAS because it is related to the casualties of most vehicle accidents. For on-board pedestrian detection algorithms, the main problem is to balance efficiency and accuracy so that the on-board system is usable in real scenes, so an on-board pedestrian detection and warning system whose algorithm considers the features of side pedestrians is proposed. The system includes two modules, a pedestrian detection module and a warning module. Haar features and a cascade of stage classifiers trained by Adaboost are applied first, and then HOG features and an SVM classifier are used to reject false positives. To make these time-consuming algorithms available for real-time use, a divide-window method together with an operator context scanning (OCS) method is applied to increase efficiency. To incorporate the velocity information of the vehicle, the distance of the detected pedestrian is also obtained, so the system can judge whether there is a potential danger for a pedestrian in front. On a new dataset captured in an urban environment with side pedestrians on zebra crossings, the embedded system and its algorithm achieve on-board real-time side pedestrian detection.

  1. Generalized exact holographic mapping with wavelets

    NASA Astrophysics Data System (ADS)

    Lee, Ching Hua

    2017-12-01

    The idea of renormalization and scale invariance is pervasive across disciplines. It has not only drawn numerous surprising connections between physical systems under the guise of holographic duality, but has also inspired the development of wavelet theory now widely used in signal processing. Synergizing on these two developments, we describe in this paper a generalized exact holographic mapping that maps a generic N -dimensional lattice system to a (N +1 )-dimensional holographic dual, with the emergent dimension representing scale. In previous works, this was achieved via the iterations of the simplest of all unitary mappings, the Haar mapping, which fails to preserve the form of most Hamiltonians. By taking advantage of the full generality of biorthogonal wavelets, our new generalized holographic mapping framework is able to preserve the form of a large class of lattice Hamiltonians. By explicitly separating features that are fundamentally associated with the physical system from those that are basis specific, we also obtain a clearer understanding of how the resultant bulk geometry arises. For instance, the number of nonvanishing moments of the high-pass wavelet filter is revealed to be proportional to the radius of the dual anti-de Sitter space geometry. We conclude by proposing modifications to the mapping for systems with generic Fermi pockets.

  2. Representation learning: a unified deep learning framework for automatic prostate MR segmentation.

    PubMed

    Liao, Shu; Gao, Yaozong; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Image representation plays an important role in medical image analysis. The success of different medical image analysis algorithms depends heavily on how we represent the input data, namely the features used to characterize the input image. In the literature, feature engineering remains an active research topic, and many novel hand-crafted features have been designed, such as Haar wavelets, histograms of oriented gradients, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning-based manner, namely representation learning, which can be adapted to different patient datasets at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high-level semantic anatomical information. The proposed method is evaluated on the application of automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvement can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.

  3. Repeatability and Reliability Characterization of Phonocardiograph Systems Using Wavelet and Backpropagation Neural Network

    NASA Astrophysics Data System (ADS)

    Sumarna; Astono, J.; Purwanto, A.; Agustika, D. K.

    2018-04-01

    A phonocardiograph (PCG) system consisting of an electronic stethoscope, a condenser microphone, a microphone preamplifier, and a battery has been developed. The PCG system is used to detect heart abnormalities. Although PCG is not widely used because many factors affect its performance, in this research we try to reduce the factors affecting its consistency. To find out whether the system is repeatable and reliable, it first has to be characterized. This research aims to determine whether the PCG system can provide the same results for repeated measurements of the same patient. Characterization of the system is done by analyzing whether the PCG system can recognize the S1 and S2 heart sounds of the same person. From the recordings, S1 and S2 were transformed using a level-1 Discrete Wavelet Transform with the Haar mother wavelet, and features were extracted using the data range of the approximation coefficients. The results were analyzed with a pattern recognition system based on a backpropagation neural network. Part of the data was used as training data and part as test data. From the results of the pattern recognition system, it can be concluded that the system accuracy reaches 87.5% for recognizing S1 but only 67% for S2.
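
    The feature-extraction step described above (level-1 Haar DWT, then the range of the approximation coefficients) fits in a few lines; the sketch below feeds such features to scikit-learn's MLPClassifier as a stand-in for the paper's backpropagation network. The heart-sound segments are synthetic, and all parameters are assumptions.

      # Minimal sketch of the described pipeline: level-1 Haar DWT, range of the
      # approximation coefficients as features, and a small backpropagation network.
      # Segments are synthetic stand-ins; MLPClassifier substitutes for the paper's net.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def haar_level1_approx(x):
          x = np.asarray(x, dtype=float)
          pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
          return (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)

      def feature(segment, n_bins=16):
          """Range (max - min) of the approximation coefficients over n_bins slices."""
          a = haar_level1_approx(segment)
          return np.array([s.max() - s.min() for s in np.array_split(a, n_bins)])

      rng = np.random.default_rng(0)
      def fake_segment(freq):
          """Synthetic S1-like (low-frequency) or S2-like (higher-frequency) burst."""
          t = np.linspace(0, 0.1, 400)
          return np.sin(2 * np.pi * freq * t) * np.hanning(400) + 0.05 * rng.standard_normal(400)

      X = np.array([feature(fake_segment(f)) for f in [50] * 60 + [120] * 60])
      y = np.array([0] * 60 + [1] * 60)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X[::2], y[::2])                                    # half for training ...
      print("held-out accuracy:", clf.score(X[1::2], y[1::2]))   # ... half for testing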

  4. Multimode waveguide speckle patterns for compressive sensing.

    PubMed

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.

  5. A multispectral automatic target recognition application for maritime surveillance, search, and rescue

    NASA Astrophysics Data System (ADS)

    Schoonmaker, Jon; Reed, Scott; Podobna, Yuliya; Vazquez, Jose; Boucher, Cynthia

    2010-04-01

    Due to increased security concerns, the commitment to monitor and maintain security in the maritime environment is increasingly a priority. A country's coast is the most vulnerable area for the incursion of illegal immigrants, terrorists and contraband. This work illustrates the ability of a low-cost, light-weight, multi-spectral, multi-channel imaging system to handle the environment and see under difficult marine conditions. The system and its implemented detecting and tracking technologies should be organic to the maritime homeland security community for search and rescue, fisheries, defense, and law enforcement. It is tailored for airborne and ship based platforms to detect, track and monitor suspected objects (such as semi-submerged targets like marine mammals, vessels in distress, and drug smugglers). In this system, automated detection and tracking technology is used to detect, classify and localize potential threats or objects of interest within the imagery provided by the multi-spectral system. These algorithms process the sensor data in real time, thereby providing immediate feedback when features of interest have been detected. A supervised detection system based on Haar features and Cascade Classifiers is presented and results are provided on real data. The system is shown to be extendable and reusable for a variety of different applications.

  6. Storage and recycling of water and carbon dioxide in the earth

    NASA Technical Reports Server (NTRS)

    Wood, Bernard J.

    1994-01-01

    The stabilities and properties of water- and carbon-bearing phases in the earth have been determined from phase equilibrium measurements, combined with new data on the equations of state of water, carbon dioxide, carbonates and hydrates. The data have then been used to predict the fate of calcite and hydrous phases in subducting oceanic lithosphere. From the compositions of MORB's one can estimate concentrations of water and carbon of around 200 ppm and 80 ppm respectively in the upper mantle. Lower mantle estimates are very uncertain, but 1900 ppm water and 2000 ppm C are plausible concentrations. Measurements of the density of supercritical water to 3 GPa demonstrate that this phase is less compressible than anticipated from the equations of state of Haar et al. or Saul and Wagner and is closer to predictions based on molecular dynamics simulations. Conversely, fugacity measurements on carbon dioxide to 7 GPa show that this fluid is more compressible than predicted from the MRK equation of state. The results imply that hydrates are relatively more stable and carbonates less stable at pressures greater than 5 GPa than would be predicted from simple extrapolation of the low pressure data. Nevertheless, carbonates remain extremely refractory phases within both the upper and lower mantle.

  7. Detection of segments with fetal QRS complex from abdominal maternal ECG recordings using support vector machine

    NASA Astrophysics Data System (ADS)

    Delgado, Juan A.; Altuve, Miguel; Nabhan Homsi, Masun

    2015-12-01

    This paper introduces a robust method based on the Support Vector Machine (SVM) algorithm to detect the presence of Fetal QRS (fQRS) complexes in electrocardiogram (ECG) recordings provided by the PhysioNet/CinC challenge 2013. ECG signals are first segmented into contiguous frames of 250 ms duration and then labeled into six classes. Fetal segments are tagged according to the position of the fQRS complex within each one. Next, segment feature extraction and dimensionality reduction are performed by applying principal component analysis to the Haar-wavelet transform coefficients. After that, two sub-datasets are generated to separate representative segments from atypical ones. The imbalanced class problem is dealt with by applying sampling without replacement on each sub-dataset. Finally, two SVMs are trained and cross-validated using the two balanced sub-datasets separately. Experimental results show that the proposed approach achieves high performance in fetal heartbeat detection, reaching up to 90.95% accuracy, 92.16% sensitivity, 88.51% specificity, 94.13% positive predictive value and 84.96% negative predictive value. A comparative study is also carried out to show the performance of two other machine learning algorithms for fQRS complex estimation, namely K-nearest neighbours and Bayesian networks.
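
    A compact version of the described chain (fixed 250 ms frames, Haar-wavelet coefficients, PCA, SVM) is sketched below with scikit-learn. The "abdominal ECG" frames are synthetic, the sampling rate and hyperparameters are assumptions, and the class-balancing and cross-validation steps of the paper are omitted.

      # Minimal sketch: 250 ms frames -> multilevel Haar coefficients -> PCA -> SVM.
      # The frames are synthetic and all parameters are illustrative assumptions.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      def haar_coeffs(frame, levels=4):
          """Concatenated multilevel Haar detail coefficients plus final approximation."""
          a = np.asarray(frame, dtype=float)
          out = []
          for _ in range(levels):
              a = a[: len(a) // 2 * 2].reshape(-1, 2)
              out.append((a[:, 0] - a[:, 1]) / np.sqrt(2.0))
              a = (a[:, 0] + a[:, 1]) / np.sqrt(2.0)
          out.append(a)
          return np.concatenate(out)

      rng = np.random.default_rng(5)
      def frame(has_fqrs):
          # 250 samples = 250 ms at an assumed 1 kHz sampling rate.
          x = 0.1 * rng.standard_normal(250)
          if has_fqrs:
              x[120:130] += 1.5 * np.hanning(10)    # crude fetal-QRS-like spike
          return x

      X = np.array([haar_coeffs(frame(i % 2 == 1)) for i in range(400)])
      y = np.array([i % 2 for i in range(400)])
      model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0))
      model.fit(X[:300], y[:300])
      print("test accuracy:", model.score(X[300:], y[300:]))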

  8. Robust prediction of three-dimensional spinal curve from back surface for non-invasive follow-up of scoliosis

    NASA Astrophysics Data System (ADS)

    Bergeron, Charles; Labelle, Hubert; Ronsky, Janet; Zernicke, Ronald

    2005-04-01

    Spinal curvature progression in scoliosis patients is monitored from X-rays, and this serial exposure to harmful radiation increases the incidence of developing cancer. With the aim of reducing the invasiveness of follow-up, this study seeks to relate the three-dimensional external surface to the internal geometry, assuming that the physiological links between them are sufficiently regular across patients. A database was used of 194 quasi-simultaneous acquisitions of two X-rays and a 3D laser scan of the entire trunk. Data were processed into sets of data points representing the trunk surface and spinal curve. Functional data analyses were performed using generalized Fourier series with a Haar basis and functional minimum noise fractions. The resulting coefficients became inputs and outputs, respectively, to an array of support vector regression (SVR) machines. SVR parameters were set based on theoretical results, and cross-validation increased confidence in the system's performance. Predicted lateral and frontal views of the spinal curve from the back surface demonstrated average L2-errors of 6.13 and 4.38 millimetres, respectively, across the test set; these compared favourably with the measurement error in the data. This constitutes a first robust prediction of the 3D spinal curve from external data using learning techniques.

  9. Active shape models incorporating isolated landmarks for medical image annotation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Stieltjes, Bram; Maier-Hein, Klaus H.

    2014-03-01

    Apart from their robustness in anatomic surface segmentation, purely surface-based 3D Active Shape Models lack the ability to automatically detect and annotate non-surface key points of interest. However, annotation of anatomic landmarks is desirable, as it yields additional anatomic and functional information. Moreover, landmark detection might help to further improve accuracy during ASM segmentation. We present an extension of surface-based 3D Active Shape Models incorporating isolated non-surface landmarks. Positions of isolated and surface landmarks are modeled jointly within a point distribution model (PDM). Isolated landmark appearance is described by a set of Haar-like features, supporting local landmark detection on the PDM estimates using a kNN classifier. Landmark detection was evaluated in a leave-one-out cross-validation on a reference dataset comprising 45 CT volumes of the human liver after shape space projection. Depending on the anatomical landmark to be detected, our experiments have shown a significant improvement in detection accuracy, in about 1/4 up to more than 1/2 of all test cases, compared to the position estimates delivered by the PDM. Our results encourage further research with regard to the combination of shape priors and machine learning for landmark detection within the Active Shape Model framework.

  10. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    NASA Astrophysics Data System (ADS)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time-consuming, and an automated solution would help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast at the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework based on Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases, and the performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of the cases.
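
    The texture step above rests on Haar features over image patches; these are rectangle-sum differences that an integral image makes computable in constant time. The sketch below builds a few two-rectangle features and trains scikit-learn's GradientBoostingClassifier on synthetic patches; the patch generator, feature layout and parameters are illustrative assumptions, not the paper's trained model.

      # Minimal sketch: integral image, two-rectangle Haar-like features, gradient
      # boosting. Patches are synthetic stand-ins for ultrasound texture samples.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      def integral_image(img):
          return np.cumsum(np.cumsum(np.asarray(img, dtype=float), axis=0), axis=1)

      def rect_sum(ii, r0, c0, r1, c1):
          """Sum of img[r0:r1, c0:c1] from an integral image (exclusive upper bounds)."""
          s = ii[r1 - 1, c1 - 1]
          if r0 > 0: s -= ii[r0 - 1, c1 - 1]
          if c0 > 0: s -= ii[r1 - 1, c0 - 1]
          if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
          return s

      def haar_edge_features(img):
          """A handful of two-rectangle (left minus right) Haar-like features."""
          ii = integral_image(img)
          h, w = img.shape
          return np.array([rect_sum(ii, 0, c - 4, h, c) - rect_sum(ii, 0, c, h, c + 4)
                           for c in range(4, w - 4, 4)])

      rng = np.random.default_rng(2)
      def patch(bright_left):
          p = rng.normal(0.5, 0.1, (24, 24))
          if bright_left:
              p[:, :12] += 0.3                      # crude bright/dark boundary contrast
          return p

      X = np.array([haar_edge_features(patch(i % 2 == 0)) for i in range(300)])
      y = np.array([i % 2 for i in range(300)])
      clf = GradientBoostingClassifier(random_state=0).fit(X[:200], y[:200])
      print("held-out accuracy:", clf.score(X[200:], y[200:]))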

  11. Spectral analysis of the microcirculatory laser Doppler signal at the Hoku acupuncture point.

    PubMed

    Hsiu, Hsin; Hsu, Wei-Chen; Huang, Shih-Ming; Hsu, Chia-Liang; Lin Wang, Yuh-Ying

    2009-05-01

    We aimed to characterize the frequency spectra of skin blood flow signals recorded at Hoku, an important acupuncture point (acupoint) in oriental medicine. Electrocardiogram (ECG) and laser Doppler flowmetry signals were measured simultaneously in 31 trials on seven volunteers aged 21-27 years. A four-level Haar wavelet transform was applied to the measured 20 min laser Doppler flowmetry (LDF) signals, and periodic oscillations with five characteristic frequency peaks were obtained within the following frequency bands: 0.0095-0.021 Hz, 0.021-0.052 Hz, 0.052-0.145 Hz, 0.145-0.6 Hz, and 0.6-1.6 Hz (defined as FR1-FR5), respectively. The relative energy contribution in FR3 was significantly larger at Hoku than at the two non-acupoints. Linear regression analysis revealed that the relative energy contribution in FR3 at Hoku significantly increased with the pulse pressure (R(2) = 0.48; P < 0.01 by F-test). Spectral analysis of the flux signal revealed that one of the major microcirculatory differences between acupoints and non-acupoints was in the different myogenic responses of their vascular beds. This information may aid the development of a method for the non-invasive study of the microcirculatory characteristics of the acupoint.

  12. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    NASA Astrophysics Data System (ADS)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.

  13. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capture and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.

  14. Study on a Biometric Authentication Model based on ECG using a Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Kim, Ho J.; Lim, Joon S.

    2018-03-01

    Traditional authentication methods use numbers or graphic passwords and thus involve the risk of loss or theft. Various studies are underway regarding biometric authentication because it uses the unique biometric data of a human being. Biometric authentication technology using the ECG involves signals that record the electrical stimuli of the heart. The ECG is difficult to manipulate and has the advantage that it enables unrestrained measurements from sensors attached to the skin. This study concerns biometric authentication methods using the neural network with weighted fuzzy membership functions (NEWFM). In the biometric authentication process, normalization and ensemble averaging are applied during preprocessing, characteristics are extracted using Haar wavelets, and a registration process called “training” is performed in the fuzzy neural network. In the experiment, biometric authentication was performed on 73 subjects in the Physionet Database. Between 10 and 40 ECG waveforms were tested for use in the registration process, and 15 waveforms were deemed the appropriate number for registration. One ECG waveform was used during the authentication stage to conduct the biometric authentication test. Upon testing the proposed biometric authentication method on the 73 subjects from the Physionet Database, the TAR was 98.32% and the FAR was 5.84%.

  15. A statistical-textural-features based approach for classification of solid drugs using surface microscopic images.

    PubMed

    Tahir, Fahima; Fahiem, Muhammad Abuzar

    2014-01-01

    The quality of pharmaceutical products plays an important role in the pharmaceutical industry as well as in our lives. Usage of defective tablets can be harmful to patients. In this research we propose a nondestructive method to identify defective and nondefective tablets using their surface morphology. Three different environmental factors (temperature, humidity and moisture) are analyzed to evaluate the performance of the proposed method. Multiple textural features are extracted from the surfaces of the defective and nondefective tablets. These textural features are the gray-level co-occurrence matrix, run-length matrix, histogram, autoregressive model and Haar wavelet. In total, 281 textural features are extracted from the images. We performed an analysis on all 281 features, the top 15 features, and the top 2 features. The top 15 features are extracted using three different feature reduction techniques: chi-square, gain ratio and relief-F. We used three different classifiers (support vector machine, K-nearest neighbors and naïve Bayes) to calculate the accuracy of the proposed method in two experiments, namely the leave-one-out cross-validation technique and train/test models. We tested each classifier against all selected features and then compared their results. In most of the cases the SVM performed better than the other two classifiers.

  16. Correlation and Return Interval Analysis of Tree Rings Based Temperature and Precipitation Reconstructions

    NASA Astrophysics Data System (ADS)

    Bunde, A.; Ludescher, J.; Luterbacher, J.; von Storch, H.

    2012-04-01

    We analyze tree-ring-based summer temperature and precipitation reconstructions from Central Europe covering the past 2500 y [1], by (i) autocorrelation functions, (ii) detrended fluctuation analysis (DFA2) and (iii) the Haar wavelet technique (WT2). We also study (iv) the PDFs of the return intervals for return periods of 5 y, 10 y, 20 y, and 40 y. All results provide evidence that the data cannot be described by an AR1 process, but are long-term correlated with a Hurst exponent H close to 1 for the summer temperature data and around 0.9 for summer precipitation. These results, however, are in agreement with neither observational data of the past two centuries nor millennium simulations with contemporary climate models, which both suggest H close to 0.65 for the temperature data and H close to 0.5 for the precipitation data. In particular, the strong contrast in precipitation (highly correlated for the reconstructed data, white noise for the observational and model data) raises concerns about tree-ring-based climate reconstructions, which will have to be taken into account in future investigations. [1] Büntgen, U., Tegel, W., Nicolussi, K., McCormick, M., Frank, D., Trouet, V., Kaplan, J.O., Herzig, F., Heussner, K.-U., Wanner, H., Luterbacher, J., and Esper, J., 2011: 2500 Years of European Climate Variability and Human Susceptibility. SCIENCE, 331, 578-582.

  17. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    PubMed Central

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most existing methods, such as the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt changes more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for inspecting useful information in all kinds of bioelectric time series signals. PMID:27413364
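
    For orientation, the sketch below shows the brute-force scan that BSTKS is designed to avoid: evaluate the two-sample KS statistic at every admissible split of the series and keep the maximizer. It uses scipy's ks_2samp on a synthetic signal; it is the slow baseline, not the binary-search-tree method of the paper.

      # Brute-force KS change-point scan: test every split and keep the largest KS
      # statistic. This is the slow baseline that BSTKS is designed to accelerate.
      import numpy as np
      from scipy.stats import ks_2samp

      def ks_change_point(x, min_seg=30):
          """Return (best split index, KS statistic, p-value) for a 1-D series x."""
          best = (None, -1.0, 1.0)
          for t in range(min_seg, len(x) - min_seg):
              res = ks_2samp(x[:t], x[t:])
              if res.statistic > best[1]:
                  best = (t, res.statistic, res.pvalue)
          return best

      rng = np.random.default_rng(11)
      signal = np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 1, 500)])  # change at t = 500
      print(ks_change_point(signal))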

  18. Wavelet detection of coherent structures in interplanetary magnetic flux ropes and its role in the intermittent turbulence

    NASA Astrophysics Data System (ADS)

    Muñoz, P. R.; Chian, A. C.

    2013-12-01

    We implement a method to detect coherent magnetic structures using the Haar discrete wavelet transform (Salem et al., ApJ 702, 537, 2009), and apply it to an event detected by Cluster at the turbulent boundary layer of an interplanetary magnetic flux rope. The wavelet method is able to detect magnetic coherent structures and extract main features of solar wind intermittent turbulence, such as the power spectral density and the scaling exponent of structure functions. Chian and Muñoz (ApJL 733, L34, 2011) investigated the relation between current sheets, turbulence, and magnetic reconnections at the leading edge of an interplanetary coronal mass ejection measured by Cluster upstream of the Earth's bow shock on 2005 January 21. We found observational evidence of two magnetically reconnected current sheets in the vicinity of a front magnetic cloud boundary layer, where the scaling exponent of structure functions of magnetic fluctuations exhibits multifractal behavior. Using the wavelet technique, we show that the current sheets associated with magnetic reconnection are part of the set of magnetic coherent structures responsible for multifractality. By removing them using a filtering criterion, it is possible to recover a self-similar scaling exponent predicted for homogeneous turbulence. Finally, we discuss an extension of the wavelet technique to study coherent structures in two-dimensional solar magnetograms.

  19. Text extraction from images in the wild using the Viola-Jones algorithm

    NASA Astrophysics Data System (ADS)

    Saabna, Raid M.; Zingboim, Eran

    2018-04-01

    Text localization and extraction is an important issue in modern applications of computer vision. Applications such as reading and translating text in the wild or from videos are among the many that can benefit from results in this field. In this work, we adopt the well-known Viola-Jones algorithm to enable text extraction and localization from images in the wild. Viola-Jones is a fast and efficient image-processing algorithm originally used for face detection. Based on some resemblance between text and face detection tasks in the wild, we have modified the Viola-Jones detector to find regions of interest where text may be located. In the proposed approach, modifications to the Haar-like features and a semi-automatic process for generating and manipulating the data set are presented to train the algorithm. Sliding windows of different sizes are used to scan the image for individual letters and letter clusters. A post-processing step combines the detected letters into words and removes false positives. The novelty of the presented approach is using the strengths of a modified Viola-Jones algorithm to identify many different objects representing different letters and clusters of similar letters and later combine them into words of varying lengths. Impressive results were obtained on the ICDAR contest data sets.
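
    The modified Haar-like features at the core of the detector reduce to a few box sums over an integral image; a minimal numpy sketch (the two-rectangle geometry is illustrative, not the paper's exact feature set):

        import numpy as np

        def integral_image(img):
            # Summed-area table with a zero row/column prepended so that any
            # box sum becomes an O(1) lookup.
            ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
            ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
            return ii

        def box_sum(ii, r0, c0, r1, c1):
            # Sum of img[r0:r1, c0:c1] from the integral image.
            return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

        def haar_two_rect(ii, r, c, h, w):
            # Two-rectangle Haar-like feature: left half minus right half of an
            # h x w window at (r, c); strong responses mark vertical contrast
            # edges such as character strokes.
            return (box_sum(ii, r, c, r + h, c + w // 2)
                    - box_sum(ii, r, c + w // 2, r + h, c + w))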

  20. UNCOVERING THE INTRINSIC VARIABILITY OF GAMMA-RAY BURSTS

    NASA Astrophysics Data System (ADS)

    Golkhou, V. Zach; Butler, Nathaniel R

    2014-08-01

    We develop a robust technique to determine the minimum variability timescale for gamma-ray burst (GRB) light curves, utilizing Haar wavelets. Our approach averages over the data for a given GRB, providing an aggregate measure of signal variation while also retaining sensitivity to narrow pulses within complicated time series. In contrast to previous studies using wavelets, which simply define the minimum timescale in reference to the measurement noise floor, our approach identifies the signature of temporally smooth features in the wavelet scaleogram and then additionally identifies a break in the scaleogram on longer timescales as a signature of a true, temporally unsmooth light curve feature or features. We apply our technique to the large sample of Swift GRB gamma-ray light curves and for the first time—due to the presence of a large number of GRBs with measured redshift—determine the distribution of minimum variability timescales in the source frame. We find a median minimum timescale for long-duration GRBs in the source frame of Δtmin = 0.5 s, with the shortest timescale found being on the order of 10 ms. This short timescale suggests a compact central engine (3000 km). We discuss further implications for the GRB fireball model and present a tantalizing correlation between the minimum timescale and redshift, which may in part be due to cosmological time dilation.
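
    A simplified stand-in for the Haar scaleogram described above, assuming a uniformly binned count light curve; the Poisson noise-floor subtraction and the break fitting that define the minimum timescale in the actual analysis are omitted.

        import numpy as np

        def haar_scaleogram(counts, dt, max_level=12):
            # Average absolute Haar fluctuation of a binned light curve as a
            # function of timescale: at each scale, adjacent block means are
            # differenced over non-overlapping windows and the mean |difference|
            # is recorded.  Smooth structure and a break toward genuine
            # variability appear at longer timescales.
            x = np.asarray(counts, dtype=float)
            scales, sigma = [], []
            for j in range(1, max_level + 1):
                half = 2 ** (j - 1)
                n = len(x) // (2 * half)
                if n < 4:
                    break
                blocks = x[:n * 2 * half].reshape(n, 2, half).mean(axis=2)
                coeff = blocks[:, 1] - blocks[:, 0]
                scales.append(2 * half * dt)
                sigma.append(np.mean(np.abs(coeff)))
            return np.array(scales), np.array(sigma)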

  1. The meaning of "significance" for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. 1969.

    PubMed

    de Groot, A D

    2014-05-01

    Adrianus Dingeman de Groot (1914-2006) was one of the most influential Dutch psychologists. He became famous for his work "Thought and Choice in Chess", but his main contribution was methodological--De Groot co-founded the Department of Psychological Methods at the University of Amsterdam (together with R. F. van Naerssen), founded one of the leading testing and assessment companies (CITO), and wrote the monograph "Methodology" that centers on the empirical-scientific cycle: observation-induction-deduction-testing-evaluation. Here we translate one of De Groot's early articles, published in 1956 in the Dutch journal Nederlands Tijdschrift voor de Psychologie en Haar Grensgebieden. This article is more topical now than it was almost 60 years ago. De Groot stresses the difference between exploratory and confirmatory ("hypothesis testing") research and argues that statistical inference is only sensible for the latter: "One 'is allowed' to apply statistical tests in exploratory research, just as long as one realizes that they do not have evidential impact". De Groot may have also been one of the first psychologists to argue explicitly for preregistration of experiments and the associated plan of statistical analysis. The appendix provides annotations that connect De Groot's arguments to the current-day debate on transparency and reproducibility in psychological science. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is important to detect abrupt changes quickly and efficiently in large-scale bioelectric signals. Currently, most of the existing methods, such as the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by a multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; finally, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt changes more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is helpful for inspecting all kinds of bioelectric time-series signals.

  3. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
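
    A heavily simplified 2-D sketch of regression voting with a random forest, assuming a placeholder features(img, p) descriptor (standing in for the randomized 3D Haar-like features) and in-memory training images and landmarks; it only illustrates the train-then-vote structure, not the authors' pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def train_voter(images, landmarks, features, n_samples=200, radius=15):
            # For sample points near each reference landmark, learn the
            # displacement that points back to the landmark.
            X, y, rng = [], [], np.random.default_rng(0)
            for img, lm in zip(images, landmarks):
                for _ in range(n_samples):
                    p = lm + rng.integers(-radius, radius + 1, size=2)
                    X.append(features(img, p))
                    y.append(lm - p)
                return RandomForestRegressor(n_estimators=50).fit(np.array(X), np.array(y))

        def vote(img, candidates, features, forest, shape):
            # Each candidate point casts a vote at its predicted landmark
            # position; the voting-map maximum is the position estimate.
            votes = np.zeros(shape)
            for p in candidates:
                d = forest.predict([features(img, p)])[0]
                q = np.clip(np.round(p + d).astype(int), 0, np.array(shape) - 1)
                votes[tuple(q)] += 1
            return np.unravel_index(np.argmax(votes), shape)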

  4. Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.

    PubMed

    Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu

    2016-07-19

    Vibroarthrographic (VAG) signals are useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds; interns can practice with the template database to shorten the training time for the diagnosis of OA. A knee sound signal was obtained using an innovative stethoscope device with a goniometer. Each knee sound signal was recorded with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data. The signal was Fourier transformed on the correlated frequency segment, and an inverse Fourier transform was performed to obtain the time-domain signal. A Haar wavelet transform was then applied. The median and the mean of the wavelet coefficients were used in the inverse transform to synthesize a signal for each KL category. The quality of the synthesized signal was assessed by a clinician. The sample signals were evaluated using the two algorithms (median and mean). The accuracy rate of the median coefficient algorithm (93 %) was better than that of the mean coefficient algorithm (88 %) in cross-validation by a clinician using the synthesized VAG signals. The artificial signals we synthesized have the potential to form a learning system for medical students, interns and paramedical personnel for the diagnosis of OA. Therefore, our method provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.
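
    The median-coefficient synthesis step can be sketched with pywt as follows, assuming equal-length, resampled signals per KL grade; replacing np.median with np.mean gives the mean variant compared in the paper.

        import numpy as np
        import pywt

        def synthesize_template(signals, level=5):
            # Decompose each example with a multilevel Haar wavelet transform,
            # take the median of the coefficients across examples at every
            # position in every band, and inverse-transform the medians to
            # obtain one template signal for the KL grade.
            decs = [pywt.wavedec(s, 'haar', level=level) for s in signals]
            median_coeffs = [np.median(np.stack(band), axis=0)
                             for band in zip(*decs)]
            return pywt.waverec(median_coeffs, 'haar')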

  5. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L¹ phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.

  6. Vertical Drop Testing and Analysis of the WASP Helicopter Skid Gear

    NASA Technical Reports Server (NTRS)

    Fuchs, Yvonne T.; Jackson, Karen E.

    2008-01-01

    Human occupant modeling and injury risk assessment have been identified as areas of research for improved prediction of rotorcraft crashworthiness within the NASA Aeronautics Program's Subsonic Rotary Wing Project. As part of this effort, an experimental program was conducted to assess the impact performance of a skid gear for use on the WASP kit-built helicopter, which is marketed by HeloWerks, Inc. of Hampton, Virginia. Test data from a drop test at an impact velocity of 8.4 feet-per-second were used to assess a finite element model of the skid gear test article. This assessment included human occupant analytic models developed for execution in LS-DYNA. The test article consisted of an aluminum skid gear mounted beneath a steel plate. A seating platform was attached to the upper surface of the steel plate, and two 95th percentile Hybrid III male Aerospace Anthropomorphic Test Devices (ATDs) were seated on the platform and secured using a four-point restraint system. The goal of the test-analysis correlation is to further the understanding of LS-DYNA ATD occupant models and responses in the vertical (or spinal) direction. By correlating human occupant experimental test data for a purely vertical impact with the LS-DYNA occupant responses, improved confidence in the use of these tools and better understanding of the limitations of the automotive-based occupant models for aerospace application can begin to be developed.

  7. Vertical Drop Testing and Analysis of the WASP Helicopter Skid Gear

    NASA Technical Reports Server (NTRS)

    Fuchs, Yvonne T.; Jackson, Karen E.

    2008-01-01

    Human occupant modeling and injury risk assessment have been identified as areas of research for improved prediction of rotorcraft crashworthiness within the NASA Aeronautics Program's Subsonic Rotary Wing Project. As part of this effort, an experimental program was conducted to assess the impact performance of a skid gear for use on the WASP kit-built helicopter, which is marketed by HeloWerks, Inc. of Hampton, Virginia. Test data from a drop test at an impact velocity of 8.4 feet-per-second were used to assess a finite element model of the skid gear test article. This assessment included human occupant analytic models developed for execution in LS-DYNA. The test article consisted of an aluminum skid gear mounted beneath a steel plate. A seating platform was attached to the upper surface of the steel plate, and two 95th percentile Hybrid III male Aerospace Anthropomorphic Test Devices (ATDs) were seated on the platform and secured using a four-point restraint system. The goal of the test-analysis correlation is to further the understanding of LS-DYNA ATD occupant models and responses in the vertical (or spinal) direction. By correlating human occupant experimental test data for a purely vertical impact with the LS-DYNA occupant responses, improved confidence in the use of these tools and better understanding of the limitations of the automotive-based occupant models for aerospace application can begin to be developed.

  8. Boosting Immune Responses Against Bacterial Pathogens: In Vitro Analysis of Immunomodulators (In Vitro Analyse van de Stimulerende Werking van Verschillende Stoffen op het Immuunsysteem)

    DTIC Science & Technology

    2007-07-01

    desmuramylpeptides in combination with chemically synthesized Toll-like receptor agonists synergistically induced production of interleukin-8 in a NOD2- and NOD1...biothreat agents may be an option; however, there is a broad range of biothreat agents, which may become even broader as a result of genetic engineering

  9. Machine learning for the automatic detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Fisher, Wendy D.

    In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. We achieve up to 97
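
    A generic sketch of the kind of Haar-wavelet denoising compared in Chapter 3 (soft thresholding with the universal threshold and a MAD noise estimate); the threshold rule is a common default, not necessarily the one used in the dissertation.

        import numpy as np
        import pywt

        def haar_denoise(x, level=4):
            # Multilevel Haar decomposition, soft-threshold the detail bands,
            # then reconstruct; sigma is estimated robustly from the finest
            # detail band via the median absolute deviation.
            coeffs = pywt.wavedec(np.asarray(x, dtype=float), 'haar', level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))
            coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
            return pywt.waverec(coeffs, 'haar')[:len(x)]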

  10. Identifying the multiscale impacts of crude oil price shocks on the stock market in China at the sector level

    NASA Astrophysics Data System (ADS)

    Huang, Shupei; An, Haizhong; Gao, Xiangyun; Huang, Xuan

    2015-09-01

    The aim of this research is to investigate the multiscale dynamic linkages between the crude oil price and the stock market in China at the sector level. First, the Haar à trous wavelet transform is implemented to extract multiscale information from the original time series. Furthermore, we incorporate a vector autoregression model to estimate the dynamic relationship pairing the Brent oil price and each sector stock index at each scale. There is strong evidence of bidirectional Granger causality relationships between most of the sector stock indices and the crude oil price in the short, medium and long terms, except for those in the health, utility and consumption sectors. In fact, the impacts of the crude oil price shocks vary across sectors and time horizons. More precisely, the energy, information, material and telecommunication sector stock indices respond to crude oil price shocks negatively in the short run and positively in the medium and long runs, whereas the finance sector responds positively over all three time horizons. Moreover, the Brent oil price shocks have a stronger influence on the stock indices of sectors other than the health, optional and utility sectors in the medium and long terms than in the short term. The results suggest that investment and policymaking decisions made over different time horizons should be based on the information gathered from the corresponding time scale.
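
    A minimal decomposition step using pywt's undecimated Haar transform as a stand-in for the Haar à trous transform; each detail band keeps the original length, so scale-matched oil and sector series can be passed to a VAR model. The series length must be a multiple of 2**level for pywt.swt.

        import numpy as np
        import pywt

        def haar_scales(series, level=4):
            # Undecimated Haar decomposition of a return series; returns one
            # detail series per scale (ordering follows the pywt convention).
            swc = pywt.swt(np.asarray(series, dtype=float), 'haar', level=level)
            return [d for _, d in swc]

        # e.g. fit a two-variable VAR per scale on (oil_scales[k], sector_scales[k])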

  11. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions
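
    The basic OpenCV building block referred to above (a pre-trained frontal-face Haar cascade applied to a live video stream) looks roughly like this; the camera index and the stock cascade file shipped with opencv-python are assumptions, and landmark localization and the OFS feedback loop are not shown.

        import cv2

        # load the stock frontal-face Haar cascade bundled with opencv-python
        cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
        cap = cv2.VideoCapture(0)                      # assumed camera index
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # mark detected face
            cv2.imshow('faces', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
        cv2.destroyAllWindows()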

  12. Combined high-/low-dose therapy with systemic glucocorticoids in severe forms of alopecia areata in childhood (German title: Kombinierte Hoch-/Niedrig-Dosis-Therapie mit systemischen Glukokortikoiden bei schweren Verlaufsformen der Alopecia areata im Kindesalter).

    PubMed

    Jahn-Bassler, Karin; Bauer, Wolfgang Michael; Karlhofer, Franz; Vossen, Matthias G; Stingl, Georg

    2017-01-01

    Severe forms of alopecia areata (AA) in childhood are therapeutically challenging because of the limited options. Systemic high-dose glucocorticoids show the fastest response rate, but relapses occur after discontinuation. Long-term high-dose use is not advisable because of the expected side effects. A continuous steroid maintenance therapy below the Cushing threshold dose after bolus therapy could suppress disease activity over the long term without side effects. In an open observational study, 13 children with severe forms of AA were enrolled. Seven children had AA totalis/universalis, and six had multifocal AA involving more than 50 % of the scalp. The treatment regimen started with prednisolone at 2 mg/kg body weight and was tapered within nine weeks to a maintenance dose below the individual Cushing threshold. The follow-up period was one to three years. We observed complete regrowth of the hair in 62 % of all cases. The mean time to response was 6.6 weeks, and the response was maintained with the maintenance therapy over the entire observation period. The only side effects observed were weight gain (1-3 kg) in all treated patients and mild steroid acne in 23 % of cases. Combined high-/low-dose therapy with systemic glucocorticoids using prednisolone showed a high, sustained response rate without significant side effects. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.

  13. SU(p,q) coherent states and a Gaussian de Finetti theorem

    NASA Astrophysics Data System (ADS)

    Leverrier, Anthony

    2018-04-01

    We prove a generalization of the quantum de Finetti theorem when the local space is an infinite-dimensional Fock space. In particular, instead of considering the action of the permutation group on n copies of that space, we consider the action of the unitary group U(n) on the creation operators of the n modes and define a natural generalization of the symmetric subspace as the space of states invariant under unitaries in U(n). Our first result is a complete characterization of this subspace, which turns out to be spanned by a family of generalized coherent states related to the special unitary group SU(p, q) of signature (p, q). More precisely, this construction yields a unitary representation of the noncompact simple real Lie group SU(p, q). We therefore find a dual unitary representation of the pair of groups U(n) and SU(p, q) on an n(p + q)-mode Fock space. The (Gaussian) SU(p, q) coherent states resolve the identity on the symmetric subspace, which implies a Gaussian de Finetti theorem stating that tracing over a few modes of a unitary-invariant state yields a state close to a mixture of Gaussian states. As an application of this de Finetti theorem, we show that the n × n upper-left submatrix of an m × m Haar-invariant unitary matrix is close in total variation distance to a matrix of independent normal variables if n³ = O(m).

  14. Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF

    PubMed Central

    Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto

    2015-01-01

    One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performance on the far-infrared (FIR) spectrum than on the visible one, even in daytime conditions, for pedestrian classification. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions of interest (ROI): in FIR images pedestrian shapes may vary over large scales, but heads usually appear as bright regions, so ROI are detected with a high recall rate using a hierarchical codebook of SURF features located in head regions. The second module performs pedestrian full-body classification using an SVM; it enhances precision at low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like AdaBoost cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors, trained on the FIR images. PMID:25871724

  15. Automatic detection of regions of interest in mammographic images

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ling, Haibin; Bakic, Predrag R.; Maidment, Andrew D. A.; Megalooikonomou, Vasileios

    2011-03-01

    This work is part of our ongoing study aimed at comparing the topology of anatomical branching structures with the underlying image texture. Detection of regions of interest (ROIs) in clinical breast images serves as the first step in the development of an automated system for image analysis and breast cancer diagnosis. In this paper, we have investigated machine learning approaches for the task of identifying ROIs with visible breast ductal trees in a given galactographic image. Specifically, we have developed a boosting-based framework using the AdaBoost algorithm in combination with Haar wavelet features for ROI detection. Twenty-eight clinical galactograms with expert-annotated ROIs were used for training. Positive samples were generated by resampling near the annotated ROIs, and negative samples were generated randomly by image decomposition. Each detected ROI candidate was given a confidence score. Candidate ROIs with spatial overlap were merged and their confidence scores combined. We compared three strategies for the elimination of false positives; the strategies differed in how they combined confidence scores: by summation, averaging, or selecting the maximum score. The strategies were compared based upon the spatial overlap with the annotated ROIs. Using 4-fold cross-validation with the annotated clinical galactographic images, the summation strategy showed the best performance, with a 75% detection rate. When combining the top two candidates, selection of the maximum score showed the best performance, with a 96% detection rate.
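
    A schematic of the boosting setup, pairing 2-D Haar wavelet coefficients as features with scikit-learn's AdaBoost (decision stumps by default); patch sizes, sampling and the confidence-combination strategies of the paper are not reproduced.

        import numpy as np
        import pywt
        from sklearn.ensemble import AdaBoostClassifier

        def haar_features(patch, level=3):
            # Flatten the 2-D Haar wavelet coefficients of an image patch into
            # a fixed-length feature vector (patches of equal size assumed).
            coeffs = pywt.wavedec2(np.asarray(patch, dtype=float), 'haar', level=level)
            arr, _ = pywt.coeffs_to_array(coeffs)
            return arr.ravel()

        def train_roi_classifier(pos_patches, neg_patches):
            X = np.array([haar_features(p) for p in list(pos_patches) + list(neg_patches)])
            y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
            clf = AdaBoostClassifier(n_estimators=200).fit(X, y)
            return clf      # clf.decision_function(X_new) serves as a confidence score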

  16. Automatic detection of apical roots in oral radiographs

    NASA Astrophysics Data System (ADS)

    Wu, Yi; Xie, Fangfang; Yang, Jie; Cheng, Erkang; Megalooikonomou, Vasileios; Ling, Haibin

    2012-03-01

    The apical root regions play an important role in the analysis and diagnosis of many oral diseases. Automatic detection of such regions is consequently the first step toward computer-aided diagnosis of these diseases. In this paper we propose an automatic method for periapical root region detection using state-of-the-art machine learning approaches. Specifically, we have adapted the AdaBoost classifier for apical root detection. One challenge in the task is the lack of training cases, especially diseased ones. To handle this problem, we boost the training set by including more root regions that are close to the annotated ones and decompose the original images to randomly generate negative samples. Based on these training samples, the AdaBoost algorithm in combination with Haar wavelets is used to train an apical root detector. The learned detector usually generates a large number of true and false positives. In order to reduce the number of false positives, a confidence score for each candidate detection is calculated for further purification. We first merge the detected regions by combining tightly overlapping candidate regions, and then we use the confidence scores from the AdaBoost detector to eliminate the false positives. The proposed method is evaluated on a dataset containing 39 annotated digitized oral X-ray images from 21 patients. The experimental results show that our approach can achieve promising detection accuracy.
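
    The merging of tightly overlapping candidate regions and the combination of their confidence scores can be sketched as a greedy pass over the detections; the overlap measure and the summation rule here are generic choices, not necessarily those of the paper.

        import numpy as np

        def iou(a, b):
            # Intersection-over-union of two boxes given as (x0, y0, x1, y1).
            ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
            ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / float(area(a) + area(b) - inter)

        def merge_detections(boxes, scores, overlap=0.5):
            # Greedily merge tightly overlapping candidates; a merged box keeps
            # the summed confidence, which can then be thresholded to discard
            # false positives.
            order = np.argsort(scores)[::-1]
            merged = []
            for i in order:
                for m in merged:
                    if iou(boxes[i], m['box']) > overlap:
                        m['score'] += scores[i]
                        break
                else:
                    merged.append({'box': boxes[i], 'score': scores[i]})
            return merged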

  17. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  18. Pedestrian detection from thermal images: A sparse representation based approach

    NASA Astrophysics Data System (ADS)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in a unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and an individual dictionary, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare it with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.
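
    A rough sketch of the histogram-of-sparse-codes idea using scikit-learn's dictionary learning and OMP encoder; patch extraction, the joint/individual dictionaries and the weighted multimodal fusion of the paper are not reproduced.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

        def learn_dictionary(patches, n_atoms=64):
            # Learn a dictionary from flattened thermal-image patches (rows).
            return MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0).fit(patches).components_

        def histogram_of_sparse_codes(patches, dictionary):
            # Encode each patch against the dictionary with OMP and pool the
            # absolute codes into one descriptor per detection window
            # (one bin per dictionary atom).
            codes = sparse_encode(patches, dictionary, algorithm='omp', n_nonzero_coefs=5)
            return np.abs(codes).sum(axis=0)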

  19. Feature Selection and Pedestrian Detection Based on Sparse Representation.

    PubMed

    Yao, Shihong; Wang, Tao; Shen, Weiming; Pan, Shaoming; Chong, Yanwen; Ding, Fei

    2015-01-01

    Pedestrian detection research has largely been devoted to the extraction of effective pedestrian features, which has become one of the obstacles in pedestrian detection applications given the variety of pedestrian features and their large dimension. Based on a theoretical analysis of six frequently used features, SIFT, SURF, Haar, HOG, LBP and LSS, and their comparison with experimental results, this paper screens out sparse feature subsets via sparse representation to investigate whether the sparse subsets have the same description abilities and which features are most stable. When any two of the six features are fused, the fusion feature is sparsely represented to obtain its important components. Sparse subsets of the fusion features can be rapidly generated by avoiding calculation of the corresponding index of dimension numbers of these feature descriptors; thus, the calculation speed of the feature dimension reduction is improved and the pedestrian detection time is reduced. Experimental results show that sparse feature subsets are capable of keeping the important components of these six feature descriptors. The sparse features of HOG and LSS possess the same description ability and consume less time compared with their full features. The ratios of the sparse feature subsets of HOG and LSS to their full sets are the highest among the six, so these two features best describe the characteristics of the pedestrian, and the sparse feature subsets of the HOG-LSS combination show better distinguishing ability and parsimony.

  20. Developing a multi-Kinect-system for monitoring in dairy cows: object recognition and surface analysis using wavelets.

    PubMed

    Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W

    2016-09-01

    Camera-based systems in dairy cattle have been studied intensively over the last years. In contrast to this study, previous work presented single-camera systems with a limited range of applications, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduces the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method is explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts, reconstructed from wavelet decompositions using the Haar and the biorthogonal 1.5 wavelets, were statistically analyzed with regard to the effects of image fore- or background and of cows' or persons' surface. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of the Area Under the receiver operating characteristic Curve (AUC). Classification by species showed maximal AUC values of 0.69.
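
    The high-frequency reconstruction that the per-pixel classifiers operate on can be sketched with pywt as below; 'haar' and 'bior1.5' are the two wavelets compared in the study, while the decomposition level and the squared-magnitude feature are illustrative choices.

        import numpy as np
        import pywt

        def highfreq_energy(depth_map, wavelet='haar', level=2):
            # Zero out the approximation coefficients of a 2-D wavelet
            # decomposition and reconstruct, keeping only the high-frequency
            # part; the local squared magnitude can feed a per-pixel classifier.
            coeffs = pywt.wavedec2(np.asarray(depth_map, dtype=float), wavelet, level=level)
            coeffs[0] = np.zeros_like(coeffs[0])
            detail = pywt.waverec2(coeffs, wavelet)
            return detail[:depth_map.shape[0], :depth_map.shape[1]] ** 2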

  1. Application of fast Fourier transform cross-correlation and mass spectrometry data for accurate alignment of chromatograms.

    PubMed

    Zheng, Yi-Bao; Zhang, Zhi-Min; Liang, Yi-Zeng; Zhan, De-Jian; Huang, Jian-Hua; Yun, Yong-Huan; Xie, Hua-Lin

    2013-04-19

    Chromatography has been established as one of the most important analytical methods in the modern analytical laboratory. However, preprocessing of the chromatograms, especially peak alignment, is usually a time-consuming task prior to extracting useful information from the datasets, because of small unavoidable differences in the experimental conditions caused by minor changes and drift. Most alignment algorithms are performed on reduced datasets using only the detected peaks in the chromatograms, which means a loss of data and introduces the problem of extracting peak data from the chromatographic profiles. These disadvantages can be overcome by using the full chromatographic information generated by hyphenated chromatographic instruments. A new alignment algorithm called CAMS (Chromatogram Alignment via Mass Spectra) is presented here to correct the retention time shifts among chromatograms accurately and rapidly. In this report, peaks of each chromatogram were detected based on the Continuous Wavelet Transform (CWT) with the Haar wavelet and were aligned against the reference chromatogram via the correlation of mass spectra. The alignment procedure was accelerated by Fast Fourier Transform cross-correlation (FFT cross-correlation). This approach has been compared with several well-known alignment methods on real chromatographic datasets, which demonstrates that CAMS can preserve the shape of peaks and achieve a high-quality alignment result. Furthermore, the CAMS method was implemented in the Matlab language and is available as an open source package at http://www.github.com/matchcoder/CAMS. Copyright © 2013. Published by Elsevier B.V.
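
    The FFT cross-correlation acceleration step can be sketched with scipy as follows; it estimates a single global shift between two traces and is only the speed-critical ingredient, not the mass-spectra-driven CAMS alignment itself.

        import numpy as np
        from scipy.signal import correlate, correlation_lags

        def fft_xcorr_shift(reference, target):
            # Estimate the retention-time shift (in sampling points) between a
            # reference trace and a target trace via FFT-accelerated
            # cross-correlation of the mean-subtracted signals.
            ref = reference - np.mean(reference)
            tar = target - np.mean(target)
            xc = correlate(ref, tar, mode='full', method='fft')
            lags = correlation_lags(len(ref), len(tar), mode='full')
            return lags[np.argmax(xc)]

        # np.roll(target, fft_xcorr_shift(reference, target)) roughly aligns the traces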

  2. The Berlin astronomer - Life and works of Johann Elert Bode (1747-1826) (German Title: Der Berliner Astronom - Leben und Werk von Johann Elert Bode (1747-1826) )

    NASA Astrophysics Data System (ADS)

    Schwemin, Friedhelm

    Johann Elert Bode (1747-1826), long-time director of Berlin Observatory, earned his merits by editing the “Astronomisches Jahrbuch” for many years, producing an immaculate star atlas, and writing a series of popular books. Today, astronomers still know the “Titius-Bode law” of planetary distances, which had been publicized by him. The author traces the life of this Hamburg-born scholar. He analyzes his works and tries to determine his place in the history of astronomy. The volume comprises texts of original documents from Bode's life, a bibliography of his works, as well as numerous historical illustrations, often published here for the first time.

  3. Über den Beitrag von Heinrich Bruns zur theoretischen geometrischen Optik - unter Berücksichtigung seines Briefwechsels mit Wissenschaftlern der Zeiss-Werke in Jena 1888 - 1893.

    NASA Astrophysics Data System (ADS)

    Ilgauds, H.-J.; Münzel, G.

    Heinrich Bruns, director of the Leipzig University Observatory, was working on theoretical geometrical optics, and applied this to practical questions. His correspondence with opticians of the Zeiss Company in Jena gives evidence of their mutual regard and inspiration.

  4. Generic pure quantum states as steady states of quasi-local dissipative dynamics

    NASA Astrophysics Data System (ADS)

    Karuvade, Salini; Johnson, Peter D.; Ticozzi, Francesco; Viola, Lorenza

    2018-04-01

    We investigate whether a generic pure state on a multipartite quantum system can be the unique asymptotic steady state of locality-constrained purely dissipative Markovian dynamics. In the tripartite setting, we show that the problem is equivalent to characterizing the solution space of a set of linear equations and establish that the set of pure states obeying the above property has either measure zero or measure one, solely depending on the subsystems’ dimension. A complete analytical characterization is given when the central subsystem is a qubit. In the N-partite case, we provide conditions on the subsystems’ size and the nature of the locality constraint, under which random pure states cannot be quasi-locally stabilized generically. Also, allowing for the possibility to approximately stabilize entangled pure states that cannot be exact steady states in settings where stabilizability is generic, our results offer insights into the extent to which random pure states may arise as unique ground states of frustration-free parent Hamiltonians. We further argue that, to a high probability, pure quantum states sampled from a t-design enjoy the same stabilizability properties of Haar-random ones as long as suitable dimension constraints are obeyed and t is sufficiently large. Lastly, we demonstrate a connection between the tasks of quasi-local state stabilization and unique state reconstruction from local tomographic information, and provide a constructive procedure for determining a generic N-partite pure state based only on knowledge of the support of any two of the reduced density matrices of about half the parties, improving over existing results.

  5. Planetary boundary layer height from CALIOP compared to radiosonde over China

    NASA Astrophysics Data System (ADS)

    Zhang, Wanchun; Guo, Jianping; Miao, Yucong; Liu, Huan; Zhang, Yong; Li, Zhengqiang; Zhai, Panmao

    2016-08-01

    Accurate estimation of the planetary boundary layer height (PBLH) is key to air quality prediction, weather forecasting, and the assessment of regional climate change. PBLH retrieval from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) is expected to complement ground-based measurements due to the broad spatial coverage of satellites. In this study, CALIOP PBLHs are derived from a combination of Haar wavelet and maximum variance techniques, and are further validated against PBLHs estimated from ground-based lidar at Beijing and Jinhua. Correlation coefficients between PBLHs from ground- and satellite-based lidars are 0.59 at Beijing and 0.65 at Jinhua. Also, the PBLH climatology from CALIOP and radiosondes is compiled over China for the period from 2011 to 2014. The maximum CALIOP-derived PBLH is seen in summer, compared to lower values in the other seasons. Three matchup scenarios are proposed according to the position of each radiosonde site relative to its closest CALIPSO ground tracks. For each scenario, intercomparisons were performed between CALIOP- and radiosonde-derived PBLHs, and scenario 2 is found to be better than the other scenarios using the difference as the criterion. In the early summer afternoon, over 70 % of the total radiosonde sites have PBLH values ranging from 1.6 to 2.0 km. Overall, CALIOP-derived PBLHs are in good agreement with radiosonde-derived PBLHs. To our knowledge, this study is the first intercomparison of PBLH on a large scale using the radiosonde network of China, shedding important light on the data quality of initial CALIOP-derived PBLH results.
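
    A textbook sketch of the Haar wavelet covariance transform used for boundary layer detection from a single attenuated backscatter profile; the dilation a, the profile gridding and the maximum-variance refinement of the actual CALIOP processing are assumptions or omissions.

        import numpy as np

        def wavelet_covariance_transform(backscatter, z, a):
            # Haar wavelet covariance transform W(a, b) of a backscatter
            # profile: large W marks a sharp decrease of backscatter with
            # height; the altitude of the maximum is a PBLH estimate.
            dz = z[1] - z[0]
            half = int(a / (2 * dz))
            W = np.zeros_like(z, dtype=float)
            for i in range(half, len(z) - half):
                below = backscatter[i - half:i].sum()
                above = backscatter[i:i + half].sum()
                W[i] = (below - above) * dz / a      # +1 below b, -1 above b Haar kernel
            return W

        # pblh = z[np.argmax(wavelet_covariance_transform(beta, z, a=300.0))]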

  6. Fractal density modeling of crustal heterogeneity from the KTB deep hole

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2017-03-01

    Fractal and multifractal concepts have significantly enlightened our understanding of crustal heterogeneity. Much attention has focused on the 1/f scaling nature of the physicochemical heterogeneity of the Earth's crust from the fractal-increment perspective. In this study, a fractal density model, from the fractal-clustering point of view, is used to characterize the scaling behavior of heterogeneous sources recorded at the German Continental Deep Drilling Program (KTB) main hole; a special contribution is the local and global multifractal analysis revisited using the Haar wavelet transform (HWT). Fractal density modeling of mass accumulation generalizes the unit of rock density from integer dimensions (e.g., g/cm³) to real dimensions (e.g., g/cm^α), so that crustal heterogeneities with respect to source accumulation are quantified by the singularity strength of fractal density in α-dimensional space. From that perspective, we find that the bulk densities of metamorphic rocks exhibit fractal properties but have a weak multifractality, decreasing with depth. The multiscaling nature of the chemical logs is also evidenced, and the observed distinct fractal laws for mineral contents are related to their different geochemical behaviors within a complex lithological context. Accordingly, scaling distributions of mineral contents are recognized as a main contributor to the multifractal nature of heterogeneous density for low-porosity crystalline rocks. This finally allows us to use the de Wijs cascade process to explain the mechanism of fractal density. In practice, the proposed local singularity analysis based on the HWT is suggested as an attractive high-pass filter to amplify weak signatures of well logs as well as to delineate microlithological changes.

  7. Automatic Scoring of Multiple Semantic Attributes With Multi-Task Feature Leverage: A Study on Pulmonary Nodules in CT Images.

    PubMed

    Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng

    2017-03-01

    The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models, a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We consider that there may exist relations among semantic features such as "spiculation", "texture" and "margin" that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. the 9 semantic features by 12 radiologists from several institutes in the U.S.A. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, with cross-validation evaluation schemes on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decisions and management. Meanwhile, the capability of automatically associating medical image contents with clinical semantic terms may also assist the development of medical search engines.

  8. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    PubMed

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    The electroencephalogram, shortly termed EEG, is considered fundamental for the assessment of neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The proposed study makes use of wavelet packet based log energy and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal and epileptic EEG recordings, were considered. An adaptive Wiener filter was initially applied to remove the 50 Hz power line noise from the raw EEG recordings. Raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then a wavelet packet decomposition using the Haar wavelet with five levels was introduced, and two entropies, log energy and norm, were estimated and applied to the REN classifier to perform binary classification. The non-linear Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70 % for normal-pre-ictal, 99.70 % for normal-epileptic and 99.85 % for pre-ictal-epileptic.
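
    The two features can be sketched with pywt's wavelet packets as below; the entropy formulas follow the common MATLAB wentropy conventions (with p = 1.1 for the norm entropy), which are assumed here rather than taken from the paper.

        import numpy as np
        import pywt

        def wavelet_packet_entropies(segment, level=5):
            # Log-energy and norm entropies of a 1-s EEG segment computed from
            # the terminal nodes of a 5-level Haar wavelet packet decomposition.
            wp = pywt.WaveletPacket(data=segment, wavelet='haar', maxlevel=level)
            coeffs = np.concatenate([node.data for node in wp.get_level(level, order='natural')])
            log_energy = np.sum(np.log(coeffs ** 2 + 1e-12))   # log-energy entropy
            norm_entropy = np.sum(np.abs(coeffs) ** 1.1)       # norm entropy, p = 1.1 assumed
            return log_energy, norm_entropy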

  9. Quantum Key Recycling with 8-state encoding (The Quantum One-Time Pad is more interesting than we thought)

    NASA Astrophysics Data System (ADS)

    Škorić, Boris; de Vries, Manon

    Perfect encryption of quantum states using the Quantum One-Time Pad (QOTP) requires two classical key bits per qubit. Almost-perfect encryption, with information-theoretic security, requires only slightly more than 1. We slightly improve lower bounds on the key length. We show that key length n + 2 log(1/ε) suffices to encrypt n qubits in such a way that the cipherstate's L1-distance from uniformity is upper bounded by ε. For a stricter security definition involving the ∞-norm, we prove a sufficient key length of n + log n + 2 log(1/ε) + 1 + (1/n) log(1/δ) + log(ln 2/(1 − ε)), where δ is a small probability of failure. Our proof uses Pauli operators, whereas previous results on the ∞-norm needed Haar measure sampling. We show how to QOTP-encrypt classical plaintext in a nontrivial way: we encode a plaintext bit as the vector ±(1,1,1)/√3 on the Bloch sphere. Applying the Pauli encryption operators results in eight possible cipherstates which are equally spread out on the Bloch sphere. This encoding, especially when combined with the half-keylength option of QOTP, has advantages over 4-state and 6-state encoding in applications such as Quantum Key Recycling (QKR) and Unclonable Encryption (UE). We propose a key recycling scheme that is more efficient and can tolerate more noise than a recent scheme by Fehr and Salvail. For 8-state QOTP encryption with pseudorandom keys, we do a statistical analysis of the cipherstate eigenvalues. We present numerics up to nine qubits.

  10. ECG feature extraction and disease diagnosis.

    PubMed

    Bhyri, Channappa; Hamde, S T; Waghmare, L M

    2011-01-01

    An important factor to consider when using findings on electrocardiograms for clinical decision making is that the waveforms are influenced by normal physiological and technical factors as well as by pathophysiological factors. In this paper, we propose a method for feature extraction and heart disease diagnosis using the wavelet transform (WT) technique and LabVIEW (Laboratory Virtual Instrument Engineering Workbench). LabVIEW signal processing tools are used to denoise the signal before applying the developed algorithm for feature extraction. First, we developed an algorithm for R-peak detection using the Haar wavelet. After 4th-level decomposition of the ECG signal, the detail coefficients are squared and the standard deviation of the squared detail coefficients is used as the threshold for the detection of R-peaks. Second, we used the Daubechies (db6) wavelet for the low-resolution signals. After cross-checking the R-peak location in the 4th-level low-resolution signal of the Daubechies wavelet, P waves and T waves are detected. Other features of diagnostic importance, mainly heart rate, R-wave width, Q-wave width, T-wave amplitude and duration, ST segment and frontal plane axis, are also extracted, and a scoring pattern is applied for the purpose of heart disease diagnosis. In this study, detection of tachycardia, bradycardia, left ventricular hypertrophy, right ventricular hypertrophy and myocardial infarction has been considered. In this work, the CSE ECG database, which contains 5000 samples recorded at a sampling frequency of 500 Hz, and the ECG database created by the S.G.G.S. Institute of Engineering and Technology, Nanded (Maharashtra) have been used.
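
    The R-peak detection step described above can be sketched as follows; the mapping of level-4 detail coefficients back to sample indices is approximate and refractory-period handling is omitted.

        import numpy as np
        import pywt

        def detect_r_peaks(ecg, fs=500):
            # 4-level Haar decomposition, square the level-4 detail
            # coefficients, threshold at their standard deviation, and refine
            # each detection to the local amplitude maximum of the raw ECG.
            cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(np.asarray(ecg, dtype=float), 'haar', level=4)
            d2 = cD4 ** 2
            candidates = np.where(d2 > d2.std())[0]
            peaks = []
            for i in candidates:
                lo, hi = i * 16, min((i + 1) * 16, len(ecg))   # each cD4 sample spans 2**4 raw samples
                peaks.append(lo + int(np.argmax(np.abs(ecg[lo:hi]))))
            return np.array(sorted(set(peaks)))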

  11. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward

    PubMed Central

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios

    2014-01-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408

  12. An AdaBoost Based Approach to Automatic Classification and Detection of Buildings Footprints, Vegetation Areas and Roads from Satellite Images

    NASA Astrophysics Data System (ADS)

    Gonulalan, Cansu

    In recent years, there has been an increasing demand for applications that monitor land-use-related targets using remote sensing images. Advances in remote sensing satellites give rise to research in this area. Many applications, ranging from urban growth planning to homeland security, have already used algorithms for automated object recognition from remote sensing imagery. However, they still have problems, such as low detection accuracy and algorithms tailored to one specific area. In this thesis, we focus on an automatic approach to classify and detect building footprints, road networks and vegetation areas. The automatic interpretation of visual data is a comprehensive task in the computer vision field, and machine learning approaches improve the capability of classification in an intelligent way. We propose a method with high accuracy in detection and classification. Multi-class classification is developed for detecting multiple objects. We present an AdaBoost-based approach along with the supervised learning algorithm. The combination of AdaBoost with an "attentional cascade" is adopted from Viola and Jones [1]. This combination decreases the computation time and enables real-time applications. For the feature extraction step, our contribution is to combine Haar-like features that include corner, rectangle and Gabor features. Among all features, AdaBoost selects only critical features and generates an extremely efficient cascade-structured classifier. Finally, we present and evaluate our experimental results. The overall system is tested and high detection performance is achieved. The precision rate of the final multi-class classifier is over 98%.

  13. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward.

    PubMed

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam

    2014-10-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP are not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-reward trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was lower in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. Copyright © 2014 the American Physiological Society.
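
    A hedged illustration of the general idea of adding Haar wavelet coefficients of the recent reward history to a decoding model; the window length, the decoder choice and the simulated firing rates below are assumptions made for the sketch, not the authors' actual setup.

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, history_len = 400, 8              # 8-trial reward history (dyadic length for Haar)

rewards = rng.choice([0.0, 1.0], size=n_trials)       # small vs. large reward per trial
# Simulated firing rate: depends on the current reward and (weakly) on recent history.
history = np.array([rewards[max(0, t - history_len):t].sum() for t in range(n_trials)])
rate = 5 + 4 * rewards + 0.8 * history + rng.normal(0, 1, n_trials)

def history_haar_features(t):
    # Haar decomposition of the last `history_len` rewards captures reward history
    # at several timescales (single trials up to the whole window).
    window = np.zeros(history_len)
    past = rewards[max(0, t - history_len):t]
    window[history_len - len(past):] = past
    return np.concatenate(pywt.wavedec(window, "haar"))

X_rate_only = rate.reshape(-1, 1)
X_augmented = np.column_stack(
    [rate, np.vstack([history_haar_features(t) for t in range(n_trials)])])

y = rewards    # decode the current reward size from activity (plus history features)
for name, X in [("rate only", X_rate_only), ("rate + Haar history", X_augmented)]:
    acc = LogisticRegression(max_iter=1000).fit(X[:300], y[:300]).score(X[300:], y[300:])
    print(f"{name}: held-out accuracy = {acc:.2f}")
```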

  14. [Standard of integration management at company level and its auditing].

    PubMed

    Flach, T; Hetzel, C; Mozdzanowski, M; Schian, H-M

    2006-10-01

    Responsibility at company level for the employment of workers with health-related problems or disabilities has increased, inter alia because of integration management at company level according to section 84 (2) of the German Social Code Book IX. Although several recommendations exist, no standard is available for auditing and certification. Such a standard could be a basis for granting premiums according to section 84 (3) of Book IX of the German Social Code. AUDIT AND CERTIFICATION: One product of the international "disability management" movement is the "Consensus Based Disability Management Audit" (CBDMA). The audit is a systematic and independent measurement of the effectiveness of integration management at company level. The goals of the CBDMA are to give evidence of the quality of the integration management implemented, to identify opportunities for improvement and to recommend appropriate corrective and preventive action. In May 2006, Ford-Werke GmbH in Germany, with about 23 900 employees, became the first company in Europe to have its integration management audited and certified. STANDARD OF INTEGRATION MANAGEMENT AT COMPANY LEVEL: In dialogue with corporate practitioners, the international standard of the CBDMA has been adapted, completed and verified with regard to its practicability. Process orientation is the key approach, and the structure is similar to DIN EN ISO 9001:2000. Its structure is as follows: (1) management-labour responsibility (goals and objectives, program planning, management-labour review), (2) management of resources (disability manager and DM team, employees' participation, cooperation with external partners, infrastructure), (3) communication (internal and external public relations), (4) case management (identifying cases, contact, situation analysis, planning actions, implementing actions and monitoring, process and outcome evaluation), (5) analysis and improvement (analysis and program evaluation), (6) documentation (manual, records).

  15. Local Random Quantum Circuits are Approximate Polynomial-Designs

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.; Horodecki, Michał

    2016-09-01

    We prove that local random quantum circuits acting on n qubits composed of O(t^10 n^2) many nearest neighbor two-qubit gates form an approximate unitary t-design. Previously it was unknown whether random quantum circuits were a t-design for any t > 3. The proof is based on an interplay of techniques from quantum many-body theory, representation theory, and the theory of Markov chains. In particular we employ a result of Nachtergaele for lower bounding the spectral gap of frustration-free quantum local Hamiltonians; a quasi-orthogonality property of permutation matrices; a result of Oliveira which extends to the unitary group the path-coupling method for bounding the mixing time of random walks; and a result of Bourgain and Gamburd showing that dense subgroups of the special unitary group, composed of elements with algebraic entries, are ∞-copy tensor-product expanders. We also consider pseudo-randomness properties of local random quantum circuits of small depth and prove that circuits of depth O(t^10 n) constitute a quantum t-copy tensor-product expander. The proof also rests on techniques from quantum many-body theory, in particular on the detectability lemma of Aharonov, Arad, Landau, and Vazirani. We give applications of the results to cryptography, equilibration of closed quantum dynamics, and the generation of topological order. In particular we show the following pseudo-randomness property of generic quantum circuits: Almost every circuit U of size O(n^k) on n qubits cannot be distinguished from a Haar uniform unitary by circuits of size O(n^((k-9)/11)) that are given oracle access to U.

  16. Worldlines and worldsheets for non-abelian lattice field theories: Abelian color fluxes and Abelian color cycles

    NASA Astrophysics Data System (ADS)

    Gattringer, Christof; Göschl, Daniel; Marchis, Carlotta

    2018-03-01

    We discuss recent developments for exact reformulations of lattice field theories in terms of worldlines and worldsheets. In particular we focus on a strategy which is applicable also to non-abelian theories: traces and matrix/vector products are written as explicit sums over color indices and a dual variable is introduced for each individual term. These dual variables correspond to fluxes in both space-time and color for matter fields (Abelian color fluxes), or to fluxes in color space around space-time plaquettes for gauge fields (Abelian color cycles). Subsequently all original degrees of freedom, i.e., matter fields and gauge links, can be integrated out. Integrating over complex phases of matter fields gives rise to constraints that enforce conservation of matter flux on all sites. Integrating out phases of gauge fields enforces vanishing combined flux of matter and gauge degrees of freedom. The constraints give rise to a system of worldlines and worldsheets. Integrating over the factors that are not phases (e.g., radial degrees of freedom or contributions from the Haar measure) generates additional weight factors that together with the constraints implement the full symmetry of the conventional formulation, now in the language of worldlines and worldsheets. We discuss the Abelian color flux and Abelian color cycle strategies for three examples: the SU(2) principal chiral model with chemical potential coupled to two of the Noether charges, SU(2) lattice gauge theory coupled to staggered fermions, as well as full lattice QCD with staggered fermions. For the principal chiral model we present some simulation results that illustrate properties of the worldline dynamics at finite chemical potentials.

  17. DIY EOS: Experimentally Validated Equations of State for Planetary Fluids to GPa Pressures, Tools for Understanding Planetary Processes and Habitability

    NASA Astrophysics Data System (ADS)

    Vance, Steven; Brown, J. Michael; Bollengier, Olivier

    2016-10-01

    Sound speeds are fundamental to seismology, and provide a path allowing the accurate determination of thermodynamic potentials. Prior equations of state (EOS) for pure ammonia (Haar and Gallagher 1978, Tillner-Roth et al. 1993) are based primarily on measured densities and heat capacities. Sound speeds, not included in the fitting, are poorly predicted. We couple recent high-pressure sound speed data with prior densities and heat capacities to generate a new equation of state. Our representation fits both the earlier lower-pressure work and the measured sound speeds to 4 GPa and 700 K, as well as the Hugoniot to 70 GPa and 6000 K. In contrast to the damped polynomial representation previously used, our equation of state is based on local basis functions in the form of tensor b-splines. Regularization allows the thermodynamic surface to be continued into regimes poorly sampled by experiments. We discuss application of this framework for aqueous equations of state validated by experimental measurements. Preliminary equations of state have been prepared applying the local basis function methodology to aqueous NH3, Mg2SO4, NaCl, and Na2SO4. We describe its use for developing new equations of state, and provide some applications of the new thermodynamic data to the interior structures of gas giant planets and ocean worlds. References: L. Haar and J. S. Gallagher. Thermodynamic properties of ammonia. American Chemical Society and the American Institute of Physics for the National Bureau of Standards, 1978. R. Tillner-Roth, F. Harms-Watzenberg, and H. Baehr. Eine neue Fundamentalgleichung für Ammoniak. DKV Tagungsbericht, 20:67-67, 1993.
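
    As an illustration of the tensor-product spline idea only (SciPy's RectBivariateSpline stands in for the authors' regularized local-basis-function framework, and the "sound speed" surface below is synthetic), a smoothing spline fit also gives direct access to derivatives of the thermodynamic surface:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic grid of pressure (MPa) and temperature (K) with a smooth, made-up surface.
P = np.linspace(0.1, 4000.0, 60)          # up to ~4 GPa
T = np.linspace(250.0, 700.0, 50)
PP, TT = np.meshgrid(P, T, indexing="ij")
sound_speed = 1400 + 0.35 * PP - 0.8 * (TT - 300) + 1e-5 * PP * TT

# Fit a smoothing tensor-product B-spline; s > 0 regularizes against the added noise.
noisy = sound_speed + np.random.default_rng(0).normal(0, 5, sound_speed.shape)
spline = RectBivariateSpline(P, T, noisy, kx=3, ky=3, s=len(P) * len(T) * 25.0)

# Evaluate the surface and a derivative (dc/dP at fixed T) at an arbitrary point.
print("c(1 GPa, 300 K)   =", spline(1000.0, 300.0).item())
print("dc/dP(1 GPa, 300 K) =", spline(1000.0, 300.0, dx=1).item())
```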

  18. Wavelets analysis for differentiating solid, non-macroscopic fat containing, enhancing renal masses: a pilot study

    NASA Astrophysics Data System (ADS)

    Varghese, Bino; Hwang, Darryl; Mohamed, Passant; Cen, Steven; Deng, Christopher; Chang, Michael; Duddalwar, Vinay

    2017-11-01

    Purpose: To evaluate the potential use of wavelet analysis in discriminating benign and malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC); 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). Here, the Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD) and variance), derived from 3 levels of image decomposition in 3 directions (horizontal, vertical and diagonal), were used to quantify tumor texture. The independent t-test or the Wilcoxon rank sum test, depending on data normality, was used as exploratory univariate analysis. Stepwise logistic regression and receiver operating characteristic (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 out of 6 wavelet-based texture measures (all except homogeneity) were higher for malignant tumors than for benign ones, when accounting for individual texture direction. Homogeneity was consistently lower in malignant than in benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed a significant (p<0.05) difference between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction), extracted from the excretory and pre-contrast phases respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared toward reliably characterizing and stratifying renal masses.
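
    A brief sketch of Haar-based texture feature extraction of the kind described above; the ROI is synthetic and the metric definitions below are common simple choices (the study's exact homogeneity and contrast definitions are not reproduced).

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
roi = rng.normal(100, 20, size=(64, 64))          # stand-in for a segmented tumor ROI

# Three-level 2-D Haar decomposition: [cA3, (cH3, cV3, cD3), ..., (cH1, cV1, cD1)].
coeffs = pywt.wavedec2(roi, "haar", level=3)

features = {}
for level, details in zip(range(3, 0, -1), coeffs[1:]):
    for name, band in zip(("horizontal", "vertical", "diagonal"), details):
        c = band.ravel()
        p = np.abs(c) / (np.abs(c).sum() + 1e-12)          # normalized magnitudes
        features[f"L{level}_{name}_energy"] = float((c ** 2).sum())
        features[f"L{level}_{name}_entropy"] = float(-(p * np.log2(p + 1e-12)).sum())
        features[f"L{level}_{name}_sd"] = float(c.std())
        features[f"L{level}_{name}_variance"] = float(c.var())

for key in list(features)[:6]:
    print(key, round(features[key], 2))
```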

  19. An accuracy improvement method for the topology measurement of an atomic force microscope using a 2D wavelet transform.

    PubMed

    Yoon, Yeomin; Noh, Suwoo; Jeong, Jiseong; Park, Kyihwan

    2018-05-01

    The topology image is constructed from the 2D matrix (XY directions) of heights Z captured from the force-feedback loop controller. For small height variations, nonlinear effects such as hysteresis or creep of the PZT-driven Z nano scanner can be neglected and its calibration is quite straightforward. For large height variations, the linear approximation of the PZT-driven Z nano scanner fails and nonlinear behaviors must be considered, because these would cause inaccuracies in the measured image. In order to avoid such inaccuracies, an additional strain gauge sensor is used to directly measure the displacement of the PZT-driven Z nano scanner. However, this approach also has a disadvantage in its relatively low precision. In order to obtain high-precision data with good linearity, we propose a method of overcoming the low precision of the strain gauge while its good linearity is maintained. We expect the topology image obtained from the strain gauge sensor to show significant noise at high frequencies. On the other hand, the topology image obtained from the controller output shows low noise at high frequencies. If the low- and high-frequency signals can be separated from both topology images, an image can be constructed that combines high accuracy and low noise. In order to separate the low frequencies from the high frequencies, a 2D Haar wavelet transform is used. Our proposed method uses the 2D wavelet transform to obtain good linearity from the strain gauge sensor and good precision from the controller output. The advantages of the proposed method are experimentally validated using topology images. Copyright © 2018 Elsevier B.V. All rights reserved.
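
    A small sketch of the fusion idea on synthetic data: keep the approximation (low-frequency) band of the strain-gauge image, which is linear but noisy, and the detail (high-frequency) bands of the controller-output image, which is precise at fine scales but distorted at large ones.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 128)
true_topo = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)) * 50   # synthetic surface (nm)

strain_gauge = true_topo + rng.normal(0, 5, true_topo.shape)                     # linear but noisy
controller = 0.8 * true_topo + 10 + rng.normal(0, 0.5, true_topo.shape)          # precise, but distorted

# Single-level 2-D Haar DWT of both images.
cA_sg, details_sg = pywt.dwt2(strain_gauge, "haar")
cA_ct, details_ct = pywt.dwt2(controller, "haar")

# Fuse: approximation from the strain gauge, details from the controller output.
fused = pywt.idwt2((cA_sg, details_ct), "haar")

for name, img in [("strain gauge", strain_gauge), ("controller", controller), ("fused", fused)]:
    print(f"{name:13s} RMS error vs truth: {np.sqrt(np.mean((img - true_topo) ** 2)):.2f} nm")
```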

  20. Detecting trace components in liquid chromatography/mass spectrometry data sets with two-dimensional wavelets

    NASA Astrophysics Data System (ADS)

    Compton, Duane C.; Snapp, Robert R.

    2007-09-01

    TWiGS (two-dimensional wavelet transform with generalized cross validation and soft thresholding) is a novel algorithm for denoising liquid chromatography-mass spectrometry (LC-MS) data for use in "shot-gun" proteomics. Proteomics, the study of all proteins in an organism, is an emerging field that has already proven successful for drug and disease discovery in humans. There are a number of constraints that limit the effectiveness of LC-MS for shot-gun proteomics, where the chemical signals are typically weak and data sets are computationally large. Most algorithms suffer greatly from researcher-driven bias, making the results irreproducible and unusable by other laboratories. We thus introduce a new algorithm, TWiGS, that removes electrical (additive white) and chemical noise from LC-MS data sets. TWiGS is developed to be a true two-dimensional algorithm, which operates in the time-frequency domain and minimizes the amount of researcher bias. It is based on the traditional discrete wavelet transform (DWT), which allows for fast and reproducible analysis. The separable two-dimensional DWT decomposition is paired with generalized cross validation and soft thresholding. The choice of wavelet (Haar, Coiflet-6, Daubechies-4) and the number of decomposition levels are determined based on observed experimental results. Using a synthetic LC-MS data model, TWiGS accurately retains key characteristics of the peaks in both the time and m/z domains, and can detect peaks from noise of the same intensity. TWiGS is applied to angiotensin I and II samples run on an LC-ESI-TOF-MS (liquid-chromatography-electrospray-ionization) to demonstrate its utility for the detection of low-lying peaks obscured by noise.
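
    A simplified sketch of 2-D wavelet denoising with soft thresholding on a synthetic peak map; a universal threshold is used here for brevity, whereas TWiGS selects its threshold by generalized cross validation, which is not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
# Synthetic "LC-MS map": a few Gaussian peaks on a flat background plus white noise.
t = np.linspace(-1, 1, 128)
clean = sum(np.exp(-((t[:, None] - a) ** 2 + (t[None, :] - b) ** 2) / 0.005)
            for a, b in [(-0.4, 0.2), (0.3, -0.5), (0.1, 0.6)])
noisy = clean + rng.normal(0, 0.1, clean.shape)

coeffs = pywt.wavedec2(noisy, "haar", level=4)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise estimate from finest diagonal band
thresh = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold (simplification)

denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(band, thresh, mode="soft") for band in details)
    for details in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "haar")

print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```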

  1. A space-time multiscale modelling of Earth's gravity field variations

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Panet, Isabelle; Ramillien, Guillaume; Guilloux, Frédéric

    2017-04-01

    The mass distribution within the Earth varies over a wide range of spatial and temporal scales, generating variations in the Earth's gravity field in space and time. These variations are monitored by satellites such as the GRACE mission, with a 400 km spatial resolution and a 10-day to 1-month temporal resolution. They are expressed in the form of gravity field models, often with a fixed spatial or temporal resolution. The analysis of these models allows us to study the mass transfers within the Earth system. Here, we have developed space-time multi-scale models of the gravity field, in order to optimize the estimation of gravity signals resulting from local processes at different spatial and temporal scales, and to adapt the time resolution of the model to its spatial resolution according to the satellite sampling. For that, we first build a 4D wavelet family combining spatial Poisson wavelets with temporal Haar wavelets. Then, we set up a regularized inversion of inter-satellite gravity potential differences in a Bayesian framework to estimate the model parameters. To build the prior, we develop a spectral analysis, localized in time and space, of geophysical models of mass transport and associated gravity variations. Finally, we test our approach on the reconstruction of space-time variations of the gravity field due to hydrology. We first consider a global distribution of observations along the orbit, from a simplified synthetic hydrology signal comprising only annual variations at large spatial scales. Then, we consider a regional distribution of observations in Africa, and a larger number of spatial and temporal scales. We test the influence of an imperfect prior and discuss our results.
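
    An illustrative 1-D sketch of the temporal part of this approach, assuming a dyadic number of epochs and ridge regularization as a crude stand-in for the full Bayesian prior: a time-varying signal is represented on a temporal Haar basis and its coefficients are estimated from noisy, irregularly sampled observations.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n_t = 64                                    # dyadic number of time samples
time = np.arange(n_t)

# Temporal Haar synthesis matrix: column j is the time series reconstructed from a
# single unit wavelet coefficient, so that model(t) = H @ coefficients.
arr, slices = pywt.coeffs_to_array(pywt.wavedec(np.zeros(n_t), "haar"))
n_c = arr.size
H = np.zeros((n_t, n_c))
for j in range(n_c):
    unit = np.zeros(n_c)
    unit[j] = 1.0
    H[:, j] = pywt.waverec(pywt.array_to_coeffs(unit, slices, output_format="wavedec"), "haar")

# Synthetic "gravity variation": a smooth cycle plus a step, observed with noise at
# only a subset of epochs (mimicking uneven satellite sampling).
signal = np.sin(2 * np.pi * time / 36.5) + 0.5 * (time > 40)
obs_idx = np.sort(rng.choice(n_t, size=40, replace=False))
data = signal[obs_idx] + rng.normal(0, 0.2, obs_idx.size)

# Ridge-regularized least squares for the Haar coefficients.
A = H[obs_idx, :]
lam = 0.05
coef = np.linalg.solve(A.T @ A + lam * np.eye(n_c), A.T @ data)
recon = H @ coef
print("RMS misfit at observed epochs  :", np.sqrt(np.mean((A @ coef - data) ** 2)))
print("RMS error of the reconstruction:", np.sqrt(np.mean((recon - signal) ** 2)))
```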

  2. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane where the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). In an effort to validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an application for elderly fall detection. The first reference image is a face detected by means of Haar descriptors, and then localized in the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm. This step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face-tracking step. A supplementary fall-detection step, based on vertical acceleration and position, will be added and studied in further work.
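
    A small sketch, assuming OpenCV and synthetic patches, of the kind of histogram-based check that can gate the reference update: the reference is refreshed only when the newly localized patch still resembles the previous one.

```python
import numpy as np
import cv2

def safe_to_update(reference, candidate, threshold=0.8):
    # Compare normalized grayscale intensity histograms; accept the update only
    # when the correlation between the two histograms is high enough.
    h_ref = cv2.calcHist([reference], [0], None, [64], [0, 256])
    h_new = cv2.calcHist([candidate], [0], None, [64], [0, 256])
    cv2.normalize(h_ref, h_ref)
    cv2.normalize(h_new, h_new)
    score = cv2.compareHist(h_ref, h_new, cv2.HISTCMP_CORREL)
    return score >= threshold, score

rng = np.random.default_rng(0)
face_patch = np.clip(rng.normal(120, 30, (64, 64)), 0, 255).astype(np.uint8)
similar = np.clip(face_patch.astype(int) + rng.integers(-5, 6, face_patch.shape), 0, 255).astype(np.uint8)
unrelated = rng.integers(0, 255, size=(64, 64), dtype=np.uint8)   # e.g. background grabbed by mistake

for name, cand in [("similar patch", similar), ("unrelated patch", unrelated)]:
    ok, score = safe_to_update(face_patch, cand)
    print(f"{name}: correlation = {score:.2f}, update reference = {ok}")
```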

  3. Application of RNAMlet to surface defect identification of steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Xu, Yang; Zhou, Peng; Wang, Lei

    2018-06-01

    As the three main steel production lines, continuous casting slabs, hot-rolled steel plates and cold-rolled steel strips have different surface appearances and are produced at different line speeds. Therefore, the algorithms for surface defect identification of the three steel products have different real-time and anti-interference requirements. Existing algorithms cannot be adaptively applied to surface defect identification for all three steel products. A new adaptive multi-scale geometric analysis method named RNAMlet was proposed. The idea of RNAMlet comes from the non-symmetry anti-packing pattern representation model (NAM). The image is decomposed asymmetrically into a set of rectangular blocks according to gray-value changes of the image pixels, and a two-dimensional Haar wavelet transform is then applied to all blocks. If the image background is complex, the number of blocks is large and more details of the image are used. If the image background is simple, the number of blocks is small and less computation time is needed. RNAMlet was tested with image samples of the three steel products and compared with three classical multi-scale geometric analysis methods: Contourlet, Shearlet and Tetrolet. For image samples with complicated backgrounds, such as continuous casting slabs and hot-rolled steel plates, the defect identification rate obtained by RNAMlet was 1% higher than those of the other three methods. For image samples with simple backgrounds, such as cold-rolled steel strips, the computation time of RNAMlet was one-tenth that of the other three MGA methods, while the defect identification rates obtained by RNAMlet were higher than those of the other three methods.

  4. Computer-aided diagnosis of breast microcalcifications based on dual-tree complex wavelet transform.

    PubMed

    Jian, Wushuai; Sun, Xueyan; Luo, Shuqian

    2012-12-19

    Digital mammography is the most reliable imaging modality for breast carcinoma diagnosis, and breast micro-calcifications are regarded as one of the most important signs in imaging diagnosis. In this paper, a computer-aided diagnosis (CAD) system is presented for breast micro-calcifications based on the dual-tree complex wavelet transform (DT-CWT), to assist radiologists in a manner similar to double reading. Firstly, 25 abnormal ROIs were extracted manually according to the center and diameter of the lesions, and 25 normal ROIs were selected randomly. Then micro-calcifications were segmented by combining space- and frequency-domain techniques. We extracted three texture features based on wavelet (Haar, DB4, DT-CWT) transforms. In total, 14 descriptors were introduced to define the characteristics of the suspicious micro-calcifications. Principal Component Analysis (PCA) was used to transform these descriptors into a compact and efficient vector expression. A Support Vector Machine (SVM) classifier was used to classify potential micro-calcifications. Finally, we used the receiver operating characteristic (ROC) curve and the free-response operating characteristic (FROC) curve to evaluate the performance of the CAD system. The results of the SVM classifications based on different wavelets show that DT-CWT performs better. Compared with other results, the DT-CWT method achieved accuracies of 96% and 100% for the classification of normal and abnormal ROIs and for the classification of benign and malignant micro-calcifications, respectively. In FROC analysis, our CAD system achieved a sensitivity of 83.5% at 1.85 false positives per image on the clinical detection dataset. Compared with general wavelets, DT-CWT describes the features more effectively, and our CAD system has a competitive performance.
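
    A schematic sketch of the classification stage only; synthetic feature vectors stand in for the 14 wavelet-based descriptors, and the PCA dimension and SVM parameters are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_descriptors = 25, 14
normal = rng.normal(0.0, 1.0, size=(n_per_class, n_descriptors))
abnormal = rng.normal(0.8, 1.0, size=(n_per_class, n_descriptors))   # shifted class
X = np.vstack([normal, abnormal])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Standardize, compress the descriptors with PCA, then classify with an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```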

  5. Computer-aided diagnosis of breast microcalcifications based on dual-tree complex wavelet transform

    PubMed Central

    2012-01-01

    Background Digital mammography is the most reliable imaging modality for breast carcinoma diagnosis, and breast micro-calcifications are regarded as one of the most important signs in imaging diagnosis. In this paper, a computer-aided diagnosis (CAD) system is presented for breast micro-calcifications based on the dual-tree complex wavelet transform (DT-CWT), to assist radiologists in a manner similar to double reading. Methods Firstly, 25 abnormal ROIs were extracted manually according to the center and diameter of the lesions, and 25 normal ROIs were selected randomly. Then micro-calcifications were segmented by combining space- and frequency-domain techniques. We extracted three texture features based on wavelet (Haar, DB4, DT-CWT) transforms. In total, 14 descriptors were introduced to define the characteristics of the suspicious micro-calcifications. Principal Component Analysis (PCA) was used to transform these descriptors into a compact and efficient vector expression. A Support Vector Machine (SVM) classifier was used to classify potential micro-calcifications. Finally, we used the receiver operating characteristic (ROC) curve and the free-response operating characteristic (FROC) curve to evaluate the performance of the CAD system. Results The results of the SVM classifications based on different wavelets show that DT-CWT performs better. Compared with other results, the DT-CWT method achieved accuracies of 96% and 100% for the classification of normal and abnormal ROIs and for the classification of benign and malignant micro-calcifications, respectively. In FROC analysis, our CAD system achieved a sensitivity of 83.5% at 1.85 false positives per image on the clinical detection dataset. Conclusions Compared with general wavelets, DT-CWT describes the features more effectively, and our CAD system has a competitive performance. PMID:23253202

  6. ‘Electroshock Therapy’ in the Third Reich

    PubMed Central

    Rzesnitzek, Lara; Lang, Sascha

    2017-01-01

    The history of ‘electroshock therapy’ (now known as electroconvulsive therapy (ECT)) in Europe in the Third Reich is still a neglected chapter in medical history. Since Thomas Szasz’s ‘From the Slaughterhouse to the Madhouse’, prejudices have hindered a thorough historical analysis of the introduction and early application of electroshock therapy during the period of National Socialism and the Second World War. Contrary to the assumption of a ‘dialectics of healing and killing’, the introduction of electroshock therapy in the German Reich and occupied territories was neither especially swift nor radical. Electroshock therapy, much like the preceding ‘shock therapies’, insulin coma therapy and cardiazol convulsive therapy, contradicted the genetic dogma of schizophrenia, in which only one ‘treatment’ was permissible: primary prevention by sterilisation. However, industrial companies such as Siemens–Reiniger–Werke AG (SRW) embraced the new development in medical technology. Moreover, they knew how to use existing patents on the electrical anaesthesia used for slaughtering to maintain a leading position in the new electroshock therapy market. Only after the end of the official ‘euthanasia’ murder operation in August 1941, entitled T4, did the psychiatric elite begin to promote electroshock therapy as a modern ‘unspecific’ treatment in order to reframe psychiatry as an ‘honorable’ medical discipline. War-related shortages hindered even the then politically supported production of electroshock devices. Research into electroshock therapy remained minimal and was mainly concerned with internationally shared safety concerns regarding its clinical application. However, within the Third Reich, electroshock therapy was not only introduced in psychiatric hospitals, asylums, and in the Auschwitz concentration camp in order to get patients back to work, it was also modified for ‘euthanasia’ murder. PMID:27998332

  7. Evaluating Mass Analyzers as Candidates for Small, Portable, Rugged Single Point Mass Spectrometers for Analysis of Permanent Gases

    NASA Technical Reports Server (NTRS)

    Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)

    2001-01-01

    For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2 and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers which act as point sensors that can be placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems include the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks Time-of-Flight (TOF). The non-commercial systems include a compact double focusing sector (CDFMS) developed at the University of Minnesota, and a quadrupole ion trap (UF-IT) developed at the University of Florida. The System Volume is determined by measuring the entire system volume, including the mass analyzer, its associated electronics, the associated vacuum system, the high-vacuum pump and the roughing pump. Any ion gauge controllers or other required equipment are also measured; computers are not included. Scan Time is the time required for one scan to be acquired and the data to be transferred. It is determined by measuring the time required to acquire a known number of scans and dividing by that number of scans. The Limit of Detection is determined by first performing a zero-span calibration (using a 10-point data set). The limit of detection (LOD) is then defined as 3 times the standard deviation of the zero data set. (An LOD of 10 ppm or less is considered acceptable.)
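
    A tiny illustration of the limit-of-detection rule described above, using made-up zero-span readings (the values below are purely illustrative).

```python
import numpy as np

# Hypothetical 10-point zero-span data set, in ppm.
zero_span = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0, 1.4, 1.1])

# LOD defined as three times the standard deviation of the zero data set.
lod = 3 * zero_span.std(ddof=1)
print(f"LOD = {lod:.2f} ppm ({'acceptable' if lod <= 10 else 'too high'})")
```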

  8. Wavelet Algorithms for Illumination Computations

    NASA Astrophysics Data System (ADS)

    Schroder, Peter

    One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
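
    A compact numerical illustration (a smooth 1-D surrogate kernel, not a radiosity form factor) of why the hierarchical/wavelet viewpoint pays off: a smooth interaction kernel becomes approximately sparse in a Haar basis, so most coefficients can be dropped at a prescribed precision.

```python
import numpy as np
import pywt

n = 256
x = (np.arange(n) + 0.5) / n
# Smooth, singularity-free surrogate for a transport kernel K(x, y).
K = 1.0 / (1.0 + 16.0 * np.abs(x[:, None] - x[None, :]))

# Apply the separable 2-D Haar transform to the kernel matrix ("standard form").
arr, _ = pywt.coeffs_to_array(pywt.wavedec2(K, "haar"))

tol = 1e-3 * np.abs(arr).max()
kept = np.count_nonzero(np.abs(arr) > tol)
print(f"entries kept above tolerance: {kept} of {arr.size} "
      f"({100.0 * kept / arr.size:.1f}%)")
```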

  9. Case study of small scale polytropic index in the central plasma sheet

    NASA Astrophysics Data System (ADS)

    Peng, XueXia; Cao, JinBin; Liu, WenLen; Ma, YuDuan; Lu, HaiYu; Yang, JunYing; Liu, LiuYuan; Liu, Xu; Wang, Jing; Wang, TieYan; Yu, Jiang

    2015-11-01

    This paper studies the effective polytropic index in the central plasma sheet (CPS) using the method of Kartalev et al. (2006), which adopts a Haar wavelet denoising technique to identify regions of homogeneous MHD Bernoulli integral (MBI) and has been frequently used to study the polytropic relation in the solar wind. We chose the quiet CPS crossing by Cluster C1 during the interval 08:51:00-09:19:00 UT on 03 August 2001. In the central plasma sheet, the thermal pressure energy per unit mass is the most important term in the MBI, while the kinetic energy of fluid motion and the electromagnetic energy per unit mass are less important. The MBI contains many peaks, which correspond to isothermal or near-isothermal processes. The interval lengths of homogeneous MBI regions are generally less than 1 min. The polytropic indexes are calculated by linearly fitting ln p against ln n within a 16 s window, which is shifted forward in 8 s steps. The polytropic indexes with |R| ≥ 0.8 (R is the correlation coefficient between ln p and ln n) and p-value ≤ 0.1 in the homogeneous regions are almost all in the range [0, 1]. The mean and median effective polytropic indexes with high R and low p-value in homogeneous regions are 0.34 and 0.32, respectively, which differ considerably from the polytropic index obtained by the traditional method (α_trad = -0.15). This result indicates that the CPS is not uniform even during quiet times and that blanket application of the polytropic law to the plasma sheet may return misleading values of the polytropic index. The polytropic indexes in homogeneous regions with a high correlation coefficient generally have good regression significance and are thus credible. These results are important for understanding energy transport in the magnetotail within the MHD framework.
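
    A schematic sketch of the sliding-window fit described above, on synthetic data; a 4 s cadence is assumed so that the 16 s window holds 4 samples and the 8 s step is 2 samples (these numbers are illustrative, not the Cluster data themselves).

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
win, step = 4, 2       # 16 s window and 8 s step at an assumed 4 s cadence
n = 420

# Synthetic density and pressure obeying p ~ n^0.33, with small measurement noise.
density = 0.3 * (1 + 0.2 * np.sin(np.linspace(0, 60, n))) * np.exp(rng.normal(0, 0.01, n))
pressure = 0.05 * density ** 0.33 * np.exp(rng.normal(0, 0.01, n))

indexes = []
for start in range(0, n - win + 1, step):
    sl = slice(start, start + win)
    fit = linregress(np.log(density[sl]), np.log(pressure[sl]))
    # Keep only well-constrained fits, mirroring the |R| >= 0.8, p-value <= 0.1 criteria.
    if abs(fit.rvalue) >= 0.8 and fit.pvalue <= 0.1:
        indexes.append(fit.slope)

print("windows retained:", len(indexes))
print("mean / median polytropic index: %.2f / %.2f" % (np.mean(indexes), np.median(indexes)))
```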

  10. The earth radiation budget experiment: Early validation results

    NASA Astrophysics Data System (ADS)

    Smith, G. Louis; Barkstrom, Bruce R.; Harrison, Edwin F.

    The Earth Radiation Budget Experiment (ERBE) consists of radiometers on a dedicated spacecraft in a 57° inclination orbit, which has a precessional period of 2 months, and on two NOAA operational meteorological spacecraft in near polar orbits. The radiometers include scanning narrow field-of-view (FOV) and nadir-looking wide and medium FOV radiometers covering the ranges 0.2 to 5 μm and 5 to 50 μm and a solar monitoring channel. This paper describes the validation procedures and preliminary results. Each of the radiometer channels underwent extensive ground calibration, and the instrument packages include in-flight calibration facilities which, to date, show negligible changes of the instruments in orbit, except for gradual degradation of the suprasil dome of the shortwave wide FOV (about 4% per year). Measurements of the solar constant by the solar monitors, wide FOV, and medium FOV radiometers of two spacecraft agree to a fraction of a percent. Intercomparisons of the wide and medium FOV radiometers with the scanning radiometers show agreement of 1 to 4%. The multiple ERBE satellites are acquiring the first global measurements of regional scale diurnal variations in the Earth's radiation budget. These diurnal variations are verified by comparison with high temporal resolution geostationary satellite data. Other principal investigators of the ERBE Science Team are: R. Cess, SUNY, Stoneybrook; J. Coakley, NCAR; C. Duncan, M. King and A Mecherikunnel, Goddard Space Flight Center, NASA; A. Gruber and A.J. Miller, NOAA; D. Hartmann, U. Washington; F.B. House, Drexel U.; F.O. Huck, Langley Research Center, NASA; G. Hunt, Imperial College, London U.; R. Kandel and A. Berroir, Laboratory of Dynamic Meteorology, Ecole Polytechique; V. Ramanathan, U. Chicago; E. Raschke, U. of Cologne; W.L. Smith, U. of Wisconsin and T.H. Vonder Haar, Colorado State U.

  11. The Malaria System MicroApp: A New, Mobile Device-Based Tool for Malaria Diagnosis.

    PubMed

    Oliveira, Allisson Dantas; Prats, Clara; Espasa, Mateu; Zarzuela Serrat, Francesc; Montañola Sales, Cristina; Silgado, Aroa; Codina, Daniel Lopez; Arruda, Mercia Eliane; I Prat, Jordi Gomez; Albuquerque, Jones

    2017-04-25

    Malaria is a public health problem that affects remote areas worldwide. Climate change has contributed to the problem by allowing for the survival of Anopheles in previously uninhabited areas. As such, several groups have made developing new systems for the automated diagnosis of malaria a priority. The objective of this study was to develop a new, automated, mobile device-based diagnostic system for malaria. The system uses Giemsa-stained peripheral blood samples combined with light microscopy to identify the Plasmodium falciparum species in the ring stage of development. The system uses image processing and artificial intelligence techniques as well as a known face detection algorithm to identify Plasmodium parasites. The algorithm is based on the integral image and Haar-like feature concepts, and makes use of weak classifiers with adaptive boosting learning. The search scope of the learning algorithm is reduced in the preprocessing step by removing the background around blood cells. As a proof-of-concept experiment, the tool was used on 555 malaria-positive and 777 malaria-negative previously made slides. The accuracy of the system was, on average, 91%, meaning that for every 100 parasite-infected samples, 91 were identified correctly. Accessibility barriers in low-resource countries can be addressed with low-cost diagnostic tools. Our system, developed for mobile devices (mobile phones and tablets), addresses this by enabling access for health centers in remote communities and, importantly, by not depending on extensive malaria expertise or expensive diagnostic detection equipment. ©Allisson Dantas Oliveira, Clara Prats, Mateu Espasa, Francesc Zarzuela Serrat, Cristina Montañola Sales, Aroa Silgado, Daniel Lopez Codina, Mercia Eliane Arruda, Jordi Gomez i Prat, Jones Albuquerque. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 25.04.2017.
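
    A small sketch, on a synthetic image and with Otsu thresholding as an assumed choice, of the kind of preprocessing described above: masking out the bright background so that the Haar-feature detector only searches regions containing blood cells.

```python
import numpy as np
import cv2

# Synthetic stand-in for a Giemsa-stained smear: bright background, darker cells.
rng = np.random.default_rng(1)
img = np.full((200, 200), 220, dtype=np.uint8)
for _ in range(15):
    center = tuple(rng.integers(20, 180, size=2).tolist())
    cv2.circle(img, center, 12, int(rng.integers(90, 140)), -1)

# Otsu's threshold separates the dark cells from the bright background.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# Only pixels inside the mask would be scanned by the Haar-feature detector.
foreground = cv2.bitwise_and(img, img, mask=mask)
print("fraction of image kept for detection:", mask.mean() / 255)
```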

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, Paul B.

    Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N=4 (e-print quant-ph/0308037), we undertake the task for N=6 of computing to high numerical accuracy, the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) NxN density matrices, based on the Bures (minimal monotone) metric--and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same seven 10^9 well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained based on the 35-dimensional volumes appear to be--independently of the metric (each of the seven inducing Haar measure) employed--twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate--33.9982--of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is quite clearly close to integral too.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N=4 and N=6 cases.

  13. On the wrong inference of long-range correlations in climate data; the case of the solar and volcanic forcing over the Tropical Pacific

    NASA Astrophysics Data System (ADS)

    Varotsos, Costas A.; Efstathiou, Maria N.

    2017-05-01

    A substantial weakness of several climate studies on long-range dependence is that they conclude long-term memory of the climate conditions without considering it necessary to establish the power-law scaling and to reject a simple exponential decay of the autocorrelation function. We here show one paradigmatic case where a strong long-range dependence could be wrongly inferred from incomplete data analysis. We first apply the DFA method to the solar and volcanic forcing time series over the tropical Pacific during the past 1000 years; the results show a statistically significant straight-line fit to the fluctuation function in a log-log representation, with a slope higher than 0.5, which may wrongly be taken as an indication of persistent long-range correlations in the time series. We argue that long-range dependence cannot be concluded from this straight-line fit alone, since two additional prerequisites must be fulfilled, i.e., rejecting an exponential decay of the autocorrelation function and establishing the power-law scaling. In fact, the investigation of the validity of these prerequisites showed that a DFA exponent higher than 0.5 does not justify the existence of persistent long-range correlations in the temporal evolution of the solar and volcanic forcing during the last millennium. In other words, we show that empirical analyses based on these two prerequisites must not be considered a panacea for a direct proof of scaling, but only as evidence that the scaling hypothesis is plausible. We also discuss the scaling behaviour of the solar and volcanic forcing data based on the Haar tool, which has recently proved its ability to reliably detect the existence of scaling in climate series.
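
    For reference, a compact first-order DFA implementation on a synthetic white-noise series (where the expected exponent is about 0.5); as the abstract stresses, a fitted slope above 0.5 alone does not establish long-range dependence.

```python
import numpy as np

def dfa(series, scales):
    profile = np.cumsum(series - series.mean())          # integrated series
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        segments = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # detrend each window linearly
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    # Slope of log F(s) versus log s is the DFA exponent.
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(0)
white_noise = rng.normal(size=1000)
scales = np.array([8, 16, 32, 64, 128])
print("DFA exponent (white noise, expected ~0.5):", round(dfa(white_noise, scales), 2))
```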

  14. Practical automatic Arabic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for automatic license plate recognition (ALPR) systems has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. In particular, ALPR systems are used in conjunction with various transportation systems in application areas such as law enforcement (e.g., speed limit enforcement) and commercial uses such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates in use, the requirements for an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape and orientation and has an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast because alphabetic and numeric characters are classified according to the license plate layout. Experimental results for license plates from two different Arab countries show an average of 99% successful license plate localization and recognition over a total of more than 20 different images captured in a complex outdoor environment. The run time is shorter than that of conventional and many state-of-the-art methods.

  15. Single Trial EEG Patterns for the Prediction of Individual Differences in Fluid Intelligence.

    PubMed

    Qazi, Emad-Ul-Haq; Hussain, Muhammad; Aboalsamh, Hatim; Malik, Aamir Saeed; Amin, Hafeez Ullah; Bamatraf, Saeed

    2016-01-01

    Assessing a person's intelligence level is required in many situations, such as career counseling and clinical applications. EEG evoked potentials in an oddball task and fluid intelligence scores are correlated because both reflect cognitive processing and attention. A system for predicting an individual's fluid intelligence level from single-trial electroencephalography (EEG) signals is proposed. For this purpose, we employed 2D and 3D contents, with 34 subjects each for 2D and 3D, who were divided into low-ability (LA) and high-ability (HA) groups using Raven's Advanced Progressive Matrices (RAPM) test. Using a visual oddball cognitive task, the neural activity of each group was measured and analyzed over three midline electrodes (Fz, Cz, and Pz). To predict whether an individual belongs to the LA or HA group, features were extracted using wavelet decomposition of the EEG signals recorded in the visual oddball task, and a support vector machine (SVM) was used as the classifier. Two different types of Haar-wavelet-transform-based features were extracted from the 0.3-30 Hz band of the EEG signals. Statistical wavelet features and wavelet coefficient features from the frequency bands 0.0-1.875 Hz (delta low) and 1.875-3.75 Hz (delta high) resulted in 100% and 98% prediction accuracies, respectively, both for 2D and 3D contents. The analysis of these frequency bands showed a clear difference between the LA and HA groups. Further, the discriminative value of the features was validated using statistical significance tests and inter-class and intra-class variation analysis. A statistical test also showed that there was no effect of 2D and 3D content on the assessment of fluid intelligence level. Comparisons with state-of-the-art techniques showed the superiority of the proposed system.
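
    A hedged sketch of mapping Haar decomposition levels to the low-delta bands mentioned above. A sampling rate of 240 Hz is assumed so that the level-6 detail covers roughly 1.875-3.75 Hz and the level-6 approximation roughly 0-1.875 Hz; the EEG trace is synthetic and the feature set is a simple illustrative choice.

```python
import numpy as np
import pywt

fs = 240.0                                         # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / fs)                    # a 2 s single-trial epoch
eeg = 10 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 2, t.size)   # delta-dominated trace

coeffs = pywt.wavedec(eeg, "haar", level=6)        # [cA6, cD6, cD5, ..., cD1]
delta_low, delta_high = coeffs[0], coeffs[1]       # ~0-1.875 Hz and ~1.875-3.75 Hz

def stats(c):
    # Simple statistical wavelet features; the study's exact feature set may differ.
    return {"mean": c.mean(), "std": c.std(), "energy": (c ** 2).sum()}

print("delta low :", {k: round(v, 2) for k, v in stats(delta_low).items()})
print("delta high:", {k: round(v, 2) for k, v in stats(delta_high).items()})
```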

  16. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, and combining them enhances the descriptive power of the local features. We performed a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discuss the results.
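
    A small sketch of the Multi-Block LBP idea with a margin around the reference block, as described above; the block size, the margin value and the synthetic "thermal" patch are illustrative assumptions.

```python
import numpy as np

def mb_lbp_with_margin(patch, block=3, margin=2.0):
    # Average intensity of each of the 3x3 blocks covering the patch.
    means = patch.reshape(3, block, 3, block).mean(axis=(1, 3))
    center = means[1, 1]
    # Neighbours in a fixed order; a neighbour counts as "warmer" only if it
    # exceeds the centre by more than the margin, making the code noise-robust.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [(1 if means[r, c] > center + margin else 0) for r, c in order]
    return sum(b << i for i, b in enumerate(bits))

rng = np.random.default_rng(0)
patch = rng.normal(30.0, 1.0, size=(9, 9))     # 9x9 region of a thermal image (deg C)
patch[:3, :] += 5.0                            # warmer upper rows (e.g., forehead)
print("MB-LBP code:", mb_lbp_with_margin(patch))
```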

  17. Vehicle tracking in wide area motion imagery from an airborne platform

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; van Huis, Jasper R.; Eendebak, Pieter T.; Baan, Jan

    2015-10-01

    Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all of these data to answer information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists the image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For vehicle detection we use a cascaded set of classifiers trained with an AdaBoost algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For vehicle tracking we use a local template matching algorithm. This approach has two advantages. First, it does not depend on image motion stabilization and it counters the inaccuracy of the GPS data embedded in the video data. Second, it can find matches when the vehicle detector misses a detection. This results in long tracks even when the imagery has a low frame rate. In order to minimize false detections, we also integrate height information from a 3D reconstruction that is created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast-moving vehicles. This enables the image analyst to perform a faster and more effective search of the data.
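
    A minimal sketch of local template matching with OpenCV on synthetic frames; in a real system the search window would be placed around the last known vehicle position and combined with the Haar-cascade detections.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
frame_prev = rng.integers(0, 255, size=(200, 200), dtype=np.uint8)
template = frame_prev[80:100, 120:150].copy()            # "vehicle" chip from frame t

# Frame t+1: the same content shifted a few pixels (a crude stand-in for motion).
frame_next = np.roll(np.roll(frame_prev, 4, axis=0), -3, axis=1)

# Restrict the search to a local window around the previous location.
r0, c0 = 60, 100
search = frame_next[r0:r0 + 80, c0:c0 + 90]
res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)

new_top_left = (r0 + max_loc[1], c0 + max_loc[0])        # minMaxLoc returns (x, y)
print("match score:", round(max_val, 3), "new top-left (row, col):", new_top_left)
```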

  18. Direct push driven in situ color logging tool (CLT): technique, analysis routines, and application

    NASA Astrophysics Data System (ADS)

    Werban, U.; Hausmann, J.; Dietrich, P.; Vienken, T.

    2014-12-01

    Direct push technologies have recently seen broad development, providing several tools for in situ parameterization of unconsolidated sediments. One of these techniques is the measurement of soil colors - a proxy that relates to soil/sediment properties. We introduce the direct push driven color logging tool (CLT) for real-time, depth-resolved investigation of soil colors within the visible spectrum. Until now, no routines have existed for handling highly resolved (mm-scale) soil color data. To develop such a routine, we transform raw data (CIEXYZ) into soil color surrogates of selected color spaces (CIExyY, CIEL*a*b*, CIEL*c*h*, sRGB) and denoise small-scale natural variability by Haar and Daubechies-4 wavelet transformation, gathering interpretable color logs over depth. However, interpreting color log data on its own remains challenging. Additional information, such as site-specific knowledge of the geological setting, is required to correlate soil color data to specific layer properties. Hence, we provide exemplary results from a joint interpretation of in situ-obtained soil color data and 'state-of-the-art' direct push based profiling tool data and discuss the benefit of the additional data. The developed routine is capable of transferring the colorimetric data into interpretable color surrogates. Soil color data proved to correlate with small-scale lithological/chemical changes (e.g., grain size, oxidative and reductive conditions), especially when combined with additional direct push vertical high-resolution data (e.g., cone penetration testing and soil sampling). Thus, the technique allows enhanced profiling by providing another reproducible high-resolution parameter for analyzing subsurface conditions. This opens potential new areas of application and new outputs for such data in site investigation. It is our intention to improve color measurements by means method of application and data

  19. The astronomer of the duchess -- Life and work of Franz Xaver von Zach 1754-1832. (German Title: Der Astronom der Herzogin -- Leben und Werk von Franz Xaver von Zach 1754-1832)

    NASA Astrophysics Data System (ADS)

    Brosche, Peter

    The astronomer, geodesist, geographer and historian of science Franz Xaver von Zach (1754-1832) lived and worked in several European countries. Duke Ernst II of Saxe-Gotha-Altenburg appointed him as the founding scientist of his Seeberg Observatory. This was the place of his strongest activity. Why should we have an interest in him today? There is a rational and an emotional answer. First, he has rendered organisational services to his sciences which are equivalent to a great scientific achievement. Second, Zach was a very colourful character, travelled across many states in a time of radical changes and had connections with many colleagues and public figures. Images from his life therefore provide outlooks, insights and relations.

  20. OMEGACAM and Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Christen, Fabrice Frédéric Thiébaut

    2007-04-01

    Fabrice Christen's thesis concerns the development of new methods for correcting (digital) images of galaxies. With these methods, images of the universe can be analyzed more accurately. The first part is devoted to the work carried out at ESO on the CCDs of the OmegaCAM camera, the sole instrument of the VST. OmegaCAM is a wide-field optical camera with a field of view of one square degree, built from a mosaic of 8 by 4 CCDs. All characteristics of each component must be fully known before it can be placed in the CCD mosaic. The second part of the thesis describes the development of a new method for correcting the "point-spread function" (PSF) and estimating the ellipticity of galaxies. The new technique is tested and compared with a method widely used by astronomers in the field of gravitational lensing, the Kaiser, Squires and Broadhurst (KSB) method. The new method, based on shapelet decomposition (comparable to wavelet decomposition), goes further, and is faster and theoretically more precise than the KSB method. Using the corrected ellipticities, we can perform a statistical analysis to extract a cosmic-shear signal. The slightly distorted images of the galaxies demonstrate that the inhomogeneous mass distribution on megaparsec scales consists mainly of large amounts of dark matter. We further compare the ellipticity estimates of the shapelet and KSB methods. In addition, we carry out a galaxy-galaxy lensing analysis on the 50 VLT FORS1 images and succeed in determining the main properties of the galaxy halos, located at distances of one to two thousand megaparsecs (1 parsec = 3.26 light years = 3.085 x 10^16 meters), using two models of galaxy halos. Compared with other

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Numerous methods have been developed around the world to model the dynamic behavior and detect a faulty operating mode of a temperature sensor. In this context, we present in this study a new method based on the dependence of the fuel assembly temperature profile on the control rod positions and the coolant flow rate in a nuclear reactor. This seems to be possible since the insertion of control rods at different axial positions and variations in the reactor coolant flow rate result in different thermal power produced in the reactor, which is closely linked to the instantaneous fuel rod temperature profile. In a first step, we selected the parameters to be used and confirmed the adequate correlation between the chosen parameters and those to be estimated by the proposed monitoring system. In the next step, we acquired and de-noised the data for the corresponding parameters; the qualified data are then used to design and train the artificial neural network. The effective data denoising was done by using the wavelet transform to remove various kinds of artifacts such as inherent noise. With a suitable choice of wavelet level and smoothing method, it was possible for us to remove all unwanted artifacts so that the considered signal could be verified and analyzed. In our work, several potential mother wavelet functions (Haar, Daubechies, Bi-orthogonal, Reverse Bi-orthogonal, Discrete Meyer and Symlets) were investigated to find the function most similar to the signals being processed. To implement the proposed monitoring system for the fuel rod temperature sensor (3-wire RTD sensor), we used the Bayesian artificial neural network (BNN) technique to model the dynamic behavior of the considered sensor; the system correlates the estimated values with the measured ones. For the realization of the proposed system we propose an FPGA (field programmable gate array) implementation. The monitoring system uses this correlation. (authors)

  2. Upper Mantle Seismic Structure for NE Tibet From Multiscale Tomography Method

    NASA Astrophysics Data System (ADS)

    Guo, B.; Liu, Q.; Chen, J.

    2013-12-01

    In real seismic experiments, the spatial sampling of rays inside the studied volume is essentially nonuniform because of the uneven distribution of the seismic stations as well as of the earthquake events. Conventional seismic tomography schemes adopt a fixed cell size or grid spacing even though the actual resolution varies. As a result, phantom velocity anomalies may arise in regions that are poorly illuminated by the seismic rays, or the finest resolvable detail cannot be extracted in regions with dense ray coverage. We present an adaptive wavelet parameterization for the three-dimensional traveltime seismic tomography problem and apply it to the study of the tectonics of the northeastern Tibet region. In contrast to traditional parameterization schemes, we discretize the velocity model in terms of Haar wavelets, and the parameters are adjusted adaptively based on both the density and the azimuthal coverage of rays. Fine grids are therefore used in regions with good data coverage, whereas poorly resolved areas are represented by coarse grids. Using traveltime data recorded by a portable seismic array and the regional seismic network in the northeastern Tibet area, we investigate the P-wave velocity structure of the crust and upper mantle. Our results show that the structure of the crust and upper mantle in the northeastern Tibet region exhibits strong lateral inhomogeneity, which appears not only in the areas adjacent to the different blocks but also within each block. The velocity of the crust and upper mantle differs markedly between northeastern Tibet and the Ordos plateau: the former shows low velocities while the latter shows high velocities. Between northeastern Tibet and the Ordos plateau there is a transition zone about 200 km wide, which is associated with an extremely complex velocity structure in the crust and upper
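
    The Haar discretization underlying such a parameterization can be sketched in a few lines. The code below only illustrates the building block (one level of a 2-D Haar split into one average and three difference images); the adaptive selection driven by ray density and azimuthal coverage described above is not reproduced here, and the model array is a random placeholder.

        # One level of a 2-D Haar decomposition of a gridded model (illustrative sketch).
        import numpy as np

        def haar2d_level(v):
            tl, tr = v[0::2, 0::2], v[0::2, 1::2]
            bl, br = v[1::2, 0::2], v[1::2, 1::2]
            avg = (tl + tr + bl + br) / 4.0           # coarse (average) coefficients
            d_c = (tl + bl - tr - br) / 4.0           # column-difference details
            d_r = (tl + tr - bl - br) / 4.0           # row-difference details
            d_d = (tl - tr - bl + br) / 4.0           # diagonal-difference details
            return avg, d_c, d_r, d_d

        model = np.random.rand(64, 64)                # placeholder slowness perturbations
        avg, dc, dr, dd = haar2d_level(model)         # keep only `avg` where ray coverage is poor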

  3. [Atmospheric parameter estimation for LAMOST/GUOSHOUJING spectra].

    PubMed

    Lu, Yu; Li, Xiang-Ru; Yang, Tan

    2014-11-01

    It is a key task to estimate the atmospheric parameters from observed stellar spectra in exploring the nature of stars and the universe. With the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST), which began its formal sky survey in September 2012, a vast number of stellar spectra are being obtained at an unprecedented rate. This brings both new opportunities and challenges for galactic research. Due to the complexity of the observing system, the noise in the spectra is relatively large. At the same time, the preprocessing procedures, such as the wavelength calibration and the flux calibration, are not ideal, so the spectra show slight distortions. These effects make it difficult to estimate the atmospheric parameters for the observed stellar spectra, and doing so for the massive LAMOST data set is an important issue. The key of this study is how to suppress the noise and improve the accuracy and robustness of the atmospheric-parameter estimation. We propose a regression model, SVM(lasso), for estimating the atmospheric parameters of LAMOST stellar spectra. The basic idea of this model is as follows. First, we use the Haar wavelet to filter each spectrum, suppressing the adverse effects of spectral noise while retaining the most discriminative information. Second, we use the lasso algorithm for feature selection to extract the features most strongly correlated with the atmospheric parameters. Finally, the features are fed to a support vector regression model to estimate the parameters. Because the model tolerates the slight distortion and the noise of the spectra, the accuracy of the estimation is improved. To evaluate the feasibility of this scheme, we conduct extensive experiments on 33 963 LAMOST pilot-survey spectra. The accuracy of three atmospheric parameters is log Teff: 0.006 8 dex, log g: 0.155 1 dex
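
    A compact sketch of the pipeline described above (Haar filtering, lasso feature selection, support vector regression) is given below. It is illustrative only: PyWavelets and scikit-learn are assumed, the data are random placeholders, and the regularization strength and kernel are not the values used in the paper.

        # Illustrative Haar-filter -> lasso -> SVR pipeline (not the authors' implementation).
        import numpy as np
        import pywt
        from sklearn.linear_model import Lasso
        from sklearn.svm import SVR

        def haar_smooth(flux, level=3):
            coeffs = pywt.wavedec(flux, "haar", level=level)
            coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]   # drop fine-scale noise
            return pywt.waverec(coeffs, "haar")[: len(flux)]

        X = np.random.rand(200, 1024)                 # placeholder spectra (stars x pixels)
        y = np.random.rand(200)                       # placeholder parameter, e.g. log Teff
        Xs = np.array([haar_smooth(row) for row in X])

        selector = Lasso(alpha=1e-3).fit(Xs, y)
        keep = np.flatnonzero(selector.coef_)         # lasso-selected pixels
        if keep.size == 0:                            # fall back if lasso zeroes everything
            keep = np.arange(Xs.shape[1])
        model = SVR(kernel="rbf").fit(Xs[:, keep], y)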

  4. Performance Analysis of Integrated Wireless Sensor and Multibeam Satellite Networks Under Terrestrial Interference

    PubMed Central

    Li, Hongjun; Yin, Hao; Gong, Xiangwu; Dong, Feihong; Ren, Baoquan; He, Yuanzhi; Wang, Jingchao

    2016-01-01

    This paper investigates the performance of integrated wireless sensor and multibeam satellite networks (IWSMSNs) under terrestrial interference. An IWSMSN consists of sensor nodes (SNs), satellite sinks (SSs), a multibeam satellite and remote monitoring hosts (RMHs). The multibeam satellite provides multiple beams, with multiple SSs in each beam. The SSs can be used directly as SNs to transmit sensing data to RMHs via the satellite, and they can also collect the sensing data from other SNs and relay them to the RMHs. We propose hybrid one-dimensional (1D) and 2D beam models that include the equivalent intra-beam interference factor β from terrestrial communication networks (TCNs) and the equivalent inter-beam interference factor α from adjacent beams. The terrestrial interference may be due to signals from the TCNs or to signals from sinks transmitting to other satellite networks. Closed-form approximations of the capacity per beam are derived for the return link of IWSMSNs under terrestrial interference by using the Haar approximations, with the IWSMSNs experiencing a Rician fading channel. The optimal joint decoding capacity can be considered the upper bound, where all of the SSs' signals are jointly decoded by a super-receiver on board the multibeam satellite or by a gateway station that knows all of the code books, whereas the linear minimum mean square error (MMSE) capacity corresponds to the case where the SSs' signals are decoded individually by the multibeam satellite or the gateway station. The simulations show that the optimal capacities are clearly higher than the MMSE capacities under the same conditions, while both are lowered by Rician fading and converge as the Rician factor increases. α and β jointly affect the performance of the hybrid 1D and 2D beam models, and the number of SSs also affects the optimal capacity and the MMSE capacity of the IWSMSNs differently. PMID:27754438
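
    For orientation, the per-beam return-link capacity in such an analysis takes the generic Shannon form below; the interference terms are the quantities that the factors α and β parameterize in the paper, and the exact closed-form expressions under Rician fading are derived there and not reproduced here.

        C = B \log_2\!\left( 1 + \frac{P_r}{N_0 B + I_{\mathrm{intra}} + I_{\mathrm{inter}}} \right)

    where B is the beam bandwidth, P_r the received signal power, N_0 the noise power spectral density, and I_intra, I_inter the intra-beam (terrestrial) and inter-beam interference powers.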

  5. SU-C-304-05: Use of Local Noise Power Spectrum and Wavelets in Comprehensive EPID Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Gopal, A; Yan, G

    2015-06-15

    Purpose: As EPIDs are increasingly used for IMRT QA and real-time treatment verification, comprehensive quality assurance (QA) of EPIDs becomes critical. Current QA with phantoms such as the Las Vegas and PIPSpro™ can fail in the early detection of EPID artifacts. Beyond image quality assessment, we propose a quantitative methodology using the local noise power spectrum (NPS) to characterize image noise and the wavelet transform to identify bad pixels and inter-subpanel flat-fielding artifacts. Methods: A total of 93 image sets including bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Quantitative metrics such as the modulation transfer function (MTF), NPS and detective quantum efficiency (DQE) were computed for each image set. The local 2D NPS was calculated for each subpanel. A 1D NPS was obtained by radially averaging the 2D NPS and fitted to a power-law function. The r-square and slope of the linear regression analysis were used for panel performance assessment. The Haar wavelet transform was employed to identify pixel defects and non-uniform gain correction across subpanels. Results: Overall image quality was assessed with the DQE based on empirically derived area under curve (AUC) thresholds. Using linear regression analysis of the 1D NPS, panels with acceptable flat fielding were indicated by r-square between 0.8 and 1 and slopes of −0.4 to −0.7. For panels requiring flat-fielding recalibration, however, r-square values less than 0.8 and slopes from +0.2 to −0.4 were observed. The wavelet transform successfully identified pixel defects and inter-subpanel flat-fielding artifacts, which standard QA with the Las Vegas and PIPSpro phantoms failed to detect. Conclusion: The proposed QA methodology is promising for the early detection of imaging and dosimetric artifacts of EPIDs. The local NPS can accurately characterize the noise level within each subpanel, while the wavelet transforms can detect bad
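
    The NPS part of this analysis can be sketched as follows. The code is illustrative, not the authors' implementation: the ROI is random noise, the pixel pitch is a placeholder value, and the radial binning and power-law fit are deliberately simplified.

        # Radially averaged NPS of a flood-field ROI with a power-law fit (sketch only).
        import numpy as np
        from scipy.stats import linregress

        def radial_nps(roi, pixel_pitch=0.4):
            roi = roi - roi.mean()
            nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
            nps2d *= pixel_pitch ** 2 / roi.size
            ny, nx = roi.shape
            y, x = np.indices(roi.shape)
            r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
            radial = np.bincount(r.ravel(), nps2d.ravel()) / np.bincount(r.ravel())
            freq = np.arange(radial.size) / (nx * pixel_pitch)
            return freq[1:], radial[1:]                  # drop the DC bin

        roi = np.random.normal(0, 1, (128, 128))         # stand-in for a subpanel ROI
        f, nps = radial_nps(roi)
        fit = linregress(np.log(f), np.log(nps))
        print(fit.slope, fit.rvalue ** 2)                # slope and r-square as QA metrics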

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, D; Aryal, M; Samuels, S

    Purpose: A previous study showed that large sub-volumes of tumor with low blood volume (BV) (poorly perfused) in head-and-neck (HN) cancers are significantly associated with local-regional failure (LRF) after chemoradiation therapy and could be targeted with intensified radiation doses. This study aimed to develop an automated and scalable model to extract voxel-wise contrast-enhanced temporal features of dynamic contrast-enhanced (DCE) MRI in HN cancers for predicting LRF. Methods: Our model development consists of training and testing stages. The training stage includes preprocessing of individual-voxel DCE curves from tumors for intensity normalization and temporal alignment, temporal feature extraction from the curves, feature selection, and training classifiers. For feature extraction, a multiresolution Haar discrete wavelet transform is applied to each DCE curve to capture temporal contrast-enhanced features, and the wavelet coefficients are selected as feature vectors. Support vector machine classifiers are trained to classify tumor voxels as having either low or high BV, for which a BV threshold of 7.6%, established previously, is used as ground truth. The model is tested on a new dataset. The voxel-wise DCE curves for training and testing were from 14 and 8 patients, respectively. A posterior probability map of the low-BV class was created to examine the tumor sub-volume classification. Voxel-wise classification accuracy was computed to evaluate the performance of the model. Results: Average classification accuracies were 87.2% for training (10-fold cross-validation) and 82.5% for testing. The lowest and highest accuracies (patient-wise) were 68.7% and 96.4%, respectively. Posterior probability maps of the low-BV class showed that the sub-volumes extracted by our model were similar to those defined by the BV maps, with most misclassifications occurring near the sub-volume boundaries. Conclusion: This model could be valuable to support adaptive clinical trials with
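
    The voxel-wise feature extraction and classification can be illustrated with a short sketch. It is not the authors' model: the curves and labels are random placeholders, and the wavelet level and SVM settings are arbitrary.

        # Haar wavelet features of per-voxel DCE curves fed to an SVM (illustrative sketch).
        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def haar_features(curve, level=3):
            return np.concatenate(pywt.wavedec(curve, "haar", level=level))

        curves = np.random.rand(500, 64)             # placeholder normalized DCE curves
        labels = np.random.randint(0, 2, 500)        # placeholder: 1 = low blood volume
        X = np.array([haar_features(c) for c in curves])
        clf = SVC(kernel="rbf", probability=True).fit(X, labels)
        posterior = clf.predict_proba(X)[:, 1]       # posterior probability of the low-BV class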

  7. [Eyelid hygiene for contact lens wearers with blepharitis. Comparative investigation of treatment with baby shampoo versus phospholipid solution].

    PubMed

    Khaireddin, R; Hueber, A

    2013-02-01

    Blepharitis due to Meibomian gland dysfunction (MGD) is presumed to be one of the main reasons for dry eye symptoms, which occur in up to 50% of contact lens users. Thus, MGD presumably plays an important role in dry eye in contact lens wearers. In the present prospective, randomized and double-blind trial, the efficacy of two established treatment options for MGD and blepharitis was evaluated in symptomatic contact lens wearers. In this prospective, randomized two-centre trial 53 symptomatic contact lens wearers suffering from blepharitis were included. Patients were randomly assigned to two treatment groups: group A performed lid margin hygiene using the commonly recommended mild baby shampoo (Bübchen Kinder Shampoo-extra augenmild, Bübchen Werk Ewald Hermes Pharmazeutische Fabrik GmbH, Soest, Germany) and group B performed lid margin hygiene using a phospholipid-liposome solution specially designed for lid hygiene (Blepha Cura, Optima, Moosburg/Wang, Germany), each for 4 weeks. Before as well as 4 weeks after initiation of the study the following tests were performed: standardized subjective assessment using the ocular surface disease index, non-invasive break-up time (NIBUT), and objective evaluation of lid-parallel conjunctival folds (LIPCOF) and further lid margin criteria by double-blinded evaluation of slit lamp photographs. Of the 53 symptomatic contact lens wearers suffering from blepharitis, 21 (39.6%) were randomly assigned to treatment group A and 32 (60.4%) to group B. In both treatment groups there was objective and subjective improvement of dry eye symptoms in contact lens wearers. Interestingly, there was a significantly greater improvement, subjective as well as objective, in treatment group B, which used the phospholipid-liposome solution for lid margin hygiene, compared to group A using baby shampoo. Although both therapies improved symptoms of dry eye due to blepharitis in symptomatic contact lens wearers, patients using phospholipid

  8. Operator Spreading in Random Unitary Circuits

    NASA Astrophysics Data System (ADS)

    Nahum, Adam; Vijay, Sagar; Haah, Jeongwan

    2018-04-01

    Random quantum circuits yield minimally structured models for chaotic quantum dynamics, which are able to capture, for example, universal properties of entanglement growth. We provide exact results and coarse-grained models for the spreading of operators by quantum circuits made of Haar-random unitaries. We study both 1+1D and higher dimensions and argue that the coarse-grained pictures carry over to operator spreading in generic many-body systems. In 1+1D, we demonstrate that the out-of-time-order correlator (OTOC) satisfies a biased diffusion equation, which gives exact results for the spatial profile of the OTOC and determines the butterfly speed v_B. We find that in 1+1D, the "front" of the OTOC broadens diffusively, with a width scaling in time as t^{1/2}. We address fluctuations in the OTOC between different realizations of the random circuit, arguing that they are negligible in comparison to the broadening of the front within a realization. Turning to higher dimensions, we show that the averaged OTOC can be understood exactly via a remarkable correspondence with a purely classical droplet growth problem. This implies that the width of the front of the averaged OTOC scales as t^{1/3} in 2+1D and as t^{0.240} in 3+1D (exponents of the Kardar-Parisi-Zhang universality class). We support our analytic argument with simulations in 2+1D. We point out that, in two or higher spatial dimensions, the shape of the spreading operator at late times is affected by underlying lattice symmetries and, in general, is not spherical. However, when full spatial rotational symmetry is present in 2+1D, our mapping implies an exact asymptotic form for the OTOC, in terms of the Tracy-Widom distribution. For an alternative perspective on the OTOC in 1+1D, we map it to the partition function of an Ising-like statistical mechanics model. As a result of special structure arising from unitarity, this partition function reduces to a random walk calculation which can be
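
    The Haar-random unitaries that make up such circuits can be sampled with a standard construction (QR decomposition of a complex Ginibre matrix with a phase correction). This is a generic recipe, not something specific to this paper.

        # Sample a Haar-random unitary matrix (standard QR-based construction).
        import numpy as np

        def haar_unitary(n, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            phases = np.diag(r) / np.abs(np.diag(r))
            return q * phases                          # rescale columns to fix the phase ambiguity

        u = haar_unitary(4)                            # e.g. a random two-site (two-qubit) gate
        assert np.allclose(u.conj().T @ u, np.eye(4))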

  9. Human ear detection in the thermal infrared spectrum

    NASA Astrophysics Data System (ADS)

    Abaza, Ayman; Bourlai, Thirimachos

    2012-06-01

    In this paper the problem of human ear detection in the thermal infrared (IR) spectrum is studied in order to illustrate the advantages and limitations of the most important steps of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible and thermal profile face images. The thermal data was collected using a high definition middle-wave infrared (3-5 microns) camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based ear detection method is developed for real-time segmentation of human ears in either day or night time environments. The proposed method is based on Haar features forming a cascaded AdaBoost classifier (our modified version of the original Viola-Jones approach [1] that was designed to be applied mainly to visible band images). The main advantage of the proposed method, applied to our profile face image data set collected in the thermal band, is that it is designed to reduce the learning time required by the original Viola-Jones method from several weeks to several hours. Unlike other approaches reported in the literature, which have been tested but not designed to operate in the thermal band, our method yields a high detection accuracy that reaches ~ 91.5%. Further analysis on our data set yielded that: (a) photometric normalization techniques do not directly improve ear detection performance; however, when using a certain photometric normalization technique (CLAHE) on falsely detected images, the detection rate improved by ~ 4%; (b) the high detection accuracy of our method did not degrade when we lowered the original spatial resolution of the thermal ear images. For example, even after using one third of the original spatial resolution (i.e. ~ 20% of the original computational time) of the thermal profile face images, the high ear detection accuracy of our method
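
    Applying a trained Haar cascade at run time looks roughly like the snippet below. This assumes OpenCV; the cascade file and image path are hypothetical placeholders and do not refer to the authors' classifier or data.

        # Run a (hypothetical) trained Haar cascade on a thermal profile-face image.
        import cv2

        cascade = cv2.CascadeClassifier("ear_cascade_thermal.xml")   # placeholder model file
        img = cv2.imread("profile_face_thermal.png")                 # placeholder image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ears = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in ears:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)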

  10. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI.

    PubMed

    Ramskill, N P; Bush, I; Sederman, A J; Mantle, M D; Benning, M; Anger, B C; Appel, M; Gladden, L F

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of v_i = 1.89 ± 0.03 ft day^{-1}, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution that has
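
    The reconstructions being compared here follow the generic compressed-sensing MRI formulation below; this is the standard form of the problem rather than an expression taken from the paper, with F_u the undersampled Fourier (k-space) operator, y the acquired data, and R either a wavelet l1 penalty (Haar, Daubechies3) or Total Variation.

        \hat{x} = \arg\min_x \ \tfrac{1}{2} \lVert F_u x - y \rVert_2^2 + \lambda\, R(x),
        \qquad R(x) \in \{ \lVert \Psi x \rVert_1,\ \mathrm{TV}(x) \}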

  11. Multiresolution analysis of the spatiotemporal variability in global radiation observed by a dense network of 99 pyranometers

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas

    2017-03-01

    The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function including its frequency dependence is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min^{-1} and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified, which limit the representativeness; on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken
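
    The timescale-dependent variability analysis can be approximated with a simple Haar wavelet variance per scale; the sketch below uses an ordinary decimated transform rather than the maximum overlap transform used in the study, and the time series is a random placeholder.

        # Haar wavelet "power" per timescale for a transmittance time series (sketch).
        import numpy as np
        import pywt

        def haar_power_by_scale(series, dt_seconds=1.0, level=8):
            coeffs = pywt.wavedec(series, "haar", level=level)
            details = coeffs[1:]                             # coarsest to finest detail levels
            scales = [dt_seconds * 2 ** j for j in range(level, 0, -1)]
            power = [np.mean(d ** 2) for d in details]       # mean detail energy per level
            return scales, power

        transmittance = np.random.rand(2 ** 12)              # placeholder 1 Hz record
        scales, power = haar_power_by_scale(transmittance)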

  12. Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soyoung

    Purpose: To investigate the use of the local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as the modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. The local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, the Haar wavelet transform was applied to the images. Results: Global quantitative metrics including the MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed. The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between
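
    The wavelet side of this QA scheme can be sketched as a bad-pixel flagger: isolated defects stand out as outliers in the finest-scale 2-D Haar detail coefficients of an open-field image. The code below is only a toy illustration (random flood field, one simulated dead pixel, an arbitrary outlier threshold), not the method as implemented in the study.

        # Flag isolated pixel defects from finest-scale 2-D Haar detail coefficients (toy sketch).
        import numpy as np
        import pywt

        def flag_defects(image, nsigma=6.0):
            _, (dh, dv, dd) = pywt.dwt2(image.astype(float), "haar")
            detail = np.abs(dh) + np.abs(dv) + np.abs(dd)
            thresh = detail.mean() + nsigma * detail.std()
            return np.argwhere(detail > thresh)          # coordinates at half resolution

        flood = np.random.normal(1000, 10, (512, 512))   # placeholder open-field image
        flood[100, 200] = 0.0                            # simulated dead pixel
        print(flag_defects(flood))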

  13. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average
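
    Haar-like features of the kind mentioned above are rectangle-sum differences computed from an integral image. The sketch below shows the 2-D version for brevity (the paper uses 3-D volumes), with an arbitrary "left minus right" feature; the window position and size are placeholders.

        # 2-D Haar-like feature via an integral image (illustrative; the paper uses 3-D).
        import numpy as np

        def integral_image(img):
            return img.cumsum(axis=0).cumsum(axis=1)

        def rect_sum(ii, r0, c0, r1, c1):
            # sum of img[r0:r1, c0:c1] from four corner lookups of the integral image
            total = ii[r1 - 1, c1 - 1]
            if r0 > 0:
                total -= ii[r0 - 1, c1 - 1]
            if c0 > 0:
                total -= ii[r1 - 1, c0 - 1]
            if r0 > 0 and c0 > 0:
                total += ii[r0 - 1, c0 - 1]
            return total

        def haar_left_right(ii, r, c, h, w):
            left = rect_sum(ii, r, c, r + h, c + w // 2)
            right = rect_sum(ii, r, c + w // 2, r + h, c + w)
            return left - right

        img = np.random.rand(64, 64)                 # placeholder image patch
        print(haar_left_right(integral_image(img), 10, 10, 16, 16))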

  14. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a correspondingly huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing, for the purposes of data compression and feature extraction. Unlike photographic image data, which has rather simple settings, computational field simulation data needs more careful treatment in applying the multiresolution technique. While the image data sits on a regularly spaced grid, the simulation data usually resides on a structured curvilinear grid or unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, the photographic images have very little inherent smoothness with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of

  15. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction.

    PubMed

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-07

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average

  16. Indications of noncontinuous PVT-behaviour of H2O at high P-T conditions

    NASA Astrophysics Data System (ADS)

    Mirwald, P. W.

    2003-04-01

    The understanding of the properties of H2O is still limited despite its apparently trivial chemical composition and unique importance. In contrast to the complex system of ice and amorphous water revealed at low temperature and high pressure (1), the P-T field of water is still relatively unexplored. The steam tables (2) suggest an apparently continuous volume behaviour over the covered P-T range. However, a number of diffraction experiments in the ambient temperature range at high pressure indicate changes in the co-ordination of the H2O molecules (e.g. 3). A re-examination of literature data on the melting of ice I-VII and the PVT behaviour of water in the range of 20 to 300 °C and 1 to 20 kbar has recently been conducted (4). The detailed evaluation indicated anomalous behaviour of water at some 2-4 and 7-8 kbar and thus three different regimes of steam behaviour. Our own preliminary data from compression experiments at 25 °C (5) confirm these two anomalies. In addition, the steam data indicate non-continuous compression behaviour also towards higher temperatures (4). Again, three areas of different PVT behaviour of steam may be distinguished, divided by two anomaly boundaries of shallow dP/dT slope at some 10 and 20 kbar. However, the correlation between the topologies at low and high temperatures is not clear. Solution data at high P-T conditions, e.g. on corundum (6) and on quartz (7), show significant discontinuous behaviour if Δsol./ΔP is plotted vs. pressure. Thus, at 700 °C discontinuous solubility changes are encountered at 10 kbar and at 19 kbar, which is in agreement with the steam table data. Furthermore, a continuation of these anomalies to even higher temperatures is suggested by the steam table data. If this is correct, these anomalies would be of significance for partitioning and transport processes in the deep crust and the upper mantle of the Earth. (1) Petrenko and Whitworth (1999): Physics of Ice, Oxford Univ. Press, 1999. (2) Haar

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livine, Etera R.

    We introduce the set of framed (convex) polyhedra with N faces as the symplectic quotient C^{2N}//SU(2). A framed polyhedron is then parametrized by N spinors living in C^2 satisfying suitable closure constraints and defines a usual convex polyhedron plus extra U(1) phases attached to each face. We show that there is a natural action of the unitary group U(N) on this phase space, which changes the shape of faces and allows one to map any (framed) polyhedron onto any other with the same total (boundary) area. This identifies the space of framed polyhedra with the Grassmannian space U(N)/(SU(2)×U(N−2)). We show how to write averages of geometrical observables (polynomials in the faces' areas and the angles between them) over the ensemble of polyhedra (distributed uniformly with respect to the Haar measure on U(N)) as polynomial integrals over the unitary group, and we provide a few methods to compute these integrals systematically. We also use the Itzykson-Zuber formula from matrix models as the generating function for these averages and correlations. In the quantum case, a canonical quantization of the framed polyhedron phase space leads to the Hilbert space of SU(2) intertwiners (or, in other words, SU(2)-invariant states in tensor products of irreducible representations). The total boundary area as well as the individual face areas are quantized as half-integers (spins), and the Hilbert spaces for fixed total area form irreducible representations of U(N). We define semi-classical coherent intertwiner states peaked on classical framed polyhedra and transforming consistently under U(N) transformations. And we show how the U(N) character formula for unitary transformations is to be considered as an extension of the Itzykson-Zuber formula to the quantum level and generates the traces of all polynomial observables over the Hilbert space of intertwiners. We finally apply the same formalism to two dimensions and show that classical (convex) polygons can be described
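
    The simplest instance of the "polynomial integrals over the unitary group" mentioned above is the standard first-moment formula for the Haar measure, quoted here for orientation rather than taken from the paper:

        \int_{U(N)} dU \; U_{ij}\,\overline{U}_{kl} \;=\; \frac{1}{N}\,\delta_{ik}\,\delta_{jl}

    Higher moments are handled systematically by Weingarten-type formulas, which is the general kind of machinery such averages and correlations rely on.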

  18. A Quantum Annealing Computer Team Addresses Climate Change Predictability

    NASA Technical Reports Server (NTRS)

    Halem, M. (Principal Investigator); LeMoigne, J.; Dorband, J.; Lomonaco, S.; Yesha, Ya.; Simpson, D.; Clune, T.; Pelissier, C.; Nearing, G.; Gentine, P.; hide

    2016-01-01

    converted to a BM algorithm implementation on the QAC. The first integer adder has been implemented on the D-Wave 2X by A. Shehab; it will perform Haar wavelet image compression of MODIS scenes. Finally, based on the next generations of QACs, we are preparing a 5-year performance road map on the scalability of the current QAC algorithms.

  19. Gravity Waves characteristics and their impact on turbulent transport above an Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    Cava, Daniela; Giostra, Umberto; Katul, Gabriel

    2016-04-01

    -resolution decomposition based on the Haar wavelet has been applied to separate gravity waves from turbulent fluctuations in case of a sufficiently defined spectral gap. Statistics computed after removing wavy disturbances highlight the large impact of gravity waves on second order turbulent quantities. One of the most impacted parameters is turbulent kinetic energy, in particular in the longitudinal and lateral components. The effect of wave activity on momentum and scalar fluxes is more complex because waves can produce large errors in sign and magnitude of computed turbulent fluxes or they themselves can contribute to intermittent turbulent mixing. The proposed filtering procedure based on the multi-resolution decomposition restored the correct sign in the turbulent sensible heat flux values. These findings highlight the significance of a correct evaluation of the impact of wave components when the goal is determining the turbulent transport component of mass and energy in the SBL.

  20. Michael Gottlieb Hansch (1683-1749), Ulrich Junius (1670-1726) and the attempt to edit the works and letters of Johannes Kepler. (German Title: Michael Gottlieb Hansch (1683-1749), Ulrich Junius (1670-1726) und der Versuch einer Edition der Werke und Briefe Johannes Keplers)

    NASA Astrophysics Data System (ADS)

    Döring, Detlef

    Johannes Kepler's manuscripts, which remained after his death, suffered a troubled fate. It was not possible to collect them in Germany and to work with them systematically, because their importance was strikingly underestimated. Only at the beginning of the 18th century did U. Junius in Leipzig try, unsuccessfully, to collect and publish the most important manuscripts. Afterwards M.G. Hansch took up this plan and pursued it until the end of his life. However, the only result was one volume of unpublished letters which appeared in 1718. The hoped-for collected works could not be realized. These events are described in detail, especially the efforts of Junius and Hansch as well as the opposition which eventually led to the failure of both attempts.

  1. FOREWORD: The 70th birthday of Professor Stig Stenholm The 70th birthday of Professor Stig Stenholm

    NASA Astrophysics Data System (ADS)

    Suominen, Kalle-Antti

    2010-09-01

    It is not easy to assess, or even to describe correctly a long and distinguished career that started about the time when I was born. In 1964 Stig Stenholm got both an engineering degree at the Helsinki University of Technology (HUT), and an MSc degree (in Mathematics) at the University of Helsinki. The two degrees demonstrate Stig's ability to understand both complex mathematics and experimental physics. Statistical physics or rather, quantum liquids, was the field in which Stig got his DPhil at Oxford in 1967, under the guidance of Dirk ter Haar. It is interesting that together they worked on studying fermions in a bosonic background [1]; at the time this meant, of course, 3He atoms as impurities in 4He liquid, but nowadays one would immediately connect such systems to the physics of cold atomic gases. The postdoctoral period in 1967-1968 at Yale University brought Stig in contact with Willis Lamb and laser physics [2]. Back in Finland, Stig's career in the 1970s was dominated by theoretical studies of gas lasers, especially pressure and collision effects on spectral lines and saturation spectroscopy, together with his first PhD student, Rainer Salomaa. A professorship at the University of Helsinki came in 1974, and in 1980 an important era started as Stig became the scientific director of the Research Institute for Theoretical Physics (TFT). At that time he also developed the semiclassical theory of laser cooling especially with Juha Javanainen. The laser spectroscopy work led to a textbook in 1984 [3], and the semiclassical laser cooling theory was summarized in a review article in 1986 [4]. These were not, of course, his only interests, as he also worked on free-electron lasers, ring-laser gyroscopes, multiphoton processes and quantum amplifiers. In an article written in 1990 in honour of Olli Lounasmaa [5], the founder of the famous Low Temperature Laboratory at HUT, Stig mentions that one of his most memorable achievements was acting as a bridge between the

  2. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.

  3. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

    The last decade has seen widespread publication of crater detection algorithms (CDA) with steadily improving detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these situations by CDA have 10 pixels in diameter (or about 2 km in MOC-WA images) [2] or can go down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions makes it possible to unveil craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push the automated detection of craters to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions that contain crescent highlight and shadow shapes indicating the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes) and on the extraction of texture features (Haar-like) and classification by Adaboost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures are. The algorithm is intensively tested on Martian HiRISE images from different locations on the planet, in order to cover the largest range of surface types from the geological point of view (different ages and crater densities) and also from the imaging or textural perspective (different degrees of smoothness/roughness). The quality of the detections obtained is clearly dependent on the dimension of the craters

  4. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

    the ω-squared model with similarity, but not at high frequency. Interestingly, the high-frequency scaling implied by this latter choice of η and κ corresponds to an ω-squared model with constant M_0 f_0^4, a scaling law proposed by Nuttli, although questioned recently by Haar and others. Simple scaling with κ equal to unity and η equal to the moment ratio would work if the high-frequency spectral decay were ω^-1.5 instead of ω^-2. Just the required decay is exhibited by the stochastic source model recently proposed by Joyner, if the dislocation-time function is deconvolved out of the spectrum. Simulated motions derived from such source models could be used as subevents rather than recorded motions, as is usually done. This strategy is a promising approach to the simulation of ground motion from an extended rupture.
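
    For reference, the scaling discussion above refers to the standard ω-squared ("omega-square") source spectrum; the textbook form is quoted below and is not taken from this paper. Self-similar scaling corresponds to constant M_0 f_0^3 (constant stress drop), whereas the Nuttli-type scaling mentioned above keeps M_0 f_0^4 constant.

        \Omega(f) \;=\; \frac{M_0}{1 + (f/f_0)^2}, \qquad
        \text{similarity: } M_0 f_0^3 = \text{const}, \qquad
        \text{Nuttli: } M_0 f_0^4 = \text{const}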

  5. Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition

    NASA Astrophysics Data System (ADS)

    Meckley, John R.

    1995-09-01

    The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computer-aided tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform for a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then by convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. The response area is defined to contain 95% of the total
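
    The overall idea of enhancing linear features through the Radon domain can be demonstrated with a toy example; the snippet below uses a crude amplitude threshold on the sinogram purely for illustration, not the wavelet-pyramid filter described in the paper, and assumes scikit-image.

        # Toy Radon-domain enhancement of a linear feature (not the paper's wavelet filter).
        import numpy as np
        from skimage.transform import radon, iradon

        img = np.zeros((128, 128))
        img[64, 20:108] = 1.0                                     # a horizontal line
        img += np.random.normal(0, 0.3, img.shape)                # additive noise

        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sino = radon(img, theta=theta, circle=False)
        filtered = np.where(sino > 0.75 * sino.max(), sino, 0.0)  # keep only strong line peaks
        recon = iradon(filtered, theta=theta, circle=False)       # filtered backprojection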

  6. Concurrent segmentation of the prostate on MRI and CT via linked statistical shape models for radiotherapy planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, Najeeb; Toth, Robert; Chappelow, Jonathan

    2012-04-15

    Purpose: Prostate gland segmentation is a critical step in prostate radiotherapy planning, where dose plans are typically formulated on CT. Pretreatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler to perform, compared to delineation on CT. In this work, the authors present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate boundary delineations of the SOI on one of the modalities may not be readily available, or difficult to obtain, for training a SSM. In this work the authors apply the LSSM in the context of multimodal prostate segmentation for radiotherapy planning, where the prostate is concurrently segmented on MRI and CT. Methods: The framework comprises a number of logically connected steps. The first step utilizes multimodal registration of MRI and CT to map 2D boundary delineations of the prostate from MRI onto corresponding CT images, for a set of training studies. Hence, the scheme obviates the need for expert delineations of the gland on CT for explicitly constructing a SSM for prostate segmentation on CT. The delineations of the prostate gland on MRI and CT allows for 3D reconstruction of the prostate shape which facilitates the building of the LSSM. In order to perform concurrent prostate MRI and CT segmentation using the LSSM, the authors employ a region-based level set approach where the authors deform the evolving prostate boundary to simultaneously fit to MRI and CT images in which voxels are classified to be either part of the prostate or outside the prostate. The classification is facilitated by using a combination of MRI-CT probabilistic spatial atlases and a random forest classifier, driven by gradient and Haar features

  7. On the contribution of Heinrich Bruns to theoretical geometrical optics. With consideration of his correspondence with scientists of the Zeiss Company in Jena 1888-1893. (German Title: Über den Beitrag von Heinrich Bruns zur theoretischen geometrischen Optik Unter Berücksichtigung seines Briefwechsels mit Wissenschaftlern der Zeiss-Werke in Jena 1888-1893)

    NASA Astrophysics Data System (ADS)

    Ilgauds, Hans-Joachim; Münzel, Gisela

    This paper describes the works of Heinrich Bruns, director of the Leipzig University Observatory, on theoretical geometrical optics, which followed an outstanding tradition in Leipzig. Bruns and his pupils did not stop at theoretical considerations, but applied their findings to practical questions. Bruns' correspondence with opticians of the Zeiss Company in Jena, so far known only fragmentarily, gives impressive evidence of their friendly relationship characterized by mutual regard and stimulation.

  8. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    NASA Astrophysics Data System (ADS)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar
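
    As a small worked example of the fast Haar transform computation mentioned at the end of the review (using the plain average/difference normalization; the orthonormal convention divides by the square root of 2 instead):

        (4, 6, 10, 12) \;\longrightarrow\; \text{approximations } \left(\tfrac{4+6}{2}, \tfrac{10+12}{2}\right) = (5, 11), \quad
        \text{details } \left(\tfrac{4-6}{2}, \tfrac{10-12}{2}\right) = (-1, -1)

    Repeating the split on the approximations gives the next level: approximation (8) and detail (-3).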

  9. Inelastic processes in atomic collisions involving ground state and laser-prepared atoms

    NASA Astrophysics Data System (ADS)

    Planje, Willem Gilles

    1999-11-01

    This thesis describes experiments in which ions or atoms with a given velocity are directed at an ensemble of target atoms. When two particles approach each other closely enough, an interaction takes place in which a variety of processes can occur. These processes result in specific end products. Knowledge about the interaction between two collision partners is obtained by examining which end products are formed, and to what extent. An important quantity that influences the possible processes is the relative velocity of the two nuclei, i.e. the collision velocity. When the collision velocity is sufficiently small, the different reaction mechanisms can often be predicted well, both qualitatively and quantitatively, by regarding the system as a short-lived molecule built up from the two colliding particles. The different processes that can occur are classified according to the end products that are formed. Roughly the following classification can be made: 1. the internal structure of the end products is identical to that of the initial products; this is called an elastic collision. 2. one or both of the particles is brought into an excited state (or ionized); these are processes in which the rearranged electrons remain with the original nucleus, and we speak of excitation or ionization. 3. one or more electrons are found at the other nucleus after the collision (possibly in an excited state); we then speak of electron transfer. The first part of this dissertation describes collision experiments between helium ions and sodium atoms in which the process of electron transfer is investigated. This mechanism involves the outermost sodium electron, which can 'jump' relatively easily to the helium ion when the latter comes close to the sodium atom. In doing so, the electron can

  10. Scatterometry

    NASA Astrophysics Data System (ADS)

    Stoffelen, Adrianus Cornelis Maria

    1996-10-01

    A multitude of meteorological measurements is available every day. Most of these observations, however, are located over land, and wind observations over the (North Atlantic) ocean in particular are scarce. With a westerly air flow this is a clear limitation for the weather and wave forecasts for the Netherlands, precisely when the danger of, for example, storm or flooding is greatest. The surface wind also plays a major role in the Earth's climate system and is the most important driver of the ocean circulation. The ocean circulation in turn is crucial for the phenomena associated with, for example, El Niño. This thesis is about the scatterometer, an instrument that from space, even under cloud cover, provides accurate and reliable information on the wind at the ocean surface with a high degree of spatial consistency. During the Second World War, radars on board ships were frequently used to detect enemy vessels. It was found that detection deteriorated as the wind at the sea surface increased; this empirically demonstrated the principle of a wind scatterometer. The idea of measuring the wind at the sea surface by means of radar soon developed. From an aircraft or a satellite, a microwave beam is directed at the sea surface at an oblique angle. The microwave radiation, usually with a wavelength of a few centimetres, is scattered by the rough surface, and a small fraction of the transmitted pulse returns to the detector section of the scatterometer. The physical phenomenon relevant to the operation of the scatterometer is the presence of so-called capillary-gravity waves on the sea surface. These waves have a wavelength of a few centimetres and respond almost instantaneously to the wind strength. The scattering of microwaves in turn