Sample records for box counting algorithm

  1. Improving the signal subtle feature extraction performance based on dual improved fractal box dimension eigenvectors

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong

    2018-05-01

    Because of the limitations of the traditional fractal box-counting dimension algorithm in subtle feature extraction of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal's instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors used as signal subtle features, which were fed to a grey relation algorithm for radiation source signal recognition. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved algorithm better extracts the subtle distribution characteristics of the signal under different reconstruction phase spaces and achieves a better recognition effect with good real-time performance.
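
    The record above combines a Hilbert-transform envelope with fractal box-counting dimension features. The Python sketch below illustrates only that generic combination (plain box counting on the instantaneous amplitude and on the raw signal); it is not the authors' dual improved generalized algorithm or their grey relation classifier, and the test signal, scales, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def box_counting_dimension_1d(signal, n_scales=8):
    """Plain box-counting dimension of a 1D curve (sample index vs. amplitude),
    with both axes normalised to [0, 1]."""
    x = np.linspace(0.0, 1.0, len(signal))
    y = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    sizes = 1.0 / 2 ** np.arange(1, n_scales + 1)            # box side lengths
    counts = []
    for s in sizes:
        cols = np.floor(x / s).astype(int)                   # grid cell touched by each sample
        rows = np.floor(y / s).astype(int)
        counts.append(len(set(zip(cols.tolist(), rows.tolist()))))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope                                              # slope of log N vs log(1/s)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4000)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
amplitude = np.abs(hilbert(sig))                              # instantaneous amplitude (envelope)
feature = [box_counting_dimension_1d(amplitude),              # first eigenvector component
           box_counting_dimension_1d(sig)]                    # second component: raw signal
print(feature)
```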

  2. Edge detection of optical subaperture image based on improved differential box-counting method

    NASA Astrophysics Data System (ADS)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic-mirror system, the image from an optical synthetic aperture system is often more complex at the edges because of the gaps between segments, which makes stitching difficult. It is therefore necessary to extract the edges of subaperture images to achieve effective stitching. Fractal dimension, as a measured feature, can describe image surface texture characteristics, which provides a new approach for edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements: by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface; and an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that the fractal dimension cannot be calculated accurately for small images while preserving the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm can effectively eliminate noise and has a lower false detection rate than traditional edge detection algorithms. In addition, the algorithm maintains the integrity and continuity of image edges while retaining important edge information.
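
    For reference, a minimal sketch of the classical fixed-height differential box-counting (DBC) estimator that the record above improves upon is given below; the adaptive-height boxes and the super-resolution enlargement step of the paper are not reproduced, and the block sizes are arbitrary assumptions.

```python
import numpy as np

def differential_box_counting(img, block_sizes=(2, 4, 8, 16, 32)):
    """Classical fixed-height DBC estimate of the fractal dimension of a grayscale image.
    For an s x s block, the number of boxes of height h = s*G/M needed to cover the
    intensity surface is floor(max/h) - floor(min/h) + 1."""
    img = np.asarray(img, dtype=float)
    M = min(img.shape)
    G = img.max() + 1.0
    log_inv_r, log_N = [], []
    for s in block_sizes:
        h = max(1.0, s * G / M)                               # fixed box height at this scale
        n_r = 0
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = img[i:i + s, j:j + s]
                n_r += int(np.floor(block.max() / h) - np.floor(block.min() / h)) + 1
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(n_r))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return slope
```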

  3. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    For counting of both colonies and plaques, there are numerous applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Many researchers and developers have recently worked on such systems, but investigation shows that existing systems have problems, mainly with image acquisition and image segmentation. To acquire colony images of good quality, an illumination box was constructed that provides both front lighting and back lighting, selectable by the user according to the properties of the colony dish. With the illumination box, lighting is uniform and the colony dish can be placed in the same position every time, which simplifies image processing. The developed colony image segmentation algorithm consists of the sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.

  4. The association between reconstructed phase space and Artificial Neural Networks for vectorcardiographic recognition of myocardial infarction.

    PubMed

    Costa, Cecília M; Silva, Ittalo S; de Sousa, Rafael D; Hortegal, Renato A; Regis, Carlos Danilo M

    Myocardial infarction is one of the leading causes of death worldwide. As it is life-threatening, it requires immediate and precise treatment, and research and innovation in the field of biomedical signal processing are consequently in high demand. This paper proposes the association of Reconstructed Phase Space and Artificial Neural Networks for Vectorcardiography Myocardial Infarction Recognition. The algorithm gives its best results for the box size 10 × 10 and the combination of four parameters: box counting (Vx), box counting (Vz), self-similarity method (Vx) and self-similarity method (Vy), with sensitivity = 92%, specificity = 96% and accuracy = 94%. The topographic diagnosis presented different performances for different types of infarctions, with better results for anterior wall infarctions and less accurate results for inferior infarctions. Copyright © 2018 Elsevier Inc. All rights reserved.
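
    A minimal sketch of the two ingredients named above, assuming a generic scalar signal rather than actual VCG leads: a 2D reconstructed phase space built by time-delay embedding, and a box-counting feature obtained by counting occupied cells on a 10 × 10 grid (mirroring the reported box size). The delay and grid values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def reconstructed_phase_space(x, delay=10):
    """2D time-delay embedding: points (x[t], x[t + delay])."""
    return np.column_stack([x[:-delay], x[delay:]])

def occupied_box_count(points, grid=10):
    """Box-counting feature: number of occupied cells on a grid x grid partition
    of the trajectory's bounding box (grid=10 mirrors the 10 x 10 box size above)."""
    mins = points.min(axis=0)
    spans = np.ptp(points, axis=0) + 1e-12
    cells = np.floor((points - mins) / spans * grid).clip(0, grid - 1).astype(int)
    return len({tuple(c) for c in cells})

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40 * np.pi, 5000)) + 0.05 * rng.standard_normal(5000)
rps = reconstructed_phase_space(signal)
print(occupied_box_count(rps))
```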

  5. Analysis of Texture Using the Fractal Model

    NASA Technical Reports Server (NTRS)

    Navas, William; Espinosa, Ramon Vasquez

    1997-01-01

    Properties such as the fractal dimension (FD) can be used for feature extraction and classification of regions within an image. The FD measures the degree of roughness of a surface, so this number is used to characterize a particular region in order to differentiate it from another. Two basic approaches to measuring FD are discussed in the literature: the blanket method and the box-counting method. Both attempt to measure FD by estimating the change in surface area with respect to the change in resolution. We tested both methods, but box counting proved computationally faster and gave better results. Differential box counting (DBC) was used to segment a collage containing three textures. The FD is independent of directionality and brightness, so five features derived from the original image were used to account for directionality and gray-level biases. FD cannot be measured at a single point, so we use a window that slides across the image, assigning the FD value to the pixel at the center of the window. Windowing blurs the boundaries of adjacent classes, so an edge-preserving, feature-smoothing algorithm is used to improve classification within segments and to sharpen the boundaries. Segmentation using DBC was 90.89% accurate.

  6. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    NASA Astrophysics Data System (ADS)

    Stephenson, W. Kirk

    2009-08-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes.

  7. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    ERIC Educational Resources Information Center

    Stephenson, W. Kirk

    2009-01-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)
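
    The two records above describe a visual counting procedure; the short sketch below only checks answers against the usual textbook significant-figure rules (leading zeros never count; trailing zeros count only when a decimal point is present) and does not reproduce the box-and-dot notation itself.

```python
def count_sig_figs(numeral: str) -> int:
    """Count significant figures in a decimal numeral given as a string,
    using standard textbook rules (not the visual box-and-dot procedure)."""
    s = numeral.strip().lstrip("+-")
    has_point = "." in s
    digits = s.replace(".", "")
    digits = digits.lstrip("0")          # leading zeros are never significant
    if not has_point:
        digits = digits.rstrip("0")      # trailing zeros with no decimal point are not counted
    return len(digits)

assert count_sig_figs("0.00520") == 3
assert count_sig_figs("4300") == 2
assert count_sig_figs("4300.") == 4
```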

  8. 7 CFR 51.1310 - Sizing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts...

  9. 7 CFR 51.1310 - Sizing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts 135 to 180, inclusive; and three...

  10. 7 CFR 51.1310 - Sizing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts 135 to 180, inclusive; and three...

  11. 7 CFR 51.1269 - Sizing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts 135 to 180, inclusive; and...

  12. 7 CFR 51.1310 - Sizing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts...

  13. 7 CFR 51.1269 - Sizing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts 135 to 180, inclusive; and...

  14. Local box-counting dimensions of discrete quantum eigenvalue spectra: Analytical connection to quantum spectral statistics

    NASA Astrophysics Data System (ADS)

    Sakhr, Jamal; Nieminen, John M.

    2018-03-01

    Two decades ago, Wang and Ong, [Phys. Rev. A 55, 1522 (1997)], 10.1103/PhysRevA.55.1522 hypothesized that the local box-counting dimension of a discrete quantum spectrum should depend exclusively on the nearest-neighbor spacing distribution (NNSD) of the spectrum. In this Rapid Communication, we validate their hypothesis by deriving an explicit formula for the local box-counting dimension of a countably-infinite discrete quantum spectrum. This formula expresses the local box-counting dimension of a spectrum in terms of single and double integrals of the NNSD of the spectrum. As applications, we derive an analytical formula for Poisson spectra and closed-form approximations to the local box-counting dimension for spectra having Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE), and Gaussian symplectic ensemble (GSE) spacing statistics. In the Poisson and GOE cases, we compare our theoretical formulas with the published numerical data of Wang and Ong and observe excellent agreement between their data and our theory. We also study numerically the local box-counting dimensions of the Riemann zeta function zeros and the alternate levels of GOE spectra, which are often used as numerical models of spectra possessing GUE and GSE spacing statistics, respectively. In each case, the corresponding theoretical formula is found to accurately describe the numerically computed local box-counting dimension.
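
    The following sketch estimates a local box-counting dimension numerically for a Poisson spectrum (i.i.d. exponential spacings), the simplest case treated above. It is a generic slope estimator over a narrow range of box sizes, not the closed-form NNSD-based formula derived in the paper, and the box-size fractions are illustrative assumptions.

```python
import numpy as np

def local_box_counting_dimension(levels, eps=(0.02, 0.01, 0.005, 0.0025)):
    """Slope of log N(eps) vs log(1/eps) over a limited range of box sizes,
    for a spectrum rescaled to [0, 1]; a numerical stand-in for the local dimension."""
    x = (levels - levels.min()) / (levels.max() - levels.min())
    counts = [np.unique(np.floor(x / e).astype(int)).size for e in eps]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
poisson_spectrum = np.cumsum(rng.exponential(1.0, size=20000))   # i.i.d. exponential spacings
print(local_box_counting_dimension(poisson_spectrum))
```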

  15. Swarm Observations: Implementing Integration Theory to Understand an Opponent Swarm

    DTIC Science & Technology

    2012-09-01

    [List-of-figures excerpt: Figure 14, box counts and local dimension plots for the "Rally" scenario; Figure 21, spatial entropy over time for the "Avoid" scenario; Figure 22, box counts and local ...; Figure 27, spatial entropy over time for the "Rally-integration" scenario; Figure 28, box counts and ...]

  16. Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.

    2009-10-01

    This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. Following the attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool to help the critic decide whether an artistic work is original or not.
    Program summary
    Program title: Fractal Analysis v01
    Catalogue identifier: AEEG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29 690
    No. of bytes in distributed program, including test data, etc.: 4 967 319
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30M
    Classification: 14
    Nature of problem: Estimating the fractal dimension of images.
    Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface.
    Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format.
    Running time: In a first approximation, the algorithm is linear.

  17. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes.

    PubMed

    Cuatianquiz Lima, Cecilia; Macías Garcia, Constantino

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January-October 2009-2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity.

  18. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting.

    PubMed

    Waithe, Dominic; Rennert, Peter; Brostow, Gabriel; Piper, Matthew D W

    2015-01-01

    We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. Given the average time per vial for manual counting is approximately 40 seconds, our software introduces a timesaving advantage for experiments starting with as few as 20 vials. We also describe an optional acrylic box to be used as a digital camera mount and to provide controlled lighting during image acquisition which will guarantee the conditions used in this study.

  19. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting

    PubMed Central

    Waithe, Dominic; Rennert, Peter; Brostow, Gabriel; Piper, Matthew D. W.

    2015-01-01

    We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. Given the average time per vial for manual counting is approximately 40 seconds, our software introduces a timesaving advantage for experiments starting with as few as 20 vials. We also describe an optional acrylic box to be used as a digital camera mount and to provide controlled lighting during image acquisition which will guarantee the conditions used in this study. PMID:25992957
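
    A quick back-of-the-envelope check of the break-even claim in the two records above (manual counting at roughly 40 s per vial versus a one-off ~10 minute training cost plus a few seconds of automated analysis per image). The 5 s per-image figure is an assumption standing in for "a few seconds".

```python
# Break-even check using the timings quoted in the abstract; the 5 s automated
# per-image time is an assumed stand-in for "a few seconds".
manual_per_vial = 40          # seconds of manual counting per vial
training_once = 10 * 60       # seconds of one-off labelling/training
auto_per_vial = 5             # assumed seconds of automated analysis per image

for n_vials in (10, 20, 30):
    manual = n_vials * manual_per_vial
    automated = training_once + n_vials * auto_per_vial
    print(n_vials, manual, automated)
# At 20 vials: 800 s manual vs ~700 s automated, consistent with the stated
# break-even of "as few as 20 vials".
```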

  20. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
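
    A much simplified illustration of reducing quantization error by optimizing grid placement: for each box size, take the minimum count over a set of translational grid offsets. The paper's pattern search over translations and rotations of 3D root digitizations is not reproduced; this brute-force 2D version and its step count are assumptions for illustration.

```python
import numpy as np

def box_count(points, size, offset):
    """Number of grid cells of side `size` occupied by a 2D point cloud,
    with the grid origin shifted by `offset`."""
    cells = np.floor((points - offset) / size).astype(int)
    return len({tuple(c) for c in cells})

def min_box_count(points, size, n_steps=8):
    """Reduce quantization error by taking the minimum count over a small set of
    translational grid offsets (a brute-force stand-in for the pattern search)."""
    shifts = np.linspace(0.0, size, n_steps, endpoint=False)
    return min(box_count(points, size, np.array([dx, dy]))
               for dx in shifts for dy in shifts)
```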

  1. 7 CFR 51.1269 - Sizing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... indicated on the package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth...

  2. 7 CFR 51.1269 - Sizing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... indicated on the package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth...

  3. 25 CFR 542.21 - What are the minimum internal control standards for drop and count for Tier A gaming operations?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... locked table game drop boxes shall be removed from the tables by a person independent of the pit shift... boxes shall be performed by a minimum of two persons, at least one of whom is independent of the pit... count team shall be independent of transactions being reviewed and counted. The count team shall be...

  4. 25 CFR 542.41 - What are the minimum internal control standards for drop and count for Tier C gaming operations?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... locked table game drop boxes shall be removed from the tables by a person independent of the pit shift... boxes shall be performed by a minimum of two persons, at least one of whom is independent of the pit.... (4) The count team shall be independent of transactions being reviewed and counted. The count team...

  5. Pre- and post-experimental manipulation assessments confirm the increase in number of birds due to the addition of nest boxes

    PubMed Central

    Cuatianquiz Lima, Cecilia

    2016-01-01

    Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January–October 2009–2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity. PMID:26998410

  6. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended to work with four-dimensional objects stored in comma-separated-values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1,2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended to support 4D objects stored in comma-separated-values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df = ln(5)/ln(2) (Fig. 1). The algorithm could be extended, with minimum effort, to a higher number of dimensions. Easy integration with other applications by using the very simple comma-separated-values file format for storing multi-dimensional images. Implementation of a χ2 test as a criterion for deciding whether an object is fractal or not. User-friendly graphical interface.
    Fig. 1: Hyper-Fractal Analysis test on the Sierpinski hypertetrahedron 4D gasket (Df = ln(5)/ln(2) ≅ 2.32).
    Running time: In a first approximation, the algorithm is linear [2].
    References: [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Computer Physics Communications, 181 (2010) 831-832. [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Computer Physics Communications, 180 (2009) 1999-2001. [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine, 104 Issue 3 (2011) 452-460.
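
    A compact NumPy sketch of the underlying idea, box counting generalized to N dimensions, applicable to a 4D point cloud such as (x, y, z, t) samples; it is not the released Visual Basic program, and the scales chosen are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension_nd(points, n_scales=5):
    """Box-counting dimension of an N-dimensional point cloud
    (rows = points, columns = coordinates, e.g. x, y, z, t)."""
    pts = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    boxes_per_axis = 2 ** np.arange(1, n_scales + 1)            # 2, 4, 8, ...
    counts = []
    for k in boxes_per_axis:
        edges = [np.linspace(0.0, 1.0, k + 1)] * pts.shape[1]
        hist, _ = np.histogramdd(pts, bins=edges)
        counts.append(np.count_nonzero(hist))                   # occupied boxes at this scale
    slope, _ = np.polyfit(np.log(boxes_per_axis), np.log(counts), 1)
    return slope
```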

  7. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    PubMed

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies on the wavelet transform and fractal modeling applied to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, as applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, followed by identifying the fractal dimension of each windowed section using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is possible to arrive at enhanced features for detection of cancerous zones. In this paper, we have attempted to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2. Using this algorithm, the original image is first segmented into appropriate fractal boxes and the fractal dimension of each windowed section is extracted. Following from that, by applying a maximum-level threshold on the fractal dimension matrix, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed using the standard orthogonal wavelet transform with the db2 wavelet at three resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is used for detection of breast cancer regions by applying an appropriate threshold. Our simulations indicate an accuracy of 90.9% for mass detection and 88.99% for microcalcification detection using the F1W2 method. For classification of detected microcalcifications into benign and malignant cases, eight features are identified and used in a radial basis function neural network, giving a classification accuracy of 92% with the F1W2 method.

  8. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist.

    PubMed

    Terwee, Caroline B; Mokkink, Lidwine B; Knol, Dirk L; Ostelo, Raymond W J G; Bouter, Lex M; de Vet, Henrica C W

    2012-05-01

    The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties.
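
    A minimal illustration of the "worst score counts" rule described above: the quality score of a COSMIN box is simply the lowest rating among its items. The rating labels mirror the four response options; the example data are invented.

```python
# "Worst score counts": the quality score of a COSMIN box is the lowest item rating.
RATINGS = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def box_quality(item_ratings):
    """item_ratings: per-item ratings for one measurement-property box."""
    return min(item_ratings, key=RATINGS.get)

print(box_quality(["excellent", "good", "good", "fair"]))   # -> 'fair'
```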

  9. 25 CFR 542.31 - What are the minimum internal control standards for drop and count for Tier B gaming operations?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... locked table game drop boxes shall be removed from the tables by a person independent of the pit shift... boxes shall be performed by a minimum of two persons, at least one of whom is independent of the pit... digital record, within seven (7) days by an employee independent of the count. (ii) [Reserved] (2) Count...

  10. BOX-COUNTING DIMENSION COMPUTED BY α-DENSE CURVES

    NASA Astrophysics Data System (ADS)

    García, G.; Mora, G.; Redtwitz, D. A.

    We introduce a method to reduce the calculation of the box-counting dimension of subsets of the unit cube I^n, n > 1, to the real case. The procedure is based on the existence in I^n of special types of α-dense curves (a generalization of the space-filling curves) called δ-uniform curves.

  11. Hematology of the Pascagoula map turtle (Graptemys gibbonsi) and the southeast Asian box turtle (Cuora amboinensis).

    PubMed

    Perpiñán, David; Hernandez-Divers, Sonia M; Latimer, Kenneth S; Akre, Thomas; Hagen, Chris; Buhlmann, Kurt A; Hernandez-Divers, Stephen J

    2008-09-01

    Turtle populations are decreasing dramatically due to habitat loss and collection for the food and pet market. This study sought to determine hematologic values in two species of turtles to help assess health status of captive and wild populations. Blood samples were collected from 12 individuals of the Pascagoula map turtle (Graptemys gibbonsi) and seven individuals of the southeast Asian box turtle (Cuora amboinensis) from the Savannah River Ecology Laboratory (South Carolina, USA). The hematologic data included hematocrit, total solids, erythrocyte count, leukocyte count, and differential and percentage leukocyte counts. Low hematocrit values and high basophil counts were found in both species. The basophil was the most abundant leukocyte in the Pascagoula map turtle (median = 0.80 x 10(9)/L), whereas in the Southeast Asian box turtle the most abundant leukocyte was the heterophil (median = 2.06 x 10(9)/L).

  12. On the box-counting dimension of the potential singular set for suitable weak solutions to the 3D Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Wang, Yanqing; Wu, Gang

    2017-05-01

    In this paper, we are concerned with the upper box-counting dimension of the set of possible singular points in the space-time of suitable weak solutions to the 3D Navier-Stokes equations. By taking full advantage of the pressure Π in terms of ...

  13. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
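
    The sketch below shows only the maximum-likelihood Box-Cox transformation itself, using SciPy's built-in estimator; the RBF model base, the Gauss-Newton identification, and the D-optimality-based forward regression of the letter are not reproduced, and the synthetic output data are an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)     # positive, skewed "system output"

# Maximum-likelihood Box-Cox exponent and the transformed output.
y_bc, lam = stats.boxcox(y)
print("lambda =", lam)

# The transform itself: (y**lam - 1)/lam, with log(y) as the lam -> 0 limit.
assert np.allclose(y_bc, (y ** lam - 1.0) / lam)
```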

  14. Clustering box office movie with Partition Around Medoids (PAM) Algorithm based on Text Mining of Indonesian subtitle

    NASA Astrophysics Data System (ADS)

    Alfarizy, A. D.; Indahwati; Sartono, B.

    2017-03-01

    Indonesia was the largest target market for the Hollywood movie industry in Southeast Asia in 2015. Hollywood movies distributed in Indonesia target people of all ages, including children. Low awareness of the need to guide children while watching movies means they may watch films of any rating, even ones unsuitable for their age. Even after being translated into Bahasa Indonesia and passing the censorship phase, words that are unsuitable for children still remain. The purpose of this research is to cluster box office Hollywood movies based on Indonesian subtitles, revenue, IMDb user rating and genres, as a reference for adults choosing the right movies for their children to watch. Text mining is used to extract words from the subtitles and count the frequency of three groups of words (bad words, sexual words and terror words), while the Partition Around Medoids (PAM) algorithm with the Gower similarity coefficient as the proximity matrix is used as the clustering method. We clustered 624 movies from 2006 until the first half of 2016 from IMDb. The clustering with the highest silhouette coefficient (0.36) is the one with 5 clusters. Animation, Adventure and Comedy movies with high revenue, as in cluster 5, are recommended for children to watch, while Comedy movies with high revenue, as in cluster 4, should be avoided.
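
    A small k-medoids sketch on a precomputed dissimilarity matrix (standing in for the Gower coefficient over word frequencies, revenue, rating and genres). It uses a simple alternating assignment/update loop rather than the full PAM build-and-swap procedure, and k and the distance matrix are assumed inputs.

```python
import numpy as np

def k_medoids(D, k=5, n_iter=50, seed=0):
    """Very small k-medoids on a precomputed distance matrix D (e.g. Gower distances
    between movies). Alternating-update style, not the full PAM build/swap procedure."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)          # assign each movie to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):                                # medoid = member minimizing within-cluster cost
                new_medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids
```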

  15. Low Power S-Box Architecture for AES Algorithm using Programmable Second Order Reversible Cellular Automata: An Application to WBAN.

    PubMed

    Gangadari, Bhoopal Rao; Ahamed, Shaik Rafi

    2016-12-01

    In this paper, we present a novel low-energy-consumption architecture for the S-Box used in the Advanced Encryption Standard (AES) algorithm, based on programmable second-order reversible cellular automata (RCA2). The architecture entails a low-power implementation with minimal delay overhead, and the security of the proposed RCA2-based S-Box is evaluated using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy; the proposed architecture is found to be secure enough for cryptographic applications. Simulation studies of the proposed AES architecture show an energy consumption of 68.726 nJ and power dissipation of 3.856 mW for a 0.18-μm process at 13.69 MHz, and an energy consumption of 29.408 nJ and power dissipation of 1.65 mW for a 0.13-μm process at 13.69 MHz. The proposed AES algorithm with the RCA2-based S-Box shows a reduction in power consumption of 50% and in energy consumption of 5% compared with the best classical S-Box and composite-field-arithmetic-based AES algorithms. It is also shown that RCA2-based S-Boxes are dynamic in nature, invertible and dissipate less power than the LUT-based S-Box, and are hence suitable for Wireless Body Area Network (WBAN) applications.

  16. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications.

    PubMed

    Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik

    2016-09-01

    In biomedicine, data security is the most expensive resource for wireless body area network applications. Cryptographic algorithms are used in order to protect the information against unauthorised access. The Advanced Encryption Standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, the RCA2-based S-Box is found to perform comparatively better than the conventional LUT-based S-Box.

  17. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications

    PubMed Central

    Rafi Ahamed, Shaik

    2016-01-01

    In biomedicine, data security is the most expensive resource for wireless body area network applications. Cryptographic algorithms are used in order to protect the information against unauthorised access. The Advanced Encryption Standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for the design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement for the classical look-up-table (LUT) based S-Box used in the AES algorithm. The performance of the proposed RCA2-based S-Box and the conventional LUT-based S-Box is evaluated in terms of security using cryptographic properties such as nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is shown that RCA2-based S-Boxes are dynamic in nature, invertible and provide a high level of security. Further, the RCA2-based S-Box is found to perform comparatively better than the conventional LUT-based S-Box. PMID:27733924
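
    The specific RCA2 S-Box construction above is not reproduced here; the sketch below only illustrates why a second-order cellular automaton is reversible: the next row is a (possibly non-invertible) rule applied to the current row, XORed with the previous row, so the previous row can always be recovered exactly. The rule, word width, and boundary handling are illustrative assumptions.

```python
import numpy as np

def rule_step(row):
    """Any first-order update (not necessarily invertible); here XOR of the
    left and right neighbours with periodic boundaries."""
    return np.roll(row, 1) ^ np.roll(row, -1)

def step_forward(prev_row, cur_row):
    """Second-order update: next = F(current) XOR previous (always reversible)."""
    return cur_row, rule_step(cur_row) ^ prev_row

def step_backward(cur_row, next_row):
    """Exact inverse: previous = F(current) XOR next."""
    return rule_step(cur_row) ^ next_row, cur_row

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 8, dtype=np.uint8)     # previous row of cell states
b = rng.integers(0, 2, 8, dtype=np.uint8)     # current row of cell states
cur, nxt = step_forward(a, b)
recovered_prev, _ = step_backward(cur, nxt)
assert np.array_equal(recovered_prev, a)      # stepping back recovers the earlier state
```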

  18. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray-scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray-scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. For that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions can be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers using or attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.

  19. 7 CFR 51.1269 - Sizing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States... half boxes or special half boxes packed three tiers deep, the pears shall not vary more than three... special half boxes packed two tiers deep, the pears shall not vary more than three-eighths inch for counts...

  20. Effect of the Intelligent Health Messenger Box on health care professionals' knowledge, attitudes, and practice related to hand hygiene and hand bacteria counts.

    PubMed

    Saffari, Mohsen; Ghanizadeh, Ghader; Fattahipour, Rasoul; Khalaji, Kazem; Pakpour, Amir H; Koenig, Harold G

    2016-12-01

    We assessed the effectiveness of the Intelligent Health Messenger Box in promoting hand hygiene using a quasiexperimental design. Knowledge, attitudes, and self-reported practices related to hand hygiene as well as hand bacteria counts and amount of liquid soap used were measured. The intervention involved broadcasting preventive audio messages. All outcomes showed significant change after the intervention compared with before. The Intelligent Health Messenger Box can serve as a practical way to improve hand hygiene. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  1. Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna

    USGS Publications Warehouse

    Gunzburger, M.S.

    2007-01-01

    To design effective and efficient research and monitoring programs researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to simultaneously employ seven aquatic sampling methods in 10 wetlands to compare amphibian species richness and number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three methods allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected was related to relative length of larval and breeding periods and tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling resulted in the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish with the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. © 2007 Brill Academic Publishers.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blansett, Ethan L.; Schroeppel, Richard Crabtree; Tang, Jason D.

    With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines two classes of all-optical logic (SEED, gain competition) and how each discrete logic element can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of the SEED and gain competition devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element as opposed to device-level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model of the SEED or gain competition device takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. We found that a low gate count, cascadable encryption algorithm is most feasible given device and processing constraints. The modeling and simulation of optical designs using these components is proceeding in parallel with efforts to perfect the physical devices and their interconnect. We have applied these techniques to the development of a 'toy' algorithm that may pave the way for more robust optical algorithms. These design/modeling/simulation techniques are now ready to be applied to larger optical designs in advance of our ability to implement such systems in hardware.

  3. A Novel Image Encryption Based on Algebraic S-box and Arnold Transform

    NASA Astrophysics Data System (ADS)

    Farwa, Shabieh; Muhammad, Nazeer; Shah, Tariq; Ahmad, Sohail

    2017-09-01

    Recent study shows that substitution box (S-box) only cannot be reliably used in image encryption techniques. We, in this paper, propose a novel and secure image encryption scheme that utilizes the combined effect of an algebraic substitution box along with the scrambling effect of the Arnold transform. The underlying algorithm involves the application of S-box, which is the most imperative source to create confusion and diffusion in the data. The speciality of the proposed algorithm lies, firstly, in the high sensitivity of our S-box to the choice of the initial conditions which makes this S-box stronger than the chaos-based S-boxes as it saves computational labour by deploying a comparatively simple and direct approach based on the algebraic structure of the multiplicative cyclic group of the Galois field. Secondly the proposed method becomes more secure by considering a combination of S-box with certain number of iterations of the Arnold transform. The strength of the S-box is examined in terms of various performance indices such as nonlinearity, strict avalanche criterion, bit independence criterion, linear and differential approximation probabilities etc. We prove through the most significant techniques used for the statistical analyses of the encrypted image that our image encryption algorithm satisfies all the necessary criteria to be usefully and reliably implemented in image encryption applications.
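
    The algebraic S-box over the multiplicative group of the Galois field is not reproduced here; the sketch below shows only the scrambling stage named above, the Arnold transform (cat map) applied to a square image, with the iteration count left as a free parameter.

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Arnold cat map on a square N x N image: (x, y) -> (x + y, x + 2y) mod N.
    The map is a bijection on the pixel grid and is periodic, so scrambling can be
    undone by continuing to iterate up to the period (or by applying the inverse map)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "defined here for square images only"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```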

  4. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain.

    PubMed

    Rajković, Nemanja; Krstonošić, Bojana; Milošević, Nebojša

    2017-01-01

    This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size and resolution, and image rotation, as well as different image presentations, are shown on a sample of asymmetrical neurons from the monkey dentate nucleus. The standard BC method and its modification were then evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property and the shape, complexity, and irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.

  5. Three-Dimensional Computer-Aided Detection of Microcalcification Clusters in Digital Breast Tomosynthesis.

    PubMed

    Jeong, Ji-Wook; Chae, Seung-Hoon; Chae, Eun Young; Kim, Hak Hee; Choi, Young-Wook; Lee, Sooyeul

    2016-01-01

    We propose a computer-aided detection (CADe) algorithm for microcalcification (MC) clusters in reconstructed digital breast tomosynthesis (DBT) images. The algorithm consists of prescreening, MC detection, clustering, and false-positive (FP) reduction steps. The DBT images containing MC-like objects were enhanced by a multiscale Hessian-based three-dimensional (3D) objectness response function, and a connected-component segmentation method was applied to extract the cluster seed objects as potential clustering centers of MCs. A signal-to-noise ratio (SNR) enhanced image was also generated to detect the individual MC candidates and prescreen the MC-like objects. Each cluster seed candidate was prescreened by counting the individual MC candidates near the cluster seed object according to several microcalcification clustering criteria. In a second step, we introduced bounding boxes for the accepted seed candidates, clustered all the overlapping cubes, and examined the resulting clusters. After the FP reduction step, the average number of FPs was estimated to be 2.47 per DBT volume with a sensitivity of 83.3%.

  6. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to concealment of part of an object or the whole object from the view of an observer. Real-time videos captured by static cameras on roads often contain overlapping and, hence, occluded vehicles. This makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving-object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, this paper implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise removal and morphological operations.
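
    A hedged sketch of the general recipe described above, not the authors' exact pipeline: successive frame subtraction yields a foreground mask, and a distance-transform watershed splits adjoining vehicles that morphology would otherwise merge into one bounding box. The threshold, minimum-distance, and area values are illustrative assumptions (scikit-image and SciPy are assumed available).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops

def segment_vehicles(prev_frame, cur_frame, thresh=25, min_area=200):
    """prev_frame, cur_frame: 2D grayscale arrays. Successive-frame subtraction followed
    by a distance-transform watershed, so that adjoining vehicles merged into one
    foreground blob are split into separate labels."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    mask = ndi.binary_opening(diff > thresh, iterations=2)      # foreground (moving) pixels

    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=15, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)           # split merged blobs

    boxes = [r.bbox for r in regionprops(labels) if r.area >= min_area]
    return labels, boxes                                        # one bounding box per segmented vehicle
```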

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Jason D.; Schroeppel, Richard Crabtree; Robertson, Perry J.

    With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. This paper documents the innovations and advances of work first detailed in 'Photonic Encryption using All Optical Logic' [1]. A discussion of underlying concepts can be found in SAND2003-4474. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines S-SEED devices and how discrete logic elements can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of S-SEED devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element, as opposed to device-level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. Demonstration circuits show how these logic elements can be used to form NAND, NOR, and XOR functions. This paper also presents functional analysis of a serial, low gate count demonstration algorithm suitable for scrambling/encryption using S-SEED devices.

  8. Task 1, Fractal characteristics of drainage patterns observed in the Appalachian Valley and Ridge and Plateau provinces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, T.; Dominic, J.; Halverson, J.

    1996-04-10

    Drainage patterns observed in the Appalachian Valley and Ridge and Plateau provinces exhibit distinctly different patterns. The patterns appear to be controlled by varying influences of local structural and lithologic variability. Drainage patterns in the Valley and Ridge study area can be classified as a combination of dendritic and trellis arrangements. The patterns vary over short distances in both the strike and dip directions. In the Granny Creek area of the Appalachian Plateau, drainage patterns are predominantly dendritic. The possibility that these drainage patterns have fractal characteristics was evaluated by box-counting. Results obtained from box counting do not yield a well-defined fractal regime in either area. In the Valley and Ridge, a space-filling, or random, regime (D=2) is observed for boxes with side lengths of 300 meters and greater. Below 300 meters, large changes in D occur between consecutively smaller box sizes. From side lengths of 300 to 150 m, 150 to 75 m, and 75 to 38 m, D is measured at 1.77, 1.39, and 1.08, respectively. For box sizes less than 38 m the fractal dimension is 1 or less. While the log-log response of the box-counting data is nonlinear and does not define a fractal regime, the curves offer the possibility of characterizing non-fractal patterns. The rate at which D drops outside the random regime correlates to drainage density. D in areas with a smaller density of drainage segments fell toward saturation (D=1) more abruptly. The break-away point from the random regime and the transition to the saturated regime may provide useful information about the relative lengths of stream segments.

  9. Gliding Box method applied to trace element distribution of a geochemical data set

    NASA Astrophysics Data System (ADS)

    Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana

    2010-05-01

    The application of fractal theory to process geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the western area of Coruña province (NW Spain). Major elements and trace elements were determined by standard analytical techniques. It is well known that there are specific elements, or arrays of elements, which are associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone. Moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calc-alkaline granites, gneiss, and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially in the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken on a rectilinear grid, with the sampling lines perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normal. Coefficients of variation ranked as follows: Sb < As < Au. Significant correlation coefficients between Au, Sb, and As were found, even if these were low. The so-called 'gliding box' algorithm (GB), proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the 'box-counting' method for implementing multifractal analysis. The partitioning method applied in the GB algorithm constructs samples by gliding a box of a certain size (a) over the grid map in all possible directions. An 'up-scaling' partitioning process begins with a box of minimum size or area (amin) and proceeds up to a certain size less than the total area A. An advantage of the GB method is the large sample size, which usually leads to better statistical results on Dq values, particularly for negative values of q. Because this partitioning overlaps, the measure defined on these boxes is not statistically independent, and the definition of the measure in the gliding boxes is different. In order to show the advantages of the GB method, spatial distributions of As, Sb, and Au in the studied area were analyzed. We discuss the usefulness of this method for numerically characterizing anomalies and differentiating them from the background, using the available data from the geochemical survey.
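
    A minimal NumPy sketch contrasting the two partitions on a 2D concentration grid is given below; the function names, box size a, and handling of the moment order q are illustrative assumptions and not taken from the study.

        import numpy as np

        def gliding_box_moments(grid, a, q):
            """Gliding-box partition: slide an a-by-a window over the map in every
            position (overlapping boxes) and return the q-th moment of box masses."""
            rows, cols = grid.shape
            masses = []
            for i in range(rows - a + 1):
                for j in range(cols - a + 1):
                    masses.append(grid[i:i + a, j:j + a].sum())
            masses = np.array(masses, dtype=float)
            masses = masses[masses > 0]            # drop empty boxes (needed for q < 0)
            masses = masses / masses.sum()         # normalise to a measure
            return np.sum(masses ** q)

        def box_counting_moments(grid, a, q):
            """Non-overlapping box-counting partition, for comparison."""
            rows, cols = (grid.shape[0] // a) * a, (grid.shape[1] // a) * a
            blocks = grid[:rows, :cols].reshape(rows // a, a, cols // a, a).sum(axis=(1, 3))
            masses = blocks.ravel().astype(float)
            masses = masses[masses > 0]
            masses = masses / masses.sum()
            return np.sum(masses ** q)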

  10. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298

  11. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables.

  12. 7 CFR 51.1310 - Sizing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States... or special half boxes packed three tiers deep, the pears shall not vary more than three-eighths inch... boxes packed two tiers deep, the pears shall not vary more than three-eighths inch for counts 50 or less...

  13. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium- to long-length (at least 50 observations) time series. When modeling such series, the difficulty lies in choosing the correct model order at the identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic method for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated from the proposed model outperformed those of the single traditional Box-Jenkins model.
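
    A toy sketch of the general idea, using a tiny genetic algorithm to search ARIMA(p, d, q) orders by AIC, is shown below; the population size, mutation rate, crossover scheme, and the use of statsmodels are illustrative assumptions rather than the authors' formulation.

        import random
        import warnings
        from statsmodels.tsa.arima.model import ARIMA

        def fitness(y, order):
            """Lower AIC is better; failed fits get an infinite penalty."""
            try:
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    return ARIMA(y, order=order).fit().aic
            except Exception:
                return float("inf")

        def ga_arima_order(y, pop_size=10, generations=15, p_max=5, q_max=5, d=1):
            """Evolve (p, d, q) candidates toward the order with the lowest AIC."""
            pop = [(random.randint(0, p_max), d, random.randint(0, q_max))
                   for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=lambda o: fitness(y, o))
                parents = scored[: pop_size // 2]                 # selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = (a[0], d, b[2])                       # simple crossover
                    if random.random() < 0.2:                     # mutation
                        child = (random.randint(0, p_max), d, random.randint(0, q_max))
                    children.append(child)
                pop = parents + children
            return min(pop, key=lambda o: fitness(y, o))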

  14. Packing Boxes into Multiple Containers Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Menghani, Deepak; Guha, Anirban

    2016-07-01

    Container loading problems have been studied extensively in the literature and various analytical, heuristic and metaheuristic methods have been proposed. This paper presents two different variants of a genetic algorithm framework for the three-dimensional container loading problem for optimally loading boxes into multiple containers with constraints. The algorithms are designed so that it is easy to incorporate various constraints found in real life problems. The algorithms are tested on data of standard test cases from literature and are found to compare well with the benchmark algorithms in terms of utilization of containers. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real life scenarios.

  15. Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology.

    PubMed

    Li, Qingli; Zhou, Mei; Liu, Hongying; Wang, Yiting; Guo, Fangmin

    2015-12-01

    Red blood cell counts have been proven to be one of the most frequently performed blood tests and are valuable for the early diagnosis of some diseases. This paper describes an automated red blood cell counting method based on microscopic hyperspectral imaging technology. Unlike light-microscopy-based red blood cell counting methods, a combined spatial and spectral algorithm is proposed to identify red blood cells by integrating active contour models and automated two-dimensional k-means with a spectral angle mapper algorithm. Experimental results show that the proposed algorithm performs better than a spatial-based algorithm because the new algorithm can jointly use the spatial and spectral information of blood cells.
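
    The spectral angle mapper (SAM) step can be written in a few lines; the sketch below, with a placeholder reference spectrum and angle threshold, illustrates the general technique rather than the authors' pipeline.

        import numpy as np

        def spectral_angle(pixel_spectrum, reference_spectrum):
            """Angle (radians) between a pixel spectrum and a reference spectrum;
            small angles mean spectrally similar material."""
            p = np.asarray(pixel_spectrum, dtype=float)
            r = np.asarray(reference_spectrum, dtype=float)
            cos_theta = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
            return np.arccos(np.clip(cos_theta, -1.0, 1.0))

        def match_reference(cube, reference_spectrum, max_angle=0.1):
            """cube: (rows, cols, bands) hyperspectral image.
            Returns a boolean mask of pixels whose spectrum matches the reference."""
            rows, cols, bands = cube.shape
            flat = cube.reshape(-1, bands)
            angles = np.array([spectral_angle(s, reference_spectrum) for s in flat])
            return (angles < max_angle).reshape(rows, cols)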

  16. BrainIACS: a system for web-based medical image processing

    NASA Astrophysics Data System (ADS)

    Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.

    2009-02-01

    We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is implemented entirely using freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels, and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as the addition and deletion of elements, are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and ultimately JavaScript after compilation of the front-end code), it is platform independent, with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.

  17. Recursive algorithms for phylogenetic tree counting.

    PubMed

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree, or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting the resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic-time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
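
    As background to the serially-sampled counts the paper derives (not reproduced here), the contemporaneous-sampling case the abstract calls easy has a standard closed form: the number of ranked labelled binary trees on n tips is the product of choose(k, 2) for k = 2..n. A small sketch, offered only as an illustration of that simpler case:

        from math import comb

        def ranked_tree_count(n):
            """Number of fully ranked (node-ordered) labelled binary trees on n
            contemporaneous tips: product of C(k, 2) for k = 2..n."""
            total = 1
            for k in range(2, n + 1):
                total *= comb(k, 2)
            return total

        # e.g. ranked_tree_count(4) == 18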

  18. A voxel-based technique to estimate the volume of trees from terrestrial laser scanner data

    NASA Astrophysics Data System (ADS)

    Bienert, A.; Hess, C.; Maas, H.-G.; von Oheimb, G.

    2014-06-01

    The precise determination of the volume of standing trees is very important for ecological and economic considerations in forestry. If terrestrial laser scanner data are available, a simple approach for volume determination is to allocate points into a voxel structure and subsequently count the filled voxels. Generally, this method will overestimate the volume. The paper presents an improved algorithm to estimate the wood volume of trees using a voxel-based method which corrects for the overestimation. After voxel space transformation, each voxel which contains points is reduced to the volume of its surrounding bounding box. In a next step, occluded (inner stem) voxels are identified by a neighbourhood analysis sweeping in the X and Y directions of each filled voxel. Finally, the wood volume of the tree is computed as the sum of the bounding box volumes of the outer voxels and the volume of all occluded inner voxels. Scan data sets from several young Norway maple trees (Acer platanoides) were used to analyse the algorithm. The scanned trees as well as their corresponding point clouds were separated into different components (stem, branches) to allow a meaningful comparison. Two reference measurements were performed for validation: a direct wood volume measurement by placing the tree components into a water tank, and a frustum calculation of small trunk segments by measuring the radii along the trunk. Overall, the results show slightly underestimated volumes (-0.3% for a sample of 13 trees), with an RMSE of 11.6% for the individual tree volume calculated with the new approach.
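
    A minimal sketch of the naive voxel allocation step (the baseline the paper improves on), together with the per-voxel bounding-box reduction, is given below; the voxel size is a placeholder, and the separate handling of occluded inner voxels described in the paper is omitted.

        import numpy as np

        def naive_voxel_volume(points, voxel_size=0.01):
            """Naive estimate: allocate points to voxels and sum the volume of
            filled voxels (tends to overestimate the true wood volume)."""
            idx = np.floor(points / voxel_size).astype(int)
            filled = {tuple(v) for v in idx}
            return len(filled) * voxel_size ** 3

        def bounding_box_voxel_volume(points, voxel_size=0.01):
            """Per-voxel correction: reduce each filled voxel to the axis-aligned
            bounding box of the points it actually contains (surface voxels only;
            occluded inner voxels would be added separately)."""
            idx = np.floor(points / voxel_size).astype(int)
            volume = 0.0
            for key in np.unique(idx, axis=0):
                inside = points[np.all(idx == key, axis=1)]
                extent = inside.max(axis=0) - inside.min(axis=0)
                volume += float(np.prod(extent))
            return volume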

  19. Letter box line blackener for the HDTV/conventional-analog hybrid system

    DOEpatents

    Wysocki, Frederick J.; Nickel, George H.

    2006-07-18

    A blackener for letter box lines associated with a HDTV/conventional-analog hybrid television transmission where the blackener counts horizontal sync pulses contained in the HDTV/conventional-analog hybrid television transmission and determines when the HDTV/conventional-analog hybrid television transmission is in letter-box lines: if it is, then the blackener sends substitute black signal to an output; and if it is not, then the blackener sends the HDTV/conventional-analog hybrid television transmission to the output.

  20. Linking Satellite-Derived Fire Counts to Satellite-Derived Weather Data in Fire Prediction Models to Forecast Extreme Fires in Siberia

    NASA Astrophysics Data System (ADS)

    Westberg, David; Soja, Amber; Stackhouse, Paul, Jr.

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Boreal systems contain the largest pool of terrestrial carbon, and Russia holds 2/3 of the global boreal forests. Fire frequency, fire severity, area burned, and fire season length are predicted to increase in boreal regions under climate change scenarios. Meteorological parameters influence fire danger, and fire is a catalyst for ecosystem change. Therefore, to predict fire weather and ecosystem change, we must understand the factors that influence fire regimes and at what scale these are viable. Our data consist of NASA Langley Research Center (LaRC)-derived fire weather indices (FWI) and National Climatic Data Center (NCDC) surface station-derived FWI on a domain from 50°N-80°N latitude and 70°E-170°W longitude and the fire season from April through October for the years 1999, 2002, and 2004. Both of these are calculated using the Canadian Forest Service (CFS) FWI, which is based on local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-noon) rainfall. The large-scale (1°) LaRC product uses NASA Goddard Earth Observing System version 4 (GEOS-4) reanalysis and NASA Global Precipitation Climatology Project (GEOS-4/GPCP) data to calculate FWI. CFS Natural Resources Canada uses Geographic Information Systems (GIS) to interpolate NCDC station data and calculate FWI. We compare the LaRC GEOS-4/GPCP FWI and CFS NCDC FWI based on their fraction of 1° grid boxes that contain satellite-derived fire counts and area burned to the domain total number of 1° grid boxes with a common FWI category (very low to extreme). These are separated by International Geosphere-Biosphere Programme (IGBP) 1°x1° resolution vegetation types to determine and compare fire regimes in each FWI/ecosystem class and to estimate the fraction of each of the 18 IGBP ecosystems burned, which are dependent on the FWI. On days with fire counts, the domain totals of 1°x1° grid boxes with and without daily fire counts and area burned are totaled. The fraction of 1° grid boxes with fire counts and area burned to the total number of 1° grid boxes having a common FWI category and vegetation type is accumulated, and a daily mean for the burning season is calculated. The mean fire counts and mean area burned plots appear to be well related. The ultimate goal of this research is to assess the viability of large-scale (1°) data for assessing fire weather danger and fire regimes, so these data can be confidently used to predict future fire regimes using large-scale fire weather data. Specifically, we related large-scale fire weather, area burned, and the amount of fire-induced ecosystem change. Both the LaRC and CFS FWI showed a gradual linear increase in the fraction of grid boxes with fire counts and area burned with increasing FWI category, with an exponential increase in the higher FWI categories in some cases, for the majority of the vegetation types. Our analysis shows a direct correlation between increased fire activity and increased FWI, independent of time or the severity of the fire season. During normal and extreme fire seasons, we noticed that the fraction of fire counts and area burned per 1° grid box increased with increasing FWI rating. Given this analysis, we are confident that large-scale weather and climate data, in this case from the GEOS-4 reanalysis and the GPCP data sets, can be used to accurately assess future fire potential. This increases confidence in the ability of large-scale IPCC weather and climate scenarios to predict future fire regimes in boreal regions.

  1. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
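
    A compact sketch of the surrogate idea behind such methods, fitting radial basis function interpolants to the evaluated objective and constraints and then scoring cheap candidate points on the surrogates, is shown below; the candidate sampling and feasibility handling are simplifications for illustration, not the COBRA or ConstrLMSRBF algorithms themselves.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def propose_next_point(X, f_vals, g_vals, bounds, n_candidates=2000, rng=None):
            """X: (n, d) evaluated points; f_vals: (n,) objective values;
            g_vals: (n, m) constraint values (feasible when all <= 0);
            bounds: list of (low, high) pairs, one per decision variable."""
            rng = np.random.default_rng(rng)
            f_surr = RBFInterpolator(X, f_vals)         # surrogate of the objective
            g_surr = RBFInterpolator(X, g_vals)         # surrogates of the constraints
            lo, hi = np.array(bounds).T
            cand = rng.uniform(lo, hi, size=(n_candidates, X.shape[1]))
            f_hat = f_surr(cand)
            g_hat = g_surr(cand)
            feasible = np.all(g_hat <= 0.0, axis=1)
            if feasible.any():
                # Best predicted objective among predicted-feasible candidates.
                best = np.argmin(np.where(feasible, f_hat, np.inf))
            else:
                # No predicted-feasible candidate: minimise total predicted violation.
                best = np.argmin(np.maximum(g_hat, 0.0).sum(axis=1))
            return cand[best]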

  2. 78 FR 16749 - Self-Regulatory Organizations; BOX Options Exchange LLC; Notice of Filing of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ...). The Limit Up-Limit Down Plan is designed to prevent executions from occurring outside of dynamic price... count for purposes of calculating whether a Market Maker is fulfilling its obligations for continuous... Organizations; BOX Options Exchange LLC; Notice of Filing of Proposed Rule Change To Clarify How the Exchange...

  3. 25 CFR 90.43 - Canvass of election returns.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... continuously in the room until the ballots are finally counted. One or more judges shall act as official... appearing at the poll; all tally sheets; and all other election materials shall be placed in the ballot box which shall be locked. The supervisor shall then deliver the locked ballot box and keys to same to the...

  4. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    PubMed

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run-time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel I7, and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run-time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run-time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
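
    For reference, the classical O(n^3) Nussinov recursion that these cache-efficient variants reorganise is sketched below; the diagonal-by-diagonal traversal shown is the straightforward one, not the ByRow/ByBox blocking of the paper.

        def nussinov_max_pairs(seq, min_loop=0):
            """Maximum number of non-crossing complementary base pairs (Nussinov).

            seq: RNA string over A, C, G, U. Returns the DP value for the whole sequence."""
            pairs = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(1, n):                 # fill diagonals of increasing width
                for i in range(n - span):
                    j = i + span
                    best = dp[i + 1][j]              # base i left unpaired
                    if j - i > min_loop and (seq[i], seq[j]) in pairs:
                        inner = dp[i + 1][j - 1] if i + 1 <= j - 1 else 0
                        best = max(best, inner + 1)  # pair (i, j)
                    for k in range(i, j):            # bifurcation into two subproblems
                        best = max(best, dp[i][k] + dp[k + 1][j])
                    dp[i][j] = best
            return dp[0][n - 1]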

  5. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  6. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, a Tse instruction set, and a software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
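
    The counting task itself, tallying the logic-1 elements of a binary array, can be expressed as a tree-style reduction; the pairwise-adder sketch below only mirrors the general parallel-reduction idea, not the Tse combinational structure in the report.

        def count_ones_tree(bits):
            """Count 1s in a binary sequence by pairwise addition, halving the list
            each round; the additions within a round are independent and could run
            in parallel."""
            values = list(bits)
            while len(values) > 1:
                if len(values) % 2:                      # pad odd-length rounds
                    values.append(0)
                values = [values[i] + values[i + 1] for i in range(0, len(values), 2)]
            return values[0] if values else 0

        # e.g. count_ones_tree([1, 0, 1, 1, 0, 1]) == 4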

  7. Quantifying parameter uncertainty in stochastic models using the Box Cox transformation

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Kuczera, George; Wang, Q. J.

    2002-08-01

    The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies with the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
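
    A minimal sketch of the two ingredients, the Box-Cox transform and a random-walk Metropolis step on its parameter, is shown below under a deliberately simplified i.i.d. Gaussian likelihood with a flat prior, rather than the AR(1) posterior of the paper; the step size and chain length are placeholders.

        import numpy as np

        def box_cox(y, lam):
            """Box-Cox transform for positive data (lam = 0 gives the log transform)."""
            return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

        def log_posterior(y, lam):
            """Log posterior of lam under an i.i.d. Gaussian model for the transformed
            data with a flat prior; includes the Jacobian term (lam - 1) * sum(log y)."""
            z = box_cox(y, lam)
            n = len(y)
            return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.sum(np.log(y))

        def metropolis_lambda(y, n_iter=5000, step=0.1, seed=0):
            rng = np.random.default_rng(seed)
            lam, samples = 1.0, []
            for _ in range(n_iter):
                proposal = lam + step * rng.standard_normal()
                if np.log(rng.uniform()) < log_posterior(y, proposal) - log_posterior(y, lam):
                    lam = proposal                        # accept the proposed lambda
                samples.append(lam)
            return np.array(samples)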

  8. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated values files and three-dimensional images.
    New version program summary:
      Program title: Fractal Analysis v02
      Catalogue identifier: AEEG_v2_0
      Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
      Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
      Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
      No. of lines in distributed program, including test data, etc.: 9999
      No. of bytes in distributed program, including test data, etc.: 4 366 783
      Distribution format: tar.gz
      Programming language: MS Visual Basic 6.0
      Computer: PC
      Operating system: MS Windows 98 or later
      RAM: 30 M
      Classification: 14
      Catalogue identifier of previous version: AEEG_v1_0
      Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
      Does the new version supersede the previous version?: Yes
      Nature of problem: Estimating the fractal dimension of 2D and 3D images.
      Solution method: Optimized implementation of the box-counting algorithm.
      Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
      Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
      Additional comments: User-friendly graphical interface; easy deployment mechanism.
      Running time: In the first approximation, the algorithm is linear.
      References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.

  9. Research on Collection System Optimal Design of Wind Farm with Obstacles

    NASA Astrophysics Data System (ADS)

    Huang, W.; Yan, B. Y.; Tan, R. S.; Liu, L. F.

    2017-05-01

    In the optimal design of an offshore wind farm collection system, the factors to consider include not only the reasonable configuration of cables and switches but also the influence of obstacles on the topology design of the wind farm. This paper presents a concrete topology optimization algorithm that accounts for obstacles. The minimal-area rectangular bounding box of each obstacle is obtained using the minimal-area enclosing box method. An optimization algorithm combining the advantages of the Dijkstra and Prim algorithms is then used to obtain an obstacle-avoiding path-planning scheme. Finally, a fuzzy comprehensive evaluation model based on the analytic hierarchy process is constructed to compare the performance of the different topologies. Case studies demonstrate the feasibility of the proposed algorithm and model.

  10. Benthic foraminiferal census data from Mobile Bay, Alabama--counts of surface samples and box cores

    USGS Publications Warehouse

    Richwine, Kathryn A.; Osterman, Lisa E.

    2012-01-01

    A study was undertaken in order to understand recent environmental change in Mobile Bay, Alabama. For this study a series of surface sediment and box core samples was collected. The surface benthic foraminiferal data provide the modern baseline conditions of the bay and can be used as a reference for changing paleoenvironmental parameters recorded in the box cores. The 14 sampling locations were chosen in the bay to cover the wide diversity of fluvial and marine-influenced environments on both sides of the shipping channel.

  11. Characteristic extraction and matching algorithms of ballistic missile in near-space by hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi

    2014-10-01

    Hyperspectral data of a ballistic missile, as seen by an imaging spectrometer on a near-space platform, are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The transverse counting algorithm has lower complexity and can be implemented more easily than the quantization coding algorithm. The transverse counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.

  12. A fast algorithm for multiscale electromagnetic problems using interpolative decomposition and multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing

    2012-02-01

    The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted by ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of matrix entries are required to approximate coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that the matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.

  13. 16 CFR 500.16 - Measurement of container type commodities, how expressed.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... be used as containers for other materials or objects, such as bags, cups, boxes, and pans, shall be...) For circular or other generally round shaped containers, except cups, and the like, in terms of count... quantity statement for containers such as cups will be listed in terms of count and liquid capacity per...

  14. 16 CFR 500.16 - Measurement of container type commodities, how expressed.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... be used as containers for other materials or objects, such as bags, cups, boxes, and pans, shall be...) For circular or other generally round shaped containers, except cups, and the like, in terms of count... quantity statement for containers such as cups will be listed in terms of count and liquid capacity per...

  15. A Lightweight White-Box Symmetric Encryption Algorithm against Node Capture for WSNs †

    PubMed Central

    Shi, Yang; Wei, Wujing; He, Zongjian

    2015-01-01

    Wireless Sensor Networks (WSNs) are often deployed in hostile environments and, thus, nodes can be potentially captured by an adversary. This is a typical white-box attack context, i.e., the adversary may have total visibility of the implementation of the built-in cryptosystem and full control over its execution platform. Handling white-box attacks in a WSN scenario is a challenging task. Existing encryption algorithms for white-box attack contexts require a large memory footprint and, hence, are not applicable to wireless sensor network scenarios. As a countermeasure against the threat in this context, in this paper, we propose a class of lightweight secure implementations of the symmetric encryption algorithm SMS4. The basic idea of our approach is to merge several steps of the round function of SMS4 into table lookups, blended by randomly generated mixing bijections. Therefore, the size of the implementations is significantly reduced while keeping the same security efficiency. The security and efficiency of the proposed solutions are theoretically analyzed. Evaluation shows our solutions satisfy the requirements of sensor nodes in terms of limited memory size and low computational costs. PMID:26007737

  16. Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts.

    PubMed

    Torney, Colin J; Dobson, Andrew P; Borner, Felix; Lloyd-Jones, David J; Moyer, David; Maliti, Honori T; Mwita, Machoke; Fredrick, Howard; Borner, Markus; Hopcraft, J Grant C

    2016-01-01

    Accurate and on-demand animal population counts are the holy grail for wildlife conservation organizations throughout the world because they enable fast and responsive adaptive management policies. While the collection of image data from camera traps, satellites, and manned or unmanned aircraft has advanced significantly, the detection and identification of animals within images remains a major bottleneck since counting is primarily conducted by dedicated enumerators or citizen scientists. Recent developments in the field of computer vision suggest a potential resolution to this issue through the use of rotation-invariant object descriptors combined with machine learning algorithms. Here we implement an algorithm to detect and count wildebeest from aerial images collected in the Serengeti National Park in 2009 as part of the biennial wildebeest count. We find that the per image error rates are greater than, but comparable to, two separate human counts. For the total count, the algorithm is more accurate than both manual counts, suggesting that human counters have a tendency to systematically over or under count images. While the accuracy of the algorithm is not yet at an acceptable level for fully automatic counts, our results show this method is a promising avenue for further research and we highlight specific areas where future research should focus in order to develop fast and accurate enumeration of aerial count data. If combined with a bespoke image collection protocol, this approach may yield a fully automated wildebeest count in the near future.

  17. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to extend the life of satellites and reduce launch and operating costs, satellite servicing, including conducting repairs, upgrading, and refueling spacecraft on orbit, is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements for image tracking in space surveillance systems. Machine vision has been applied to the estimation of relative pose between spacecraft, and feature extraction is the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used to determine and track the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains a fractal-dimension map of the gray-level image using the differential box-counting (DBC) approach of fractal theory to restrain noise. After this, we detect the consecutive edges using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the relative pose problem for spacecraft.
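
    The differential box-counting (DBC) step referenced here is straightforward to sketch: partition the image into s-by-s blocks, convert each block's intensity range into a number of boxes of height s*G/M, and regress log N_r on log(1/r). The grid sizes below are illustrative, and the noise suppression and morphology steps of the paper are omitted.

        import numpy as np

        def dbc_fractal_dimension(gray, sizes=(2, 4, 8, 16, 32)):
            """Differential box-counting dimension of a grayscale image.

            gray: 2D array of intensities in [0, G]; M is the (shorter) image side."""
            M = min(gray.shape)
            G = float(gray.max()) or 1.0
            log_inv_r, log_N = [], []
            for s in sizes:
                h = s * G / M                                 # box height in intensity units
                rows, cols = (gray.shape[0] // s) * s, (gray.shape[1] // s) * s
                blocks = gray[:rows, :cols].reshape(rows // s, s, cols // s, s)
                gmax = blocks.max(axis=(1, 3))
                gmin = blocks.min(axis=(1, 3))
                # Boxes of height h needed to cover each block's intensity range.
                n_r = np.ceil(gmax / h) - np.ceil(gmin / h) + 1
                log_N.append(np.log(n_r.sum()))
                log_inv_r.append(np.log(M / s))               # 1/r with r = s/M
            slope, _ = np.polyfit(log_inv_r, log_N, 1)
            return slope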

  18. Characterization of Atrophic Changes in the Cerebral Cortex Using Fractal Dimensional Analysis

    PubMed Central

    George, Anuh T.; Jeon, Tina; Hynan, Linda S.; Youn, Teddy S.; Kennedy, David N.; Dickerson, Bradford

    2010-01-01

    The purpose of this project is to apply a modified fractal analysis technique to high-resolution T1-weighted magnetic resonance images in order to quantify the alterations in the shape of the cerebral cortex that occur in patients with Alzheimer’s disease. Images were selected from the Alzheimer’s Disease Neuroimaging Initiative database (Control N=15, Mild-Moderate AD N=15). The images were segmented using a semi-automated analysis program. Four coronal and three axial profiles of the cerebral cortical ribbon were created. The fractal dimensions (Df) of the cortical ribbons were then computed using a box-counting algorithm. The mean Df of the cortical ribbons from AD patients was lower than that of age-matched controls on six of seven profiles. The fractal measure has regional variability which reflects local differences in brain structure. Fractal dimension is complementary to volumetric measures and may assist in identifying disease state or disease progression. PMID:20740072

  19. Optimization of Stochastic Response Surfaces Subject to Constraints with Linear Programming

    DTIC Science & Technology

    1992-03-01


  20. Clearing a Pile of Unknown Objects using Interactive Perception

    DTIC Science & Technology

    2012-11-01

    Excerpts from the report describe a test pile of five unknown objects: a tissue box, a chunk of wood, a bottle of shampoo, a box of macaroni, and toy blocks. The algorithm switches between pushing and grasping: in the example run the robot pokes the macaroni box, grasps the bottle of shampoo, and then pushes and grasps the tissue box and the chunk of wood. (Figure panels: initial pile; poking macaroni box; after poking; grasping shampoo; after grasping.)

  1. Daily Patterns of Preschoolers' Objectively Measured Step Counts in Six European Countries: Cross-Sectional Results from the ToyBox-Study.

    PubMed

    Van Stappen, Vicky; Van Dyck, Delfien; Latomme, Julie; De Bourdeaudhuij, Ilse; Moreno, Luis; Socha, Piotr; Iotova, Violeta; Koletzko, Berthold; Manios, Yannis; Androutsos, Odysseas; Cardon, Greet; De Craemer, Marieke

    2018-02-07

    This study is part of the ToyBox-study, which is conducted in six European countries (Belgium, Bulgaria, Germany, Greece, Poland and Spain), aiming to develop a cost-effective kindergarten-based, family-involved intervention to prevent overweight and obesity in four- to six-year-old preschool children. In the current study, we aimed to examine and compare preschoolers' step count patterns, across the six European countries. A sample of 3578 preschoolers (mean age: 4.8 ± 0.4) was included. Multilevel analyses were performed to take clustering of measurements into account. Based on the average hourly steps, step count patterns for the six European countries were created for weekdays and weekend days. The step count patterns during weekdays were related to the daily kindergarten schedules. Step count patterns during weekdays showed several significant peaks and troughs ( p < 0.01) and clearly reflected the kindergartens' daily schedules, except for Germany. For example, low numbers of steps were observed during afternoon naptimes and high numbers of steps during recess. In Germany, step count patterns did not show clear peaks and troughs, which can be explained by a less structured kindergarten schedule. On weekend days, differences in step count patterns were observed in the absolute number of steps in the afternoon trough and the period in which the evening peak occurred. Differences in step count patterns across the countries can be explained by differences in (school) policy, lifestyle habits, and culture. Therefore, it might be important to respond to these step count patterns and more specifically to tackle the inactive periods during interventions to promote physical activity in preschoolers.

  2. Daily Patterns of Preschoolers’ Objectively Measured Step Counts in Six European Countries: Cross-Sectional Results from the ToyBox-Study

    PubMed Central

    Van Stappen, Vicky; Latomme, Julie; Moreno, Luis; Socha, Piotr; Iotova, Violeta; Koletzko, Berthold; Manios, Yannis; Androutsos, Odysseas

    2018-01-01

    This study is part of the ToyBox-study, which is conducted in six European countries (Belgium, Bulgaria, Germany, Greece, Poland and Spain), aiming to develop a cost-effective kindergarten-based, family-involved intervention to prevent overweight and obesity in four- to six-year-old preschool children. In the current study, we aimed to examine and compare preschoolers’ step count patterns, across the six European countries. A sample of 3578 preschoolers (mean age: 4.8 ± 0.4) was included. Multilevel analyses were performed to take clustering of measurements into account. Based on the average hourly steps, step count patterns for the six European countries were created for weekdays and weekend days. The step count patterns during weekdays were related to the daily kindergarten schedules. Step count patterns during weekdays showed several significant peaks and troughs (p < 0.01) and clearly reflected the kindergartens’ daily schedules, except for Germany. For example, low numbers of steps were observed during afternoon naptimes and high numbers of steps during recess. In Germany, step count patterns did not show clear peaks and troughs, which can be explained by a less structured kindergarten schedule. On weekend days, differences in step count patterns were observed in the absolute number of steps in the afternoon trough and the period in which the evening peak occurred. Differences in step count patterns across the countries can be explained by differences in (school) policy, lifestyle habits, and culture. Therefore, it might be important to respond to these step count patterns and more specifically to tackle the inactive periods during interventions to promote physical activity in preschoolers. PMID:29414916

  3. A Fast parallel tridiagonal algorithm for a class of CFD applications

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

    The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents for study a variation of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.
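
    For context, the conventional sequential solver such tridiagonal algorithms are compared against is the Thomas algorithm; a minimal sketch is given below (no pivoting, so it assumes a diagonally dominant system, as in the symmetric Toeplitz tridiagonal case analysed). This is background only, not the PDD algorithm itself.

        def thomas_solve(a, b, c, d):
            """Solve a tridiagonal system with sub-diagonal a, diagonal b,
            super-diagonal c, and right-hand side d (lists of length n, with
            a[0] and c[-1] unused). Assumes diagonal dominance (no pivoting)."""
            n = len(d)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0] = c[0] / b[0]
            dp[0] = d[0] / b[0]
            for i in range(1, n):                       # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x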

  4. The Full Monte Carlo: A Live Performance with Stars

    NASA Astrophysics Data System (ADS)

    Meng, Xiao-Li

    2014-06-01

    Markov chain Monte Carlo (MCMC) is being applied increasingly often in modern Astrostatistics. It is indeed incredibly powerful, but also very dangerous. It is popular because of its apparent generality (from simple to highly complex problems) and simplicity (the availability of out-of-the-box recipes). It is dangerous because it always produces something, but there is no surefire way to verify or even diagnose that the “something” is remotely close to what MCMC theory predicts or one hopes. Using very simple models (e.g., conditionally Gaussian), this talk starts with a tutorial on the two most popular MCMC algorithms, namely the Gibbs Sampler and the Metropolis-Hastings Algorithm, and illustrates their good, bad, and ugly implementations via live demonstration. The talk ends with a story of how a recent advance, the Ancillarity-Sufficiency Interweaving Strategy (ASIS) (Yu and Meng, 2011, http://www.stat.harvard.edu/Faculty_Content/meng/jcgs.2011-article.pdf), reduces the danger. It was discovered almost by accident during a Ph.D. student’s (Yaming Yu) struggle with fitting a Cox process model for detecting changes in source intensity of photon counts observed by the Chandra X-ray telescope from a (candidate) neutron/quark star.
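
    As a concrete instance of the Gibbs sampler mentioned in the talk, the sketch below alternates the two exact conditional draws of a standard bivariate Gaussian with correlation rho; the target distribution and chain length are illustrative choices, not taken from the talk.

        import numpy as np

        def gibbs_bivariate_gaussian(rho=0.8, n_iter=10000, seed=0):
            """Gibbs sampling from a standard bivariate Gaussian with correlation rho:
            each coordinate is redrawn from its exact Gaussian full conditional."""
            rng = np.random.default_rng(seed)
            x, y = 0.0, 0.0
            samples = np.empty((n_iter, 2))
            cond_sd = np.sqrt(1.0 - rho ** 2)
            for t in range(n_iter):
                x = rng.normal(rho * y, cond_sd)   # x | y ~ N(rho*y, 1 - rho^2)
                y = rng.normal(rho * x, cond_sd)   # y | x ~ N(rho*x, 1 - rho^2)
                samples[t] = (x, y)
            return samples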

  5. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    PubMed Central

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2008-01-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt’s macular dystrophy and retinitis pigmentosa. PMID:17429482

  6. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.

  7. Automated target classification in high resolution dual frequency sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Fernández, Manuel

    2007-04-01

    An improved computer-aided-detection / computer-aided-classification (CAD/CAC) processing string has been developed. The classified objects of 2 distinct strings are fused using the classification confidence values and their expansions as features, and using "summing" or log-likelihood-ratio-test (LLRT) based fusion rules. The utility of the overall processing strings and their fusion was demonstrated with new high-resolution dual frequency sonar imagery. Three significant fusion algorithm improvements were made. First, a nonlinear 2nd order (Volterra) feature LLRT fusion algorithm was developed. Second, a Box-Cox nonlinear feature LLRT fusion algorithm was developed. The Box-Cox transformation consists of raising the features to a to-be-determined power. Third, a repeated application of a subset feature selection / feature orthogonalization / Volterra feature LLRT fusion block was utilized. It was shown that cascaded Volterra feature LLRT fusion of the CAD/CAC processing strings outperforms summing, baseline single-stage Volterra and Box-Cox feature LLRT algorithms, yielding significant improvements over the best single CAD/CAC processing string results, and providing the capability to correctly call the majority of targets while maintaining a very low false alarm rate. Additionally, the robustness of cascaded Volterra feature fusion was demonstrated, by showing that the algorithm yields similar performance with the training and test sets.

  8. X-ray detection of Nova Del 2013 with Swift

    NASA Astrophysics Data System (ADS)

    Castro-Tirado, Alberto J.; Martin-Carrillo, Antonio; Hanlon, Lorraine

    2013-08-01

    Continuous X-ray monitoring by Swift of Nova Del 2013 (see CBET #3628) shows an increase of X-ray emission at the source location compared to previous observations (ATEL #5283, ATEL #5305) during a 3.9 ksec observation at UT 2013-08-22 12:05. With the XRT instrument operating in windowed timing mode, 744 counts were extracted from a 50-pixel-long source region and 324 counts from a similar box for a background region, resulting in a 13-sigma detection with a net count rate of 0.11±0.008 counts/sec.
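
    A quick Poisson-statistics check, a sketch using plain counting-error propagation rather than the Swift analysis pipeline, reproduces the quoted net count rate and detection significance from the numbers in the notice:

    ```python
    import math

    # quantities quoted above: source counts, background counts, exposure
    src_counts, bkg_counts, exposure_s = 744, 324, 3900.0

    net = src_counts - bkg_counts                               # 420 net counts
    rate = net / exposure_s                                     # ~0.11 counts/s
    rate_err = math.sqrt(src_counts + bkg_counts) / exposure_s  # ~0.008 counts/s
    significance = net / math.sqrt(src_counts + bkg_counts)     # ~13 sigma
    print(f"{rate:.2f} +/- {rate_err:.3f} counts/s, {significance:.0f} sigma")
    ```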

  9. The model of encryption algorithm based on non-positional polynomial notations and constructed on an SP-network

    NASA Astrophysics Data System (ADS)

    Kapalova, N.; Haumen, A.

    2018-05-01

    This paper addresses the structure and properties of a cryptographic information protection algorithm model based on non-positional polynomial notations (NPNs) and constructed on an SP-network. The main task of the research is to increase the cryptostrength of the algorithm. In the paper, the transformation resulting in the improvement of the cryptographic strength of the algorithm is described in detail. The proposed model is based on an SP-network. The reason for using an SP-network in this model is the transformation properties of such networks. In the encryption process, transformations based on S-boxes and P-boxes are used. It is known that these transformations can withstand cryptanalysis. In addition, in the proposed model, transformations that satisfy the requirements of the "avalanche effect" are used. As a result of this work, a computer program that implements an encryption algorithm model based on the SP-network has been developed.
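
    To illustrate the substitution-permutation structure referred to above, here is a toy one-round SP-network sketch on a 16-bit block; the S-box, bit permutation, block size and key mixing shown here are illustrative placeholders, not the components of the proposed algorithm.

    ```python
    # toy 4-bit S-box and 16-bit permutation; values are illustrative only
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
    PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

    def sp_round(block16, round_key16):
        """One substitution-permutation round on a 16-bit block."""
        x = block16 ^ round_key16                        # key mixing
        y = 0
        for nib in range(4):                             # substitution layer (S-boxes)
            y |= SBOX[(x >> (4 * nib)) & 0xF] << (4 * nib)
        z = 0
        for i in range(16):                              # permutation layer (P-box)
            z |= ((y >> i) & 1) << PERM[i]
        return z

    ciphertext_block = sp_round(0x1234, 0xA5A5)
    ```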

  10. Higuchi Dimension of Digital Images

    PubMed Central

    Ahammer, Helmut

    2011-01-01

    There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation or Fourier analysis can be employed. However, there appear to be some limitations. It is not possible to calculate only the fractal dimension of an irregular region of interest in an image or to perform the calculations in a particular direction along a line at an arbitrary angle through the image. The calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction-dependent as well as direction-independent analysis. Actual values for the fractal dimensions are reliable and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
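
    For reference, a minimal sketch of Higuchi's curve-length estimator for a 1-D signal (the per-signal computation the paper builds on) is shown below; the choice of kmax and the least-squares fit are illustrative, and the signal is assumed to be much longer than kmax.

    ```python
    import numpy as np

    def higuchi_fd(signal, kmax=10):
        """Estimate the Higuchi fractal dimension of a 1-D signal."""
        x = np.asarray(signal, dtype=float)
        n = len(x)
        log_inv_k, log_len = [], []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):                          # sub-series starting at offset m
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                dist = np.abs(np.diff(x[idx])).sum()
                norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
                lengths.append(dist * norm / k)
            log_inv_k.append(np.log(1.0 / k))
            log_len.append(np.log(np.mean(lengths)))
        # L(k) ~ k^(-D), so D is the slope of log L(k) against log(1/k)
        slope, _ = np.polyfit(log_inv_k, log_len, 1)
        return slope
    ```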

  11. Minimum airflow reset of single-duct VAV terminal boxes

    NASA Astrophysics Data System (ADS)

    Cho, Young-Hum

    Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.

  12. Cell type classifiers for breast cancer microscopic images based on fractal dimension texture analysis of image color layers.

    PubMed

    Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai

    2015-01-01

    Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis. © Wiley Periodicals, Inc.
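
    As a reference for the feature-extraction step, a minimal box-counting sketch for an already thresholded (binary) image is given below; the box sizes are illustrative, the image is assumed to contain at least one foreground pixel, and the Otsu thresholding and color-channel handling described above are omitted.

    ```python
    import numpy as np

    def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting fractal dimension of a 2-D binary image."""
        img = np.asarray(binary_img, dtype=bool)
        counts = []
        for s in sizes:
            # trim so the image tiles exactly into s x s boxes
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            tiles = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(tiles.any(axis=(1, 3)).sum())   # occupied boxes N(s)
        # N(s) ~ s^(-D), so D is the slope of log N(s) against log(1/s)
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope
    ```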

  13. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underlie the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box but of lower dimension; and (2) any simplex or box can be easily subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline will be expressed as B-splines of order 1; i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.

  14. A novel algorithm for solving the true coincident counting issues in Monte Carlo simulations for radiation spectroscopy.

    PubMed

    Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A

    2015-06-01

    Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.

  15. Development of a stained cell nuclei counting system

    NASA Astrophysics Data System (ADS)

    Timilsina, Niranjan; Moffatt, Christopher; Okada, Kazunori

    2011-03-01

    This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is research on neurogenesis in the intact nervous system of Manduca Sexta or the Tobacco Hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for images with a non-uniform background. Our solution adapts FRST: a computer vision algorithm which is designed to detect points of interest on circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and negates the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system with fourteen digital images of Tobacco Hornworm's nervous system collected for this study with ground-truth cell counts by biology experts. Experimental results show that our system has a minimum error of 1.41% and mean error of 16.68%, which is at least forty-four percent better than the algorithm without FRST.

  16. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  17. Automated vehicle counting using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand. However, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via the use of a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure would utilize a Raspberry Pi micro-computer to detect when a car is in a lane, and generate an accurate count of vehicle movements. The method utilized in this paper would use background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids fatigue issues that are encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
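
    A minimal sketch of the background-subtraction stage alone is shown below, using OpenCV's MOG2 subtractor rather than the exact pipeline of the paper; the lane region of interest, the occupancy thresholds and the omission of the machine-learning classification step are all simplifying assumptions.

    ```python
    import cv2

    def count_vehicles(video_path, lane_roi, min_fg_fraction=0.25):
        """Count rising-edge occupancy events in a lane region of interest.

        lane_roi = (x, y, w, h); a vehicle is counted each time the foreground
        fraction inside the ROI rises above min_fg_fraction.
        """
        cap = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        x, y, w, h = lane_roi
        count, occupied = 0, False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg_mask = subtractor.apply(frame)                  # background subtraction
            roi = fg_mask[y:y + h, x:x + w]
            fraction = cv2.countNonZero(roi) / float(w * h)
            if fraction > min_fg_fraction and not occupied:    # rising edge: new vehicle
                count += 1
                occupied = True
            elif fraction < min_fg_fraction / 2:               # hysteresis against double counts
                occupied = False
        cap.release()
        return count
    ```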

  18. Fractal analysis and its impact factors on pore structure of artificial cores based on the images obtained using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong

    2012-11-01

    Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe the complexity of pore structure exactly. Fractal theory is an effective and available method for quantifying complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method based on fractal theory was applied to characterize the pore structure of artificial cores. The microstructure or pore distribution in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand-packed bed model were selected as the experimental material to investigate the influence of box sizes, threshold value, and image resolution when performing fractal analysis. To avoid the influence of box sizes, a sequence of divisors of the image size was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence) on its ability to partition the image completely while yielding the smallest fitting error. Comparison of manually and automatically selected threshold values showed that the threshold plays an important role in image binarization and that the minimum-error method can be used to obtain an appropriate or reasonable one. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution. Higher image resolution can detect more of the pore structure and increases its measured irregularity. Taking these influencing factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can be used to distinguish the different kinds of artificial cores and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b·ln(x + c).
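
    The divisor-based sequence of box sizes proposed above can be generated directly; a minimal sketch, assuming a square image of side `image_size` pixels:

    ```python
    def divisor_box_sizes(image_size, smallest=2):
        """Box sizes that tile an image of side image_size exactly (divisor sequence)."""
        return [s for s in range(smallest, image_size // 2 + 1) if image_size % s == 0]

    # e.g. for a 256 x 256 MR image: [2, 4, 8, 16, 32, 64, 128]
    print(divisor_box_sizes(256))
    ```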

  19. A comparison between skeleton and bounding box models for falling direction recognition

    NASA Astrophysics Data System (ADS)

    Narupiyakul, Lalita; Srisrisawang, Nitikorn

    2017-12-01

    Falling is an injury that can lead to a serious medical condition for people in every age range. However, in the case of the elderly, the risk of serious injury is much higher. Due to the fact that one way of preventing serious injury is to treat the fallen person as soon as possible, several works attempted to implement different algorithms to recognize the fall. Our work compares the performance of two models based on feature extraction: (i) Body joint data (Skeleton Data), which are the joints' positions in 3 axes, and (ii) Bounding box (Box-size Data) covering all body joints. Machine learning algorithms that were chosen are Decision Tree (DT), Naïve Bayes (NB), K-nearest neighbors (KNN), Linear discriminant analysis (LDA), Voting Classification (VC), and Gradient boosting (GB). The results illustrate that the models trained with Skeleton data performed far better than those trained with Box-size data (with an average accuracy of 94-81% and 80-75%, respectively). KNN shows the best performance in both the Body joint model and the Bounding box model. In conclusion, KNN with the Body joint model performs best among the others.

  20. Feasibility study of extremity dosemeter based on polyallyldiglycolcarbonate (CR-39) for neutron exposure.

    PubMed

    Chau, Q; Bruguier, P

    2007-01-01

    In nuclear facilities, some activities such as reprocessing, recycling and production of bare fuel rods expose the workers to mixed neutron-photon fields. For several workplaces, particularly in glove boxes, some workers expose their hands to mixed fields. Photon extremity dosimetry is relatively well mastered, whereas neutron dosimetry still raises difficulties. In this context, the Institute for Radiological Protection and Nuclear Safety (IRSN) has proposed a study on a passive neutron extremity dosemeter based on chemically etched CR-39 (PADC: polyallyldiglycolcarbonate), named PN-3, already used in routine practice for whole body dosimetry. This dosemeter is a chip of plastic sensitive to recoil protons. The chemical etching process amplifies the size of the impact. The reading system for track counting is composed of a microscope, a video camera and an image analyser. This system is combined with the dose evaluation algorithm. The performance of the dosemeter PN-3 has been widely studied and proven by several laboratories as a passive individual neutron dosemeter used in routine production by different companies. This study focuses on the sensitivity of the extremity dosemeter, as well as its performance as a function of neutron energy. The dosemeter was exposed to monoenergetic neutron fields in laboratory conditions and to mixed fields in glove boxes at workplaces.

  1. Learning Extended Finite State Machines

    NASA Technical Reports Server (NTRS)

    Cassel, Sofia; Howar, Falk; Jonsson, Bengt; Steffen, Bernhard

    2014-01-01

    We present an active learning algorithm for inferring extended finite state machines (EFSM)s, combining data flow and control behavior. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses the tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query in general must have to be usable in our learning model. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.

  2. Validation of accelerometer wear and nonwear time classification algorithm.

    PubMed

    Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S

    2011-02-01

    The use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. The purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. We conducted a validation study of a wear or nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. The recommended elements in the new algorithm are as follows: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero or nonzero counts, and 3) allowance of 2-min interval of nonzero counts with the upstream or downstream 30-min consecutive zero-count window for detection of artifactual movements. Compared with the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P values < 0.001). The accelerometer wear or nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
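
    A simplified sketch of the recommended rules (90-min zero-count window with a 2-min non-zero spike allowance) is given below; the additional requirement of 30-min consecutive zero-count windows around a spike is deliberately omitted, so this approximates rather than reproduces the published algorithm.

    ```python
    import numpy as np

    def classify_nonwear(counts, window=90, spike=2):
        """Simplified wear/non-wear classification of minute-level counts.

        A minute is non-wear if it lies in a run of at least `window` minutes of
        zero counts, where isolated non-zero spikes of at most `spike` minutes
        are ignored. (The published rule additionally requires 30-min zero-count
        windows around a spike; that refinement is omitted here.)
        """
        counts = np.asarray(counts)
        zeroish = counts == 0
        n = len(counts)
        i = 0
        while i < n:                      # absorb short non-zero spikes
            if not zeroish[i]:
                j = i
                while j < n and not zeroish[j]:
                    j += 1
                if j - i <= spike:
                    zeroish[i:j] = True
                i = j
            else:
                i += 1
        nonwear = np.zeros(n, dtype=bool)
        i = 0
        while i < n:                      # mark long zero runs as non-wear
            if zeroish[i]:
                j = i
                while j < n and zeroish[j]:
                    j += 1
                if j - i >= window:
                    nonwear[i:j] = True
                i = j
            else:
                i += 1
        return nonwear
    ```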

  3. Keep Counting Those Boxes--There's More.

    ERIC Educational Resources Information Center

    Mingus, Tabitha T. Y.; Grassl, Richard M.

    1998-01-01

    Poses and solves several related extensions involving enumerating squares and rectangles. Describes how problem extensions can be developed and used in the classroom to motivate and challenge teachers and students to exert themselves mathematically. (ASK)

  4. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization.

    PubMed

    Jiang, Ailian; Zheng, Lihong

    2018-03-29

    Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime.

  5. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization

    PubMed Central

    2018-01-01

    Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime. PMID:29596336

  6. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    PubMed

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.

  7. Counting in Lattices: Combinatorial Problems from Statistical Mechanics.

    NASA Astrophysics Data System (ADS)

    Randall, Dana Jill

    In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self -avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of California dissertation year fellowship and Esprit working group "RAND". Part of this work was done while visiting ICSI and the University of Edinburgh.

  8. A Method for Counting Moving People in Video Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ε-SVR regressor to provide an estimate of this count. The algorithm takes specifically into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.

  9. A Group Action Method for Construction of Strong Substitution Box

    NASA Astrophysics Data System (ADS)

    Jamal, Sajjad Shaukat; Shah, Tariq; Attaullah, Atta

    2017-06-01

    In this paper, the method to develop cryptographically strong substitution box is presented which can be used in multimedia security and data hiding techniques. The algorithm of construction depends on the action of a projective general linear group over the set of units of the finite commutative ring. The strength of substitution box and ability to create confusion is assessed with different available analyses. Moreover, the ability of resistance against malicious attacks is also evaluated. The substitution box is examined by bit independent criterion, strict avalanche criterion, nonlinearity test, linear approximation probability test and differential approximation probability test. This substitution box is equated with well-recognized substitution boxes such as AES, Gray, APA, S8, prime of residue, Xyi and Skipjack. The comparison shows encouraging results about the strength of the proposed box. The majority logic criterion is also calculated to analyze the strength and its practical implementation.

  10. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible to humans (under certain intuitively reasonable assumptions that we justify in our paper experimentally); (ii) they operate under stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms, or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.

  11. CNV-CH: A Convex Hull Based Segmentation Approach to Detect Copy Number Variations (CNV) Using Next-Generation Sequencing Data

    PubMed Central

    De, Rajat K.

    2015-01-01

    Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which is associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides us a new dimension towards detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth of coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as a two dimensional geometrical point. A key aspect of detecting the regions with CNVs is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using a convex hull algorithm on the geometrical representation of read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on the multiple sample analysis approach resulting in a low false discovery rate with high precision. PMID:26291322

  12. CNV-CH: A Convex Hull Based Segmentation Approach to Detect Copy Number Variations (CNV) Using Next-Generation Sequencing Data.

    PubMed

    Sinha, Rituparna; Samaddar, Sandip; De, Rajat K

    2015-01-01

    Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which is associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides us a new dimension towards detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth of coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as a two dimensional geometrical point. A key aspect of detecting the regions with CNVs is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using a convex hull algorithm on the geometrical representation of read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on the multiple sample analysis approach resulting in a low false discovery rate with high precision.

  13. A Comparison of Automated and Manual Crater Counting Techniques in Images of Elysium Planitia.

    NASA Astrophysics Data System (ADS)

    Plesko, C. S.; Brumby, S. P.; Asphaug, E.

    2004-11-01

    Surveys of impact craters yield a wealth of information about Martian geology, providing clues to the relative age, local composition and erosional history of the surface. Martian craters are also of intrinsic geophysical interest, given that the processes by which they form are not entirely clear, especially cratering in ice-saturated regoliths (Plesko et al. 2004, AGU) which appear common on Mars (Squyres and Carr 1986). However, the deluge of data over the last decade has made comprehensive manual counts prohibitive, except in select regions. Given that most small craters on Mars may be secondaries from a few very recent impact events (McEwen et al. in press, Icarus 2004), using select regions for age dating introduces considerable potential for sampling error. Automation is thus an enabling planetary science technology. In contrast to machine counts, human counts are prone to human decision making, thus not intrinsically reproducible. One can address human "noise" by averaging over many human counts (Kanefsky et al. 2001), but this multiplies the already laborious effort required. In this study, we test automated crater counting algorithms developed with the Los Alamos National Laboratory genetic programming suite GENIE (Harvey et al., 2002) against established manual counts of craters in Elysium Planitia, using MOC and THEMIS data. We intend to establish the validity of our method against well-regarded hand counts (Hartmann et al. 2000), and then apply it generally to larger regions of Mars. Previous work on automated crater counting used customized algorithms (Bierhaus et al. 2003, Burl et al. 2001). Algorithms generated by genetic programming have the advantage of requiring little time or user effort to generate, so it is relatively easy to generate a suite of algorithms for varied terrain types, or to compare results from multiple algorithms for improved accuracy (Plesko et al. 2003).

  14. Fault detection method for railway wheel flat using an adaptive multiscale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Zuo, Ming J.; Lin, Jianhui; Liu, Jianxin

    2017-02-01

    This study explores the capacity of morphology analysis for railway wheel flat fault detection. A dynamic model of vehicle systems with 56 degrees of freedom was set up along with a wheel flat model to calculate the dynamic responses of the axle box. The vehicle axle box vibration signal is complicated because it not only contains the information of wheel defect, but also includes track condition information. Thus, how to extract the influential features of wheels from strong background noise effectively is a typical key issue for railway wheel fault detection. In this paper, an algorithm for adaptive multiscale morphological filtering (AMMF) was proposed, and its effect was evaluated on a simulated signal. This algorithm was then employed to study the axle box vibration caused by wheel flats, as well as the influence of track irregularity and vehicle running speed on diagnosis results. Finally, the effectiveness of the proposed method was verified by bench testing. Research results demonstrate that the AMMF extracts the influential characteristics of axle box vibration signals effectively and can diagnose wheel flat faults in real time.

  15. Countability of Planck Boxes in Quantum Branching Models

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2002-04-01

    Two popular paradigms of cosmological quantum branching are the Many World (MW) model of parallel universes (Everett, Deutsch) and the inflationary quantum foam (IQF) model (Guth, Linde). Taking Planck L,T units as physically smallest, our Big Bang miniverse with size 10^28 cm and duration 10^18 sec has some N = 10^244 elementary 4D Planck Boxes (PB) in its entire spacetime history. Using combinatorics, N! (about 10^(10^247)) is an upper estimate for the number of all possible 4D states, i.e. the scale of "eternal return" (ER; Nietzsche, Eliade) for such miniverses. To count all states in the full Megaverse (all up and down branches of the infinite tree of all MW and/or IQF miniverses) we recall that all countable infinities have the same (aleph-naught) cardinality (Cantor). Using Godel-type numbering, count PB in our miniverse by primes. This uses the first N primes. Both MW and IQF models presume splitting of miniverses as springing (potentially) from each PB, making each PB infinitely rich, inexhaustible and unique. The next branching level is counted by integers of the form p1^p2, the third level by integers p1^p2^p3, etc, ad infinitum. To count in up and down directions from "our" miniverse, different branching subsets of powers of primes can be used at all levels of tower exponentiation. Thus, all PB in all the infinitude of MW and/or IQF branches can be uniquely counted by never repeating integers (tower exponents of primes), offering escape from grim ER scenarios.

  16. Fractal analysis of MRI data for the characterization of patients with schizophrenia and bipolar disorder.

    PubMed

    Squarcina, Letizia; De Luca, Alberto; Bellani, Marcella; Brambilla, Paolo; Turkheimer, Federico E; Bertoldo, Alessandra

    2015-02-21

    Fractal geometry can be used to analyze shape and patterns in brain images. With this study we use fractals to analyze T1 data of patients affected by schizophrenia or bipolar disorder, with the aim of distinguishing between healthy and pathological brains using the complexity of brain structure, in particular of grey matter, as a marker of disease. 39 healthy volunteers, 25 subjects affected by schizophrenia and 11 patients affected by bipolar disorder underwent an MRI session. We evaluated fractal dimension of the brain cortex and its substructures, calculated with an algorithm based on the box-count algorithm. We modified this algorithm, with the aim of avoiding the segmentation processing step and using all the information stored in the image grey levels. Moreover, to increase sensitivity to local structural changes, we computed a value of fractal dimension for each slice of the brain or of the particular structure. To have reference values in comparing healthy subjects with patients, we built a template by averaging fractal dimension values of the healthy volunteers data. Standard deviation was evaluated and used to create a confidence interval. We also performed a slice by slice t-test to assess the difference at slice level between the three groups. Consistent average fractal dimension values were found across all the structures in healthy controls, while in the pathological groups we found consistent differences, indicating a change in brain and structures complexity induced by these disorders.

  17. Fractal analysis of MRI data for the characterization of patients with schizophrenia and bipolar disorder

    NASA Astrophysics Data System (ADS)

    Squarcina, Letizia; De Luca, Alberto; Bellani, Marcella; Brambilla, Paolo; Turkheimer, Federico E.; Bertoldo, Alessandra

    2015-02-01

    Fractal geometry can be used to analyze shape and patterns in brain images. With this study we use fractals to analyze T1 data of patients affected by schizophrenia or bipolar disorder, with the aim of distinguishing between healthy and pathological brains using the complexity of brain structure, in particular of grey matter, as a marker of disease. 39 healthy volunteers, 25 subjects affected by schizophrenia and 11 patients affected by bipolar disorder underwent an MRI session. We evaluated fractal dimension of the brain cortex and its substructures, calculated with an algorithm based on the box-count algorithm. We modified this algorithm, with the aim of avoiding the segmentation processing step and using all the information stored in the image grey levels. Moreover, to increase sensitivity to local structural changes, we computed a value of fractal dimension for each slice of the brain or of the particular structure. To have reference values in comparing healthy subjects with patients, we built a template by averaging fractal dimension values of the healthy volunteers data. Standard deviation was evaluated and used to create a confidence interval. We also performed a slice by slice t-test to assess the difference at slice level between the three groups. Consistent average fractal dimension values were found across all the structures in healthy controls, while in the pathological groups we found consistent differences, indicating a change in brain and structures complexity induced by these disorders.

  18. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (??) of those pixels within a local box surrounding each pixel, hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
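
    A minimal sketch of the first (bit-error removal) filter described above: pixels deviating from the local box mean by more than k local standard deviations are replaced by that mean. The box size and k are illustrative choices, not the authors' exact settings.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def despike_box_filter(img, box=5, k=3.0):
        """Replace pixels deviating more than k local standard deviations
        from the local box mean with that mean (random bit-error removal)."""
        img = np.asarray(img, dtype=float)
        local_mean = uniform_filter(img, size=box)
        local_sq_mean = uniform_filter(img * img, size=box)
        local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
        spikes = np.abs(img - local_mean) > k * local_std
        out = img.copy()
        out[spikes] = local_mean[spikes]
        return out
    ```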

  19. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    NASA Astrophysics Data System (ADS)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has always been the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions, and the results are compared with predictions from the analytical expression. In addition, the proposed fractal dimension method is tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results also confirm a self-similar fractal range in sandstone when smaller pores are excluded.

  20. Physical activity in low-income postpartum women.

    PubMed

    Wilkinson, Susan; Huang, Chiu-Mieh; Walker, Lorraine O; Sterling, Bobbie Sue; Kim, Minseong

    2004-01-01

    To validate the 7-day physical activity recall (PAR), including alternative PAR scoring algorithms, using pedometer readings with low-income postpartum women, and to describe physical activity patterns of a low-income population of postpartum women. Forty-four women (13 African American, 19 Hispanic, and 12 White) from the Austin New Mothers Study (ANMS) were interviewed at 3 months postpartum. Data were scored alternatively according to the Blair (sitting treated as light activity) and Welk (sitting excluded from light activity and treated as rest) algorithms. Step counts based on 3 days of wearing pedometers served as the validation measure. Using the Welk algorithm, PAR components significantly correlated with step counts were: minutes spent in light activity, total activity (sum of light to very hard activity), and energy expenditure. Minutes spent in sitting were negatively correlated with step counts. No PAR component activities derived with the Blair algorithm were significantly related to step counts. The largest amount of active time was spent in light activity: 384.4 minutes with the Welk algorithm. Mothers averaged fewer than 16 minutes per day in moderate or high intensity activity. Step counts measured by pedometers averaged 6,262 (SD = 2,712) per day. The findings indicate support for the validity of the PAR as a measure of physical activity with low-income postpartum mothers when scored according to the Welk algorithm. On average, low-income postpartum women in this study did not meet recommendations for amount of moderate or high intensity physical activity.

  1. [Spatial distribution pattern and fractal analysis of Larix chinensis populations in Qinling Mountain].

    PubMed

    Guo, Hua; Wang, Xiaoan; Xiao, Yaping

    2005-02-01

    In this paper, the fractal characteristics of Larix chinensis populations in Qinling Mountain were studied by the contiguous grid quadrat sampling method and by box-counting dimension and information dimension. The results showed that the high box-counting dimension (1.8087) and information dimension (1.7931) reflected a high degree of spatial occupation by L. chinensis populations. Judged by the dispersal index and Morisita's pattern index, L. chinensis populations clumped at three different age stages (0-25, 25-50 and over 50 years). Greig-Smith's mean variance analysis of pattern scale showed that L. chinensis populations clumped at 128 m2 and 512 m2, and that the different age groups clumped at different scales. The pattern intensities decreased with increasing age, and tended to decrease with increasing area when assessed by Kershaw's PI index. The spatial pattern characteristics of L. chinensis populations may reflect their responses to environmental factors.

  2. Automated detection of sperm whale sounds as a function of abrupt changes in sound intensity

    NASA Astrophysics Data System (ADS)

    Walker, Christopher D.; Rayborn, Grayson H.; Brack, Benjamin A.; Kuczaj, Stan A.; Paulos, Robin L.

    2003-04-01

    An algorithm designed to detect abrupt changes in sound intensity was developed and used to identify and count sperm whale vocalizations and to measure boat noise. The algorithm is a MATLAB routine that counts the number of occurrences for which the change in intensity level exceeds a threshold. The algorithm also permits the setting of a "dead time" interval to prevent the counting of multiple pulses within a single sperm whale click. This algorithm was used to analyze digitally sampled recordings of ambient noise obtained from the Gulf of Mexico using near-bottom-mounted EARS buoys deployed as part of the Littoral Acoustic Demonstration Center experiment. Because the background in these data varied slowly, the result of the application of the algorithm was automated detection of sperm whale clicks and creaks with results that agreed well with those obtained by trained human listeners. [Research supported by ONR.]
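
    The counting rule described above, a threshold on the change in intensity plus a dead-time lockout, is straightforward to sketch; the Python version below is illustrative only, since the authors' routine is written in MATLAB and its threshold and dead-time values are not given here.

    ```python
    import numpy as np

    def count_abrupt_changes(envelope, threshold, dead_time):
        """Count samples where the increase in intensity exceeds `threshold`,
        ignoring further triggers within `dead_time` samples of the last one."""
        delta = np.diff(np.asarray(envelope, dtype=float))
        count, last = 0, -dead_time
        for i, d in enumerate(delta):
            if d > threshold and i - last >= dead_time:
                count += 1
                last = i
        return count
    ```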

  3. BMPix and PEAK tools: New methods for automated laminae recognition and counting—Application to glacial varves from Antarctic marine sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.

    2010-03-01

    We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
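
    A minimal sketch of the zero-crossing idea, counting every passage of the gray scale curve through a wide moving average, is shown below; the window length is an illustrative assumption, and the per-lamina thickness measurements made by the PEAK tool are omitted.

    ```python
    import numpy as np

    def count_laminae_zero_crossing(gray_curve, window=51):
        """Count light/dark interval boundaries as crossings of the gray scale
        curve through a wide moving average."""
        g = np.asarray(gray_curve, dtype=float)
        kernel = np.ones(window) / window
        baseline = np.convolve(g, kernel, mode="same")   # wide moving average
        sign = np.sign(g - baseline)
        sign[sign == 0] = 1
        crossings = np.count_nonzero(np.diff(sign))      # positive and negative passages
        return crossings
    ```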

  4. A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, Fred T.

    1992-01-01

    A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.

  5. Morphological spot counting from stacked images for automated analysis of gene copy numbers by fluorescence in situ hybridization.

    PubMed

    Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli

    2002-01-01

    Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescently labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid. Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
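
    To make the role of the top-hat transform concrete, here is a small sketch (not the published algorithm) that uses a white top-hat to flatten an irregular background before thresholding and labelling candidate spots in a single slice; the structuring-element size and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def spots_in_slice(img, se_size=7, threshold=30):
    """Locate bright spikes on an irregular background in one focal-plane image.

    The white top-hat (image minus its morphological opening) removes slowly
    varying background and illumination gradients, leaving small bright peaks.
    """
    tophat = ndimage.white_tophat(img, size=se_size)
    mask = tophat > threshold
    labels, n_spots = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n_spots + 1))
    return n_spots, centroids

# Example: two bright spots sitting on a smooth illumination gradient
img = np.tile(np.linspace(0, 60, 256), (256, 1))   # background gradient
img[40, 50] = 255
img[200, 180] = 255
print(spots_in_slice(img)[0])   # -> 2
```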

  6. Optical image acquisition system for colony analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Jin, Wenbiao

    2006-02-01

    Counting of both colonies and plaques has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have made efforts toward this kind of system. Investigation shows that some existing systems have problems, as they are a new technology product. One of the main problems is image acquisition. In order to acquire colony images with good quality, an illumination box was constructed that includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform and the colony dish can be placed in the same position every time, which makes image processing easier. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.

  7. Trackside acoustic diagnosis of axle box bearing based on kurtosis-optimization wavelet denoising

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2018-04-01

    The axle box bearing is a key component of railway vehicles, and its operating condition has a significant effect on traffic safety. Acoustic diagnosis is more suitable than vibration diagnosis for trackside monitoring. The acoustic signal generated by the train axle box bearing is an amplitude- and frequency-modulated signal mixed with complex train running noise. Although empirical mode decomposition (EMD) and some improved time-frequency algorithms have proved useful in bearing vibration signal processing, it is hard to extract the bearing fault signal from severe trackside acoustic background noise with those algorithms. Therefore, a kurtosis-optimization-based wavelet packet (KWP) denoising algorithm is proposed, as kurtosis is the key time-domain indicator of a bearing fault signal. First, a geometry-based Doppler correction is applied to the signal of each sensor, and by superposing the signals of multiple sensors, random noise and impulse noise, which interfere with the kurtosis indicator, are suppressed. Then, the KWP denoising is conducted. Finally, EMD and the Hilbert transform are applied to extract the fault feature. Experimental results indicate that the proposed method consisting of KWP and EMD is superior to EMD alone.
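
    The following is a minimal sketch of the kurtosis-guided idea only (not the authors' KWP implementation): decompose the signal into wavelet-packet nodes with PyWavelets, keep the nodes whose coefficients have the highest kurtosis, and reconstruct. The wavelet, decomposition level, and number of retained nodes are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def kurtosis_wp_denoise(x, wavelet="db8", level=4, keep=3):
    """Keep only the wavelet-packet nodes with the most impulsive (high-kurtosis) content."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    scores = [kurtosis(node.data) for node in nodes]
    keep_paths = {nodes[i].path for i in np.argsort(scores)[-keep:]}

    out = pywt.WaveletPacket(data=None, wavelet=wavelet, maxlevel=level)
    for node in nodes:
        if node.path in keep_paths:
            out[node.path] = node.data          # copy high-kurtosis nodes only
    return out.reconstruct(update=False)[: len(x)]

# Example: periodic impulses (a bearing-fault-like signature) buried in noise
t = np.arange(0, 1, 1 / 20000)
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 0.01) < 0.0005)  # burst every 10 ms
noisy = impulses + np.random.normal(0, 0.5, t.size)
clean = kurtosis_wp_denoise(noisy)
print(kurtosis(noisy), kurtosis(clean))  # kurtosis should increase after denoising
```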

  8. Computing row and column counts for sparse QR and LU factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.

    2001-01-01

    We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.

  9. A noise resistant symmetric key cryptosystem based on S8 S-boxes and chaotic maps

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Anees, Amir; Aslam, Muhammad; Ahmed, Rehan; Siddiqui, Nasir

    2018-04-01

    In this manuscript, we have proposed an encryption algorithm to encrypt any digital data. The proposed algorithm is primarily based on substitution-permutation, in which the substitution process is performed by S8 substitution boxes. The proposed algorithm incorporates three different chaotic maps. We have analysed the chaotic behaviour relevant to secure communication at length and, accordingly, applied those chaotic sequences in the proposed encryption algorithm. The simulation and statistical results revealed that the proposed encryption scheme is secure against different attacks. Moreover, the encryption scheme can tolerate channel noise as well; if the encrypted data is corrupted by an unauthenticated user or by channel noise, the decryption can still be performed successfully, with some distortion. The overall results confirmed that the presented work has good cryptographic features, low computational complexity, and resistance to channel noise, which makes it suitable for low-profile mobile applications.
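
    As a toy illustration of the substitution step in a substitution-permutation design (not the S8 S-boxes or the chaotic maps of the paper), a byte-wise S-box lookup and its inverse can be written as follows; the S-box here is simply a seeded random permutation standing in for a keyed one.

```python
import numpy as np

rng = np.random.default_rng(seed=42)          # the seed plays the role of a key here
SBOX = rng.permutation(256).astype(np.uint8)  # a random byte-substitution box
INV_SBOX = np.argsort(SBOX).astype(np.uint8)  # inverse lookup table

def substitute(data: bytes) -> bytes:
    """Apply the S-box to every byte (the substitution step of an SP network)."""
    return bytes(SBOX[np.frombuffer(data, dtype=np.uint8)])

def invert(data: bytes) -> bytes:
    """Undo the substitution using the inverse S-box."""
    return bytes(INV_SBOX[np.frombuffer(data, dtype=np.uint8)])

msg = b"box-counting sample records"
assert invert(substitute(msg)) == msg
```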

  10. Advances in the theory of box integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.; Borwein, J.M.; Crandall, R.E.

    2009-06-25

    Box integrals - expectations ⟨|r|^s⟩ or ⟨|r - q|^s⟩ over the unit n-cube (or n-box) - have over three decades been occasionally given closed forms for isolated n, s. By employing experimental mathematics together with a new, global analytic strategy, we prove that for n ≤ 4 dimensions the box integrals are for any integer s hypergeometrically closed in a sense we clarify herein. For n = 5 dimensions, we show that a single unresolved integral we call K_5 stands in the way of such hyperclosure proofs. We supply a compendium of exemplary closed forms that naturally arise algorithmically from this theory.

  11. Passenger Flow Estimation and Characteristics Expansion

    DOT National Transportation Integrated Search

    2016-04-01

    Mark R. McCord (ORCID ID 0000-0002-6293-3143) The objectives of this study are to investigate the estimation of bus passenger boarding-to-alighting (B2A) flows using Automatic Passenger Count (APC) and Automatic Fare Collection (AFC) (fare-box) data ...

  12. Performance in population models for count data, part II: a new SAEM algorithm

    PubMed Central

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for the parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795

  13. Power Scaling of the Mainland Shoreline of the Atlantic Coast of the United States

    NASA Astrophysics Data System (ADS)

    Vasko, E.; Barton, C. C.; Geise, G. R.; Rizki, M. M.

    2017-12-01

    The fractal dimension of the mainland shoreline of the Atlantic coast of the United States from Maine to Homestead, FL has been measured in 1000 km increments using the box-counting method. The shoreline analyzed is the NOAA Medium Resolution Shoreline (https://shoreline.noaa.gov/data/datasheets/medres.html). The shoreline was reconstituted into sequentially numbered X-Y coordinate points in UTM Zone 18N, spaced 50 meters apart as measured continuously along the shoreline. We created a MATLAB computer code to measure the fractal dimension by box counting while "walking" along the shoreline. The range of box sizes is 0.7 to 450 km. The fractal dimension ranges from 1.0 to 1.5 along the mainland shoreline of the Atlantic coast. The fractal dimension is compared with beach particle size (bedrock outcrop, cobbles, pebbles, sand, clay), tidal range, rate of sea level rise, rate and direction of vertical crustal movement, and wave energy, looking for correlation with the measured fractal dimensions. The results show a correlation between high fractal dimensions (1.3 - 1.4) and tectonically emergent coasts, and low fractal dimensions (1.0 - 1.2) along submergent and stable coastal regions. Fractal dimension averages 1.3 along shorelines with shoreline protection structures such as seawalls, jetties, and groins.
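
    For reference, a minimal box-counting sketch in the spirit of the approach described above (not the authors' MATLAB code): it covers a set of X-Y shoreline points with square boxes of decreasing size and estimates the dimension from the slope of log N(ε) versus log(1/ε). The box sizes and test curve are illustrative.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal (box-counting) dimension of a set of 2-D points.

    points    : (n, 2) array of X-Y coordinates (e.g., shoreline vertices)
    box_sizes : iterable of box edge lengths, in the same units as the points
    """
    mins = points.min(axis=0)
    counts = []
    for eps in box_sizes:
        # index of the box occupied by each point; unique rows = occupied boxes
        idx = np.floor((points - mins) / eps).astype(int)
        counts.append(len(np.unique(idx, axis=0)))
    # slope of log N(eps) vs log(1/eps) gives the dimension estimate
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example: points along a straight line should give a dimension close to 1
line = np.column_stack([np.linspace(0, 1000, 20000), np.linspace(0, 500, 20000)])
sizes = [200, 100, 50, 25, 12.5, 6.25]
print(box_counting_dimension(line, sizes))   # ~1.0
```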

  14. Iterative reconstruction of simulated low count data: a comparison of post-filtering versus regularised OSEM

    NASA Astrophysics Data System (ADS)

    Karaoglanis, K.; Efthimiou, N.; Tsoumpas, C.

    2015-09-01

    Low count PET data is a challenge for medical image reconstruction. The count statistics of a dataset are a key factor in the quality of the reconstructed images. Reconstruction algorithms able to compensate for low count datasets could provide the means to reduce patient injected doses and/or scan times. It has been shown that the use of priors improves image quality in low count conditions. In this study we compared regularised versus post-filtered OSEM for their performance on challenging simulated low count datasets. Initial visual comparison demonstrated that both algorithms improve the image quality, although the use of regularization does not introduce the undesired blurring that post-filtering does.

  15. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Hanson, Bradley D; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-07-28

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images.

  16. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Hanson, Bradley D.; Barber, Andrew; Freitas, Amy; Robles, Daniel; Whelan, Erin

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a three-step image analysis process based upon: (1) an orthographic plant projection method integrating a perspective transform with automatic parameter estimation; (2) a plant counting method based on projection histograms; and (3) a double-counting avoidance method based on a homography transform. Experimental results demonstrate the ability to count large numbers of plants automatically with no human effort. Results show that, for tree seedlings having a height up to 40 cm and a within-row tree spacing of approximately 10 cm, the algorithms successfully estimated the number of plants with an average accuracy of 95.2% for trees within a single image and 98% for counting of the whole plant population in a large sequence of images. PMID:26225982

  17. A real-time phoneme counting algorithm and application for speech rate monitoring.

    PubMed

    Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava

    2017-03-01

    Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Box Office Forecasting considering Competitive Environment and Word-of-Mouth in Social Networks: A Case Study of Korean Film Market.

    PubMed

    Kim, Taegu; Hong, Jungsik; Kang, Pilsung

    2017-01-01

    Accurate box office forecasting models are developed by considering competition and word-of-mouth (WOM) effects in addition to screening-related information. Nationality, genre, ratings, and distributors of motion pictures running concurrently with the target motion picture are used to describe the competition, whereas the numbers of informative, positive, and negative mentions posted on social network services (SNS) are used to gauge the atmosphere spread by WOM. Among these candidate variables, only significant variables are selected by genetic algorithm (GA), based on which machine learning algorithms are trained to build forecasting models. The forecasts are combined to improve forecasting performance. Experimental results on the Korean film market show that the forecasting accuracy in early screening periods can be significantly improved by considering competition. In addition, WOM has a stronger influence on total box office forecasting. Considering both competition and WOM improves forecasting performance to a larger extent than when only one of them is considered.

  19. Box Office Forecasting considering Competitive Environment and Word-of-Mouth in Social Networks: A Case Study of Korean Film Market

    PubMed Central

    Kim, Taegu; Hong, Jungsik

    2017-01-01

    Accurate box office forecasting models are developed by considering competition and word-of-mouth (WOM) effects in addition to screening-related information. Nationality, genre, ratings, and distributors of motion pictures running concurrently with the target motion picture are used to describe the competition, whereas the numbers of informative, positive, and negative mentions posted on social network services (SNS) are used to gauge the atmosphere spread by WOM. Among these candidate variables, only significant variables are selected by genetic algorithm (GA), based on which machine learning algorithms are trained to build forecasting models. The forecasts are combined to improve forecasting performance. Experimental results on the Korean film market show that the forecasting accuracy in early screening periods can be significantly improved by considering competition. In addition, WOM has a stronger influence on total box office forecasting. Considering both competition and WOM improves forecasting performance to a larger extent than when only one of them is considered. PMID:28819355

  20. Predicting CD4 count changes among patients on antiretroviral treatment: Application of data mining techniques.

    PubMed

    Kebede, Mihiretu; Zegeye, Desalegn Tigabu; Zeleke, Berihun Megabiaw

    2017-12-01

    To monitor the progress of therapy and disease progression, periodic CD4 counts are required throughout the course of HIV/AIDS care and support. The demand for CD4 count measurement has increased as ART programs have expanded over the last decade. This study aimed to predict CD4 count changes and to identify the predictors of CD4 count changes among patients on ART. A cross-sectional study was conducted at the University of Gondar Hospital on 3,104 adult patients on ART with CD4 counts measured at least twice (baseline and most recent). Data were retrieved from the HIV care clinic electronic database and patients' charts. Descriptive data were analyzed with SPSS version 20. The Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology was followed to undertake the study. WEKA version 3.8 was used to conduct predictive data mining. Before building the predictive data mining models, information gain values and correlation-based feature selection methods were used for attribute selection. Variables were ranked according to their relevance based on their information gain values. J48, Neural Network, and Random Forest algorithms were tested to assess model accuracy. The median duration of ART was 191.5 weeks. The mean CD4 count change was 243 (SD 191.14) cells per microliter. Overall, 2427 (78.2%) patients had their CD4 counts increase by at least 100 cells per microliter, while 4% had a decline from the baseline CD4 value. Baseline variables including age, educational status, CD8 count, ART regimen, and hemoglobin levels predicted CD4 count changes, with predictive accuracies of 87.1%, 83.5%, and 99.8% for J48, Neural Network, and Random Forest, respectively. The Random Forest algorithm had superior performance accuracy to both J48 and the Artificial Neural Network. The precision, sensitivity and recall values of Random Forest were also more than 99%. Nearly accurate prediction results were obtained using the Random Forest algorithm. This algorithm could be used in a low-resource setting to build a web-based prediction model for CD4 count changes. Copyright © 2017 Elsevier B.V. All rights reserved.
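
    The study used WEKA; as a rough, hypothetical illustration of the same modelling step in Python (scikit-learn rather than WEKA, with synthetic data and invented feature coding), a Random Forest classifier for CD4-change categories might be set up as follows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: age, baseline CD8 count, hemoglobin, regimen code, education code
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(18, 65, 500),          # age (years)
    rng.integers(200, 1500, 500),       # baseline CD8 count (cells/uL)
    rng.normal(13, 2, 500),             # hemoglobin (g/dL)
    rng.integers(0, 4, 500),            # ART regimen (coded)
    rng.integers(0, 3, 500),            # educational status (coded)
])
y = rng.integers(0, 2, 500)             # 1 = CD4 gain >= 100 cells/uL, 0 = otherwise

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())   # accuracy on the synthetic data
```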

  1. Pulse shape discrimination of Cs2LiYCl6:Ce3+ detectors at high count rate based on triangular and trapezoidal filters

    NASA Astrophysics Data System (ADS)

    Wen, Xianfei; Enqvist, Andreas

    2017-09-01

    Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and substantially high pulse shape discrimination performance. A disadvantage of CLYC detectors is their long scintillation decay times, which cause pulse pile-up at moderate input count rates. Pulse processing algorithms were developed based on triangular and trapezoidal filters to discriminate between neutrons and γ-rays at high count rate. The algorithms were first tested using low-rate data, where they exhibit a pulse-shape discrimination performance comparable to that of the charge comparison method. They were then evaluated at high count rate. Neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm developed using the triangular filter exhibits discrimination capability marginally higher than that of the trapezoidal-filter-based algorithm, irrespective of low or high rate. The algorithms exhibit low computational complexity and are executable on an FPGA in real time. They are also suitable for application to other radiation detectors whose pulses pile up at high rate owing to long scintillation decay times.

  2. An algorithm for determining the rotation count of pulsars

    NASA Astrophysics Data System (ADS)

    Freire, Paulo C. C.; Ridolfi, Alessandro

    2018-06-01

    We present here a simple, systematic method for determining the correct global rotation count of a radio pulsar; an essential step for the derivation of an accurate phase-coherent ephemeris. We then build on this method by developing a new algorithm for determining the global rotational count for pulsars with sparse timing data sets. This makes it possible to obtain phase-coherent ephemerides for pulsars for which this has been impossible until now. As an example, we do this for PSR J0024-7205aa, an extremely faint Millisecond pulsar (MSP) recently discovered in the globular cluster 47 Tucanae. This algorithm has the potential to significantly reduce the number of observations and the amount of telescope time needed to follow up on new pulsar discoveries.

  3. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.

    PubMed

    Chandrasekaran, Jeyamala; Thiruvengadam, S J

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.

  4. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption

    PubMed Central

    Chandrasekaran, Jeyamala; Thiruvengadam, S. J.

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. PMID:26550603

  5. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
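
    As a small illustration of the Box-Cox step described above (not ERIM's sensor model), SciPy can estimate the power-law parameter by maximum likelihood and transform skewed data toward normality before a Gaussian-based detector is applied; the data here are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=2000)   # clearly non-Gaussian data

# Box-Cox requires strictly positive data; lambda is fit by maximum likelihood
transformed, lam = stats.boxcox(skewed)

print("fitted lambda:", round(lam, 3))
print("skewness before:", round(stats.skew(skewed), 3))
print("skewness after :", round(stats.skew(transformed), 3))   # much closer to 0
```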

  6. A 3D Split Manufacturing Approach to Trustworthy System Development

    DTIC Science & Technology

    2012-12-01

    addition of any cryptographic algorithm or implementation to be included in the system as a foundry-level option. Essentially, 3D security introduces...8192 bytes). We modeled our cryptographic process after the AES algorithm, which can occupy up to 4640 bytes with an enlarged T-Box implementation [4...Reconfigurable Systems and Algorithms (ERSA), Las Vegas, NV, July 2011. [10] Intelligence Advanced Research Projects Agency (IARPA). Trusted integrated

  7. An improved multi-domain convolution tracking algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, and the network is pre-trained on the VOT video set using a multi-domain training strategy. During online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the target predicted in the previous frame, and the candidate with the highest score is taken as the predicted target for the current frame. A bounding box regression model is introduced to bring the predicted target closer to the ground-truth box of the test set. A grouping-update strategy extracts and selects useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically with the help of a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms, and success-rate and accuracy plots are drawn, which illustrate the outstanding performance of the proposed tracker.

  8. Analysis of the COS/NUV Extraction Box Heights

    NASA Astrophysics Data System (ADS)

    Synder, Elaine M.; Sonnentrucker, Paule

    2017-08-01

    We present a diagnostic test of the extraction box heights (EBHs) used when extracting NUV spectroscopic data. This study was motivated by a discrepancy between the EBH used by the COS Exposure Time Calculator during observation planning (8 pixels) and the COS calibration pipeline (CALCOS) during the creation of the final spectra (57 pixels). In this Instrument Science Report, we study the effects of decreasing the EBH on the net counts and signal-to-noise ratio for CALCOS-reduced spectra of many different targets. We also provide detailed instructions for users who wish to perform a custom extraction for their data.

  9. THE USE OF BOX MODELS TO DESCRIBE THE PERSONAL CLOUD EFFECT ON HUMAN EXPOSURE TO PARTICULATE MATTER

    EPA Science Inventory

    An algorithm has been developed to describe particle transport into and out of the breathing zone in an effort to predict the effects of the personal cloud phenomenon (Eisner and Heist, 2000). The algorithm was developed based on the principle of mass balance between a system ...

  10. Multi-strategy coevolving aging particle optimization.

    PubMed

    Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante

    2014-02-01

    We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.

  11. A high dynamic range pulse counting detection system for mass spectrometry.

    PubMed

    Collings, Bruce A; Dima, Martian D; Ivosev, Gordana; Zhong, Feng

    2014-01-30

    A high dynamic range pulse counting system has been developed that demonstrates an ability to operate at up to 2e8 counts per second (cps) on a triple quadrupole mass spectrometer. Previous pulse counting detection systems have typically been limited to about 1e7 cps at the upper end of their dynamic range. Modifications to the detection electronics and dead time correction algorithm are described in this paper. A high gain transimpedance amplifier is employed that allows a multi-channel electron multiplier to be operated at a significantly lower bias potential than in previous pulse counting systems. The system utilises a high-energy conversion dynode, a multi-channel electron multiplier, a high gain transimpedance amplifier, non-paralysing detection electronics and a modified dead time correction algorithm. Modification of the dead time correction algorithm is necessary due to a characteristic of the pulse counting electronics. A pulse counting detection system with the capability to count at ion arrival rates of up to 2e8 cps is described. This is shown to provide a linear dynamic range of nearly five orders of magnitude for a sample of alprazolam with concentrations ranging from 0.0006970 ng/mL to 3333 ng/mL while monitoring the m/z 309.1 → m/z 205.2 transition. This represents an upward extension of the detector's linear dynamic range of about two orders of magnitude. A new high dynamic range pulse counting system has been developed demonstrating the ability to operate at up to 2e8 cps on a triple quadrupole mass spectrometer. This provides an upward extension of the detector's linear dynamic range by about two orders of magnitude over previous pulse counting systems. Copyright © 2013 John Wiley & Sons, Ltd.
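
    The abstract mentions a non-paralysing dead-time correction; for orientation, the standard non-paralysable correction (not necessarily the modified algorithm described in the paper) recovers the true rate n from the measured rate m and dead time τ as n = m / (1 - mτ), sketched below with an illustrative dead time.

```python
def nonparalyzable_correction(measured_cps, dead_time_s):
    """Standard non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    loss_fraction = measured_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("measured rate is inconsistent with the given dead time")
    return measured_cps / (1.0 - loss_fraction)

# Example: 1e8 cps measured with an assumed 2 ns dead time
print(nonparalyzable_correction(1.0e8, 2e-9))   # ~1.25e8 true cps
```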

  12. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
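
    To illustrate the single-event model-fitting idea (as a sketch only: the paper proposes a non-iterative successive-integration fit, whereas this example uses SciPy's generic least-squares fitter), a bi-exponential pulse can be fit as follows; the pulse parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, t0, tau_rise, tau_decay):
    """Bi-exponential detector pulse starting at time t0."""
    s = np.clip(t - t0, 0, None)
    return A * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)) * (t >= t0)

# Synthetic noisy pulse (arbitrary units, time in microseconds)
t = np.linspace(0, 10, 1000)
true = biexp(t, A=100.0, t0=2.0, tau_rise=0.1, tau_decay=1.5)
noisy = true + np.random.normal(0, 2.0, t.size)

popt, _ = curve_fit(biexp, t, noisy, p0=[80.0, 1.8, 0.2, 1.0])
print(popt)   # recovered [A, t0, tau_rise, tau_decay], close to the true values
```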

  13. Detection and Counting of Orchard Trees from Vhr Images Using a Geometrical-Optical Model and Marked Template Matching

    NASA Astrophysics Data System (ADS)

    Maillard, Philippe; Gomes, Marília F.

    2016-06-01

    This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types. These images all have < 1 meter spatial resolution and were downloaded from the GoogleEarth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
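
    As a simplified stand-in for the geometrical-optical template matching described above (it uses a plain synthetic disc template and scikit-image's normalized cross-correlation, not the authors' illumination model), trees can be counted as correlation peaks above a threshold; all sizes and thresholds are assumptions.

```python
import numpy as np
from skimage.feature import match_template, peak_local_max

def count_round_crowns(image, crown_radius=10, threshold=0.6):
    """Count bright, roughly circular crowns by template matching with a disc."""
    size = 2 * crown_radius + 1
    yy, xx = np.mgrid[:size, :size] - crown_radius
    template = (xx**2 + yy**2 <= crown_radius**2).astype(float)   # disc template
    corr = match_template(image, template, pad_input=True)
    peaks = peak_local_max(corr, min_distance=crown_radius, threshold_abs=threshold)
    return len(peaks), peaks

# Example: three synthetic crowns on a dark background
img = np.zeros((200, 200))
yy, xx = np.mgrid[:200, :200]
for cy, cx in [(50, 60), (120, 40), (150, 150)]:
    img[(yy - cy)**2 + (xx - cx)**2 <= 100] = 1.0
print(count_round_crowns(img)[0])   # -> 3
```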

  14. Array-based infra-red detection: an enabling technology for people counting, sensing, tracking, and intelligent detection

    NASA Astrophysics Data System (ADS)

    Stogdale, Nick; Hollock, Steve; Johnson, Neil; Sumpter, Neil

    2003-09-01

    A 16x16 element un-cooled pyroelectric detector array has been developed which, when allied with advanced tracking and detection algorithms, has created a universal detector with multiple applications. Low-cost manufacturing techniques are used to fabricate a hybrid detector, intended for economic use in commercial markets. The detector has found extensive application in accurate people counting, detection, tracking, secure area protection, directional sensing and area violation; topics which are all pertinent to the provision of Homeland Security. The detection and tracking algorithms have, when allied with interpolation techniques, allowed a performance much higher than might be expected from a 16x16 array. This paper reviews the technology, with particular attention to the array structure, algorithms and interpolation techniques and outlines its application in a number of challenging market areas. Viewed from above, moving people are seen as 'hot blobs' moving through the field of view of the detector; background clutter or stationary objects are not seen and the detector works irrespective of lighting or environmental conditions. Advanced algorithms detect the people and extract size, shape, direction and velocity vectors allowing the number of people to be detected and their trajectories of motion to be tracked. Provision of virtual lines in the scene allows bi-directional counting of people flowing in and out of an entrance or area. Definition of a virtual closed area in the scene allows counting of the presence of stationary people within a defined area. Definition of 'counting lines' allows the counting of people, the ability to augment access control devices by confirming a 'one swipe one entry' judgement and analysis of the flow and destination of moving people. For example, passing the 'wrong way' up a denied passageway can be detected. Counting stationary people within a 'defined area' allows the behaviour and size of groups of stationary people to be analysed and counted, an alarm condition can also be generated when people stray into such areas.

  15. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    PubMed Central

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

    This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. As occlusion is present, an algorithm was implemented to reduce it. The tracking is performed with adaptive Kalman filter. Finally, the selected geometric features: estimated area, height, and width are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results in eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate or recall, precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078

  16. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    PubMed

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when the data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted the FEC data best. Results indicated that the transformation of FEC data using the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between weeks 12 and 26 of a 26-week experimental challenge period are genetically correlated.

  17. Impact of non-specific normal databases on perfusion quantification of low-dose myocardial SPECT studies.

    PubMed

    Scabbio, Camilla; Zoccarato, Orazio; Malaspina, Simona; Lucignani, Giovanni; Del Sole, Angelo; Lecchi, Michela

    2017-10-17

    To evaluate the impact of non-specific normal databases on the percent summed rest score (SR%) and stress score (SS%) from low-dose SPECT studies simulated by shortening the acquisition time per projection. Forty normal-weight and 40 overweight/obese patients underwent myocardial studies with a conventional gamma-camera (BrightView, Philips) using three different acquisition times per projection: 30, 15, and 8 s (100%-counts, 50%-counts, and 25%-counts scans, respectively), reconstructed using the iterative algorithm with resolution recovery (IRR) Astonish™ (Philips). Three sets of normal databases were used: (1) full-counts IRR; (2) half-counts IRR; and (3) full-counts traditional reconstruction algorithm (TRAD). The impact of these databases and the acquired count statistics on the SR% and SS% was assessed by ANOVA analysis and the Tukey test (P < 0.05). Significantly higher SR% and SS% values (> 40%) were found for the full-counts TRAD databases with respect to the IRR databases. For overweight/obese patients, significantly higher SS% values for 25%-counts scans (+19%) were confirmed compared to those of 50%-counts scans, independently of whether the half-counts or the full-counts IRR databases were used. Astonish™ requires the adoption of its own specific normal databases in order to prevent a very large overestimation of both stress and rest perfusion scores. Conversely, the count statistics of the normal databases seem not to influence the quantification scores.

  18. Evaluating remotely sensed plant count accuracy with differing unmanned aircraft system altitudes, physical canopy separations, and ground covers

    NASA Astrophysics Data System (ADS)

    Leiva, Josue Nahun; Robbins, James; Saraswat, Dharmendra; She, Ying; Ehsani, Reza

    2017-07-01

    This study evaluated the effect of flight altitude and canopy separation of container-grown Fire Chief™ arborvitae (Thuja occidentalis L.) on counting accuracy. Images were taken at 6, 12, and 22 m above the ground using unmanned aircraft systems. Plants were spaced to achieve three canopy separation treatments: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. Plants were placed on two different ground covers: black fabric and gravel. A counting algorithm was trained using Feature Analyst®. Total counting error, false positives, and unidentified plants were reported for the images analyzed. In general, total counting error was smaller when plants were fully separated. The effect of ground cover on counting accuracy varied with the counting algorithm. Total counting error for plants placed on gravel (-8) was larger than for those on black fabric (-2); however, false positive counts were similar for black fabric (6) and gravel (6). Nevertheless, output images of plants placed on gravel did not show a negative effect of the ground cover, but were impacted by differences in image spatial resolution.

  19. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    PubMed

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models have been built using multiple learning algorithms including support vector machine and random forest. The models were built on public Ames mutagenicity data and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretations were found to link closely with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.

  20. Improving the estimation of mealtime insulin dose in adults with type 1 diabetes: the Normal Insulin Demand for Dose Adjustment (NIDDA) study.

    PubMed

    Bao, Jiansong; Gilbertson, Heather R; Gray, Robyn; Munns, Diane; Howard, Gabrielle; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C

    2011-10-01

    Although carbohydrate counting is routine practice in type 1 diabetes, hyperglycemic episodes are common. A food insulin index (FII) has been developed and validated for predicting the normal insulin demand generated by mixed meals in healthy adults. We sought to compare a novel algorithm on the basis of the FII for estimating mealtime insulin dose with carbohydrate counting in adults with type 1 diabetes. A total of 28 patients using insulin pump therapy consumed two different breakfast meals of equal energy, glycemic index, fiber, and calculated insulin demand (both FII = 60) but approximately twofold difference in carbohydrate content, in random order on three consecutive mornings. On one occasion, a carbohydrate-counting algorithm was applied to meal A (75 g carbohydrate) for determining bolus insulin dose. On the other two occasions, carbohydrate counting (about half the insulin dose as meal A) and the FII algorithm (same dose as meal A) were applied to meal B (41 g carbohydrate). A real-time continuous glucose monitor was used to assess 3-h postprandial glycemia. Compared with carbohydrate counting, the FII algorithm significantly decreased glucose incremental area under the curve over 3 h (-52%, P = 0.013) and peak glucose excursion (-41%, P = 0.01) and improved the percentage of time within the normal blood glucose range (4-10 mmol/L) (31%, P = 0.001). There was no significant difference in the occurrence of hypoglycemia. An insulin algorithm based on physiological insulin demand evoked by foods in healthy subjects may be a useful tool for estimating mealtime insulin dose in patients with type 1 diabetes.

  1. Improving the Estimation of Mealtime Insulin Dose in Adults With Type 1 Diabetes

    PubMed Central

    Bao, Jiansong; Gilbertson, Heather R.; Gray, Robyn; Munns, Diane; Howard, Gabrielle; Petocz, Peter; Colagiuri, Stephen; Brand-Miller, Jennie C.

    2011-01-01

    OBJECTIVE Although carbohydrate counting is routine practice in type 1 diabetes, hyperglycemic episodes are common. A food insulin index (FII) has been developed and validated for predicting the normal insulin demand generated by mixed meals in healthy adults. We sought to compare a novel algorithm on the basis of the FII for estimating mealtime insulin dose with carbohydrate counting in adults with type 1 diabetes. RESEARCH DESIGN AND METHODS A total of 28 patients using insulin pump therapy consumed two different breakfast meals of equal energy, glycemic index, fiber, and calculated insulin demand (both FII = 60) but approximately twofold difference in carbohydrate content, in random order on three consecutive mornings. On one occasion, a carbohydrate-counting algorithm was applied to meal A (75 g carbohydrate) for determining bolus insulin dose. On the other two occasions, carbohydrate counting (about half the insulin dose as meal A) and the FII algorithm (same dose as meal A) were applied to meal B (41 g carbohydrate). A real-time continuous glucose monitor was used to assess 3-h postprandial glycemia. RESULTS Compared with carbohydrate counting, the FII algorithm significantly decreased glucose incremental area under the curve over 3 h (–52%, P = 0.013) and peak glucose excursion (–41%, P = 0.01) and improved the percentage of time within the normal blood glucose range (4–10 mmol/L) (31%, P = 0.001). There was no significant difference in the occurrence of hypoglycemia. CONCLUSIONS An insulin algorithm based on physiological insulin demand evoked by foods in healthy subjects may be a useful tool for estimating mealtime insulin dose in patients with type 1 diabetes. PMID:21949219

  2. Multivariate calibration in Laser-Induced Breakdown Spectroscopy quantitative analysis: The dangers of a 'black box' approach and how to avoid them

    NASA Astrophysics Data System (ADS)

    Safi, A.; Campanella, B.; Grifoni, E.; Legnaioli, S.; Lorenzetti, G.; Pagnotta, S.; Poggialini, F.; Ripoll-Seguer, L.; Hidalgo, M.; Palleschi, V.

    2018-06-01

    The introduction of the multivariate calibration curve approach in Laser-Induced Breakdown Spectroscopy (LIBS) quantitative analysis has led to a general improvement of LIBS analytical performance, since a multivariate approach allows exploiting the redundancy of elemental information that is typically present in a LIBS spectrum. Software packages implementing multivariate methods are available in the most widespread commercial and open source analytical programs; in most cases, the multivariate algorithms are robust against noise and operate in unsupervised mode. The other side of the coin of the availability and ease of use of such packages is the (perceived) difficulty in assessing the reliability of the results obtained, which often leads to the multivariate algorithms being treated as 'black boxes' whose inner mechanism is supposed to remain hidden from the user. In this paper, we discuss the dangers of a 'black box' approach in LIBS multivariate analysis, and how to overcome them using the chemical-physical knowledge that is at the base of any LIBS quantitative analysis.

  3. Three-Dimensional Registration for Handheld Profiling Systems Based on Multiple Shot Structured Light

    PubMed Central

    Ayaz, Shirazi Muhammad; Kim, Min Young

    2018-01-01

    In this article, a multi-view registration approach for the 3D handheld profiling system based on the multiple shot structured light technique is proposed. The multi-view registration approach is categorized into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of proposed algorithm was performed with other variants of ICP. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552

  4. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components, before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we adapted the tools to the rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition, a coarser-grained bright layer, and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
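
    The two simplest counting strategies described above can be sketched in a few lines: peak counting on a Gaussian-smoothed curve, and crossings of the curve about a wide moving average. The smoothing and window sizes below are illustrative, not the PEAK tool's defaults.

```python
# Sketch of two of the counting strategies described above (illustrative
# window sizes, not the PEAK tool's defaults).
import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d
from scipy.signal import find_peaks

def maximum_count(gray, sigma=3.0):
    """Count bright peaks (one per couplet/year) in a smoothed gray-scale curve."""
    smooth = gaussian_filter1d(gray.astype(float), sigma)
    peaks, _ = find_peaks(smooth)
    return len(peaks)

def zero_crossing_count(gray, window=101):
    """Count halfway passages of the curve through a wide moving average
    (each up- or down-crossing separates a bright from a dark interval)."""
    detrended = gray.astype(float) - uniform_filter1d(gray.astype(float), window)
    signs = np.sign(detrended)
    signs[signs == 0] = 1
    return int(np.count_nonzero(np.diff(signs)))
```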

  5. Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kimura, Shinji

    In this paper, a new heuristic algorithm is proposed to optimize the power domain clustering in controlling-value-based (CV-based) power gating technology. In this algorithm, both the switching activity of sleep signals (p) and the overall number of sleep gates (gate count, N) are considered, and the sum of the product of p and N is optimized. The algorithm effectively exploits the total power reduction obtainable from CV-based power gating. Even when the maximum depth is kept the same, the proposed algorithm can still achieve approximately 10% more power reduction than the prior algorithms. Furthermore, a detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms is also presented. HSPICE simulation results show that over 26% of total power reduction can be obtained by using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also shown in this paper.

  6. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-07

    We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  7. Performance characterization of image and video analysis systems at Siemens Corporate Research

    NASA Astrophysics Data System (ADS)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security, etc. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  8. Optimization of heterogeneous Bin packing using adaptive genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sridhar, R.; Chandrasekaran, M.; Sriramya, C.; Page, Tom

    2017-03-01

    This research concentrates on bin packing using a hybrid genetic approach. The optimal and feasible packing of goods for transportation and distribution to various locations while satisfying practical constraints is the key point of this work: the number of boxes to be packed cannot be predicted in advance, the boxes are not always of the same category, and many practical constraints are involved, which is why optimal packing matters so much to industry. This work presents a heuristic Genetic Algorithm (HGA) for solving the three-dimensional (3D) single-container, arbitrarily sized, rectangular prismatic bin packing optimization problem while considering most of the practical constraints faced in the logistics industry. This goal was achieved by optimizing the empty volume inside the container using the genetic approach. A feasible packing pattern was achieved by satisfying various practical constraints such as box orientation, stack priority, container stability, weight, overlapping, and shipment placement. The 3D bin packing problem consists of n boxes to be packed into a container of standard dimensions in such a way as to maximize volume utilization and, in turn, profit. Furthermore, the boxes to be packed may be of arbitrary sizes. The user input data are the number of bins, their size, shape, weight, and constraints, if any, along with the standard container dimension. This user input was stored in the database and encoded into the string (chromosome) format normally accepted by a GA. GA operators were then applied to these encoded strings to find the best solution. A toy sketch of this GA machinery is given below.
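
    The sketch referenced above shows only the GA machinery (permutation chromosome, order crossover, swap mutation) under stated simplifications: the decoder fills the container by volume in chromosome order and ignores geometry, orientation, and the other constraints handled by the paper's full method. All function names and parameter values are illustrative.

```python
# Toy GA sketch: permutation chromosome, order crossover, swap mutation.
# The decoder here fills the container by volume only (no geometry), unlike
# the paper's constraint-aware placement.
import random

def decode(order, volumes, capacity):
    used = 0.0
    for i in order:
        if used + volumes[i] <= capacity:
            used += volumes[i]
    return used                      # fitness = packed volume

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def ga_pack(volumes, capacity, pop_size=40, generations=200):
    pop = [random.sample(range(len(volumes)), len(volumes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: decode(o, volumes, capacity), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            c = order_crossover(*random.sample(elite, 2))
            if random.random() < 0.2:                # swap mutation
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = elite + children
    best = pop[0]
    return best, decode(best, volumes, capacity)
```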

  9. Long-term fluctuations in circalunar Beach aggregations of the box jellyfish Alatina moseri in Hawaii, with links to environmental variability.

    PubMed

    Chiaverano, Luciano M; Holland, Brenden S; Crow, Gerald L; Blair, Landy; Yanagihara, Angel A

    2013-01-01

    The box jellyfish Alatina moseri forms monthly aggregations at Waikiki Beach 8-12 days after each full moon, posing a recurrent hazard to swimmers due to painful stings. We present an analysis of long-term (14 years: Jan 1998-Dec 2011) changes in box jellyfish abundance at Waikiki Beach. We tested the relationship of beach counts to climate and biogeochemical variables over time in the North Pacific Sub-tropical Gyre (NPSG). Generalized Additive Models (GAM), Change-Point Analysis (CPA), and General Regression Models (GRM) were used to characterize patterns in box jellyfish arrival at Waikiki Beach 8-12 days following 173 consecutive full moons. Variation in box jellyfish abundance lacked seasonality, but exhibited dramatic differences among months and among years, and followed an oscillating pattern with significant periods of increase (1998-2001; 2006-2011) and decrease (2001-2006). Of three climatic and 12 biogeochemical variables examined, box jellyfish showed a strong, positive relationship with primary production, >2 mm zooplankton biomass, and the North Pacific Gyre Oscillation (NPGO) index. It is clear that the moon cycle plays a key role in synchronizing the timing of the arrival of Alatina moseri medusae to shore. We propose that bottom-up processes, likely initiated by inter-annual regional climatic fluctuations, influence primary production, secondary production, and ultimately regulate food availability, and are therefore important in controlling the inter-annual changes in box jellyfish abundance observed at Waikiki Beach.

  10. Long-Term Fluctuations in Circalunar Beach Aggregations of the Box Jellyfish Alatina moseri in Hawaii, with Links to Environmental Variability

    PubMed Central

    Chiaverano, Luciano M.; Holland, Brenden S.; Crow, Gerald L.; Blair, Landy; Yanagihara, Angel A.

    2013-01-01

    The box jellyfish Alatina moseri forms monthly aggregations at Waikiki Beach 8–12 days after each full moon, posing a recurrent hazard to swimmers due to painful stings. We present an analysis of long-term (14 years: Jan 1998–Dec 2011) changes in box jellyfish abundance at Waikiki Beach. We tested the relationship of beach counts to climate and biogeochemical variables over time in the North Pacific Sub-tropical Gyre (NPSG). Generalized Additive Models (GAM), Change-Point Analysis (CPA), and General Regression Models (GRM) were used to characterize patterns in box jellyfish arrival at Waikiki Beach 8–12 days following 173 consecutive full moons. Variation in box jellyfish abundance lacked seasonality, but exhibited dramatic differences among months and among years, and followed an oscillating pattern with significant periods of increase (1998–2001; 2006–2011) and decrease (2001–2006). Of three climatic and 12 biogeochemical variables examined, box jellyfish showed a strong, positive relationship with primary production, >2 mm zooplankton biomass, and the North Pacific Gyre Oscillation (NPGO) index. It is clear that the moon cycle plays a key role in synchronizing the timing of the arrival of Alatina moseri medusae to shore. We propose that bottom-up processes, likely initiated by inter-annual regional climatic fluctuations, influence primary production, secondary production, and ultimately regulate food availability, and are therefore important in controlling the inter-annual changes in box jellyfish abundance observed at Waikiki Beach. PMID:24194856

  11. Systematic wavelength selection for improved multivariate spectral analysis

    DOEpatents

    Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.

    1995-01-01

    Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g. the concentration of an analyte such as glucose in blood or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets that optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.

  12. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate some regions of interest in an image, we propose a novel algorithm based on the multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined following the idea of the box-counting dimension method. Therefore, we call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested by two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
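
    As a one-dimensional illustration of the detrended moving average idea that MF-DMA generalizes, the sketch below estimates only the q = 2 (Hurst) exponent of a series using a centered moving average (θ = 0.5); it does not compute the per-pixel h(q) spectrum used in the paper, and the window sizes are illustrative.

```python
# 1-D sketch of detrended moving average (DMA) fluctuation analysis; only the
# q = 2 exponent with a centered window (theta = 0.5) is estimated here, not the
# per-pixel multifractal spectrum used in the paper.
import numpy as np
from scipy.ndimage import uniform_filter1d

def dma_hurst(x, windows=(4, 8, 16, 32, 64, 128)):
    profile = np.cumsum(np.asarray(x, float) - np.mean(x))
    log_n, log_f = [], []
    for n in windows:
        trend = uniform_filter1d(profile, size=n, mode="nearest")  # centered MA
        residual = profile - trend
        log_n.append(np.log(n))
        log_f.append(np.log(np.sqrt(np.mean(residual ** 2))))
    slope, _ = np.polyfit(log_n, log_f, 1)
    return slope          # generalized Hurst exponent h(2)

# Example: white noise should give h close to 0.5
print(dma_hurst(np.random.randn(4096)))
```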

  13. The suppression effect of a periodic surface with semicircular grooves on the high power microwave long pill-box window multipactor phenomenon

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Wang, Yong; Fan, Junjie; Zhong, Yong; Zhang, Rui

    2014-09-01

    To improve the transmitting power in an S-band klystron, a long pill-box window that has a disk with grooves with a semicircular cross section is theoretically investigated and simulated. A Monte-Carlo algorithm is used to track the secondary electron trajectories and analyze the multipactor scenario in the long pill-box window and on the grooved surface. Extending the height of the long-box window can decrease the normal electric field on the surface of the window disk, but the single surface multipactor still exists. It is confirmed that the window disk with periodic semicircular grooves can explicitly suppress the multipactor and predominantly depresses the local field enhancement and the bottom continuous multipactor. The difference between semicircular and sharp boundary grooves is clarified numerically and analytically.

  14. Dismantling of the PETRA glove box: tritium contamination and inventory assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, R.

    2015-03-15

    The PETRA facility is the first installation in which experiments with tritium were carried out at the Tritium Laboratory Karlsruhe. After completion of two main experimental programs, the decommissioning of PETRA was initiated with the aim to reuse the glove box and its main still valuable components. A decommissioning plan was engaged to: -) identify the source of tritium release in the glove box, -) clarify the status of the main components, -) assess residual tritium inventories, and -) de-tritiate the components to be disposed of as waste. Several analytical techniques - calorimetry on small solid samples, wipe test followed by liquid scintillation counting for surface contamination assessment, gas chromatography on gaseous samples - were deployed and cross-checked to assess the remaining tritium inventories and initiate the decommissioning process. The methodology and the main outcomes of the numerous different tritium measurements are presented and discussed. (authors)

  15. Repeatability of paired counts.

    PubMed

    Alexander, Neal; Bethony, Jeff; Corrêa-Oliveira, Rodrigo; Rodrigues, Laura C; Hotez, Peter; Brooker, Simon

    2007-08-30

    The Bland and Altman technique is widely used to assess the variation between replicates of a method of clinical measurement. It yields the repeatability, i.e. the value within which 95 per cent of repeat measurements lie. The valid use of the technique requires that the variance is constant over the data range. This is not usually the case for counts of items such as CD4 cells or parasites, nor is the log transformation applicable to zero counts. We investigate the properties of generalized differences based on Box-Cox transformations. For an example, in a data set of hookworm eggs counted by the Kato-Katz method, the square root transformation is found to stabilize the variance. We show how to back-transform the repeatability on the square root scale to the repeatability of the counts themselves, as an increasing function of the square mean root egg count, i.e. the square of the average of square roots. As well as being more easily interpretable, the back-transformed results highlight the dependence of the repeatability on the sample volume used.
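
    The transformation logic described above can be illustrated numerically: a Bland-Altman repeatability is computed on square-root-transformed counts and then expressed back on the count scale as a function of the square mean root count. The code uses simulated paired counts and is an illustration of the back-transform only, not the authors' analysis code.

```python
# Hedged numerical sketch of the square-root-scale repeatability and its
# back-transform to the count scale (simulated data, not the authors' code).
import numpy as np

def sqrt_scale_repeatability(counts_1, counts_2):
    """1.96 * SD of within-pair differences on the square-root scale."""
    d = np.sqrt(counts_1) - np.sqrt(counts_2)
    return 1.96 * np.std(d, ddof=1)

def count_scale_repeatability(r_sqrt, counts_1, counts_2):
    """Back-transform: |x1 - x2| = |sqrt(x1) - sqrt(x2)| * (sqrt(x1) + sqrt(x2)),
    so the limit grows with the square mean root m = ((sqrt(x1)+sqrt(x2))/2)**2."""
    smr = ((np.sqrt(counts_1) + np.sqrt(counts_2)) / 2.0) ** 2
    return 2.0 * r_sqrt * np.sqrt(smr)

# Example with simulated paired egg counts
rng = np.random.default_rng(0)
x1, x2 = rng.poisson(100, 50), rng.poisson(100, 50)
r = sqrt_scale_repeatability(x1, x2)
print(r, count_scale_repeatability(r, x1, x2)[:5])
```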

  16. Extending Differential Fault Analysis to Dynamic S-Box Advanced Encryption Standard Implementations

    DTIC Science & Technology

    2014-09-18

    entropy. At the same time, researchers strive to enhance AES and mitigate these growing threats. This paper researches the extension of existing...the algorithm or use side channels to reduce entropy, such as Differential Fault Analysis (DFA). At the same time, continuing research strives to...the state matrix. The S-box is an 8-bit 16x16 table built from an affine transformation on multiplicative inverses which guarantees full permutation (S

  17. Fractal Analyses of High-Resolution Cloud Droplet Measurements.

    NASA Astrophysics Data System (ADS)

    Malinowski, Szymon P.; Leclerc, Monique Y.; Baumgardner, Darrel G.

    1994-02-01

    Fractal analyses of individual cloud droplet distributions using aircraft measurements along one-dimensional horizontal cross sections through clouds are performed. Box counting and cluster analyses are used to determine spatial scales of inhomogeneity of cloud droplet spacing. These analyses reveal that droplet spatial distributions do not exhibit a fractal behavior. A high variability in local droplet concentration in cloud volumes undergoing mixing was found. In these regions, thin filaments of cloudy air with droplet concentration close to those observed in cloud cores were found. Results suggest that these filaments may be anisotropic. Additional box counting analyses performed for various classes of cloud droplet diameters indicate that large and small droplets are similarly distributed, except for the larger characteristic spacing of large droplets. A cloud-clear air interface defined by a certain threshold of total droplet count (TDC) was investigated. There are indications that this interface is a convoluted surface of a fractal nature, at least in actively developing cumuliform clouds. In contrast, TDC in the cloud interior does not have fractal or multifractal properties. Finally, a random Cantor set (RCS) was introduced as a model of a fractal process with an ill-defined internal scale. A uniform measure associated with the RCS after several generations was introduced to simulate the TDC records. Comparison of the model with real TDC records indicates similar properties of both types of data series.
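
    For reference, a minimal box-counting dimension estimate for a one-dimensional set of droplet positions (e.g., along a flight track) is sketched below. The box-size range is illustrative, and this is not the authors' analysis code.

```python
# Minimal box-counting sketch for a one-dimensional set of droplet positions;
# illustrative only, not the authors' analysis code.
import numpy as np

def box_counting_dimension(positions, n_scales=10):
    x = np.sort(np.asarray(positions, float))
    x = (x - x.min()) / (x.max() - x.min())           # normalize to [0, 1]
    sizes = np.logspace(-3, -0.5, n_scales)           # box sizes (fraction of track)
    counts = [len(np.unique(np.floor(x / s))) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope                                      # ~1 for space-filling points

# A uniform random set should give a dimension close to 1, whereas strongly
# clustered (fractal) spacing would give a lower value.
print(box_counting_dimension(np.random.rand(5000)))
```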

  18. Data Mining and Machine Learning in Astronomy

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Brunner, Robert J.

    We review the current state of data mining and machine learning in astronomy. Data Mining can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data, promising great scientific advance. However, if misused, it can be little more than the black box application of complex computing algorithms that may give little physical insight, and provide questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines, applications from a broad range of astronomy, emphasizing those in which data mining techniques directly contributed to improving science, and important current and future directions, including probability density functions, parallel algorithms, Peta-Scale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can very much be a powerful tool, and not a questionable black box.

  19. 77 FR 67261 - Airworthiness Directives; Aeronautical Accessories, Inc., High Landing Gear Forward Crosstube...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-09

    ... assemblies (crosstubes) installed on Agusta S.p.A. (Agusta) Model AB412 and AB412EP; and Bell Helicopter... based on a supplemental type certificate (STC). This AD requires counting and recording the total number.... ADDRESSES: For service information identified in this AD, contact Aeronautical Accessories, Inc., P.O. Box...

  20. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 2

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1990-01-01

    It is shown how the look-ahead Lanczos process (combined with a quasi-minimal residual (QMR) approach) can be used to develop a robust black box solver for large sparse non-Hermitian linear systems. Details of an implementation of the resulting QMR algorithm are presented. It is demonstrated that the QMR method is closely related to the biconjugate gradient (BCG) algorithm; however, unlike BCG, the QMR algorithm has smooth convergence curves and good numerical properties. We report numerical experiments with our implementation of the look-ahead Lanczos algorithm, both for eigenvalue problems and linear systems. Also, program listings of FORTRAN implementations of the look-ahead algorithm and the QMR method are included.

  1. Assessment of the Accountability and Control of Arms, Ammunition, and Explosives (AA&E) Provided to the Security Forces of Afghanistan

    DTIC Science & Technology

    2009-09-11

    Corps to verify weapons and ammunition accountability. It conducted a serial number inventory of 67 weapons, including ten M16 rifles, 40 AK47 ...11. Mossberg M590A1 pump 12 gauge Former Warsaw Pact Weapons (FWP) 1. AK47 rifle and variants (AK47, AK74, AMD65, VZ58, AKMS) 2. RPK light machine...recorded as EL 4910 AK47: 30 counted o Serial No. IA0499 recorded as IA0489 VZ58: 90 counted o 2 weapons were in the wrong boxes

  2. The Correlation Fractal Dimension of Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liu, Zhenzhen; Wang, Mogei

    2013-05-01

    The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with the previous algorithms for estimating the box dimension, our algorithm achieves a significant reduction in time complexity. For four benchmark cases tested, that is, the Escherichia coli (E. Coli) metabolic network, the Homo sapiens protein interaction network (H. Sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. Cerevisiae PIN) and the World Wide Web (WWW), experiments are provided to demonstrate the validity of our algorithm.
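
    The quantity being estimated can be sketched as follows: the correlation sum C(r) is the fraction of node pairs within shortest-path distance r, and the correlation dimension is the slope of log C(r) versus log r over the scaling range. This illustrates the definition only; it does not reproduce the authors' reduced-complexity algorithm, and the radius values are illustrative.

```python
# Sketch of a correlation-dimension estimate for a network (definition only,
# not the authors' reduced-complexity algorithm).
import numpy as np
import networkx as nx

def correlation_dimension(G, r_values=(2, 3, 4, 5, 6)):
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    n = G.number_of_nodes()
    dists = np.array([d for src in lengths.values() for d in src.values()])
    log_r, log_c = [], []
    for r in r_values:
        c = np.count_nonzero((dists > 0) & (dists <= r)) / (n * (n - 1))
        if c > 0:
            log_r.append(np.log(r))
            log_c.append(np.log(c))
    slope, _ = np.polyfit(log_r, log_c, 1)
    return slope

print(correlation_dimension(nx.watts_strogatz_graph(500, 6, 0.05)))
```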

  3. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid method based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven cavity, a backward-facing step, and a sudden expansion/contraction.

  4. CombiMotif: A new algorithm for network motifs discovery in protein-protein interaction networks

    NASA Astrophysics Data System (ADS)

    Luo, Jiawei; Li, Guanghui; Song, Dan; Liang, Cheng

    2014-12-01

    Discovering motifs in protein-protein interaction networks is becoming a current major challenge in computational biology, since the distribution of the number of network motifs can reveal significant systemic differences among species. However, this task can be computationally expensive because of the involvement of graph isomorphic detection. In this paper, we present a new algorithm (CombiMotif) that incorporates combinatorial techniques to count non-induced occurrences of subgraph topologies in the form of trees. The efficiency of our algorithm is demonstrated by comparing the obtained results with the current state-of-the-art subgraph counting algorithms. We also show major differences between unicellular and multicellular organisms. The datasets and source code of CombiMotif are freely available upon request.

  5. Reliable Detection and Smart Deletion of Malassez Counting Chamber Grid in Microscopic White Light Images for Microbiological Applications.

    PubMed

    Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul

    2015-08-01

    In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.

  6. A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones.

    PubMed

    Kang, Xiaomin; Huang, Baoqi; Qi, Guodong

    2018-01-19

    Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and sensitivity of gyroscopes, the proposed algorithm extracts the frequency domain features from three-dimensional (3D) angular velocities of a smartphone through FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted by involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves a precision of 93.76% and recall of 93.65% for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74%, and is better than several well-known counterparts and commercial products.
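
    A minimal sketch of the frequency-domain idea follows: walking is declared when the dominant frequency of the gyroscope magnitude falls in a plausible step band, and steps are counted from that frequency. The band limits and the power-ratio threshold are illustrative values, not the paper's trained parameters.

```python
# Minimal frequency-domain walking detection and step counting sketch
# (illustrative thresholds, not the paper's trained values).
import numpy as np

def detect_and_count(gyro_xyz, fs, band=(1.0, 3.0), power_ratio=0.2):
    """gyro_xyz: (N, 3) angular velocities; fs: sampling rate in Hz.
    Returns (is_walking, estimated_step_count) for the window."""
    mag = np.linalg.norm(gyro_xyz, axis=1)
    mag = mag - mag.mean()                      # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(mag)) ** 2
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if spectrum[in_band].sum() < power_ratio * spectrum[1:].sum():
        return False, 0                         # too little energy in the step band
    step_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    duration = len(mag) / fs
    return True, int(round(step_freq * duration))
```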

  7. A flexible system to capture sample vials in a storage box - the box vial scanner.

    PubMed

    Nowakowski, Steven E; Kressin, Kenneth R; Deick, Steven D

    2009-01-01

    Tracking sample vials in a research environment is a critical task and doing so efficiently can have a large impact on productivity, especially in high volume laboratories. There are several challenges to automating the capture process, including the variety of containers used to store samples. We developed a fast and robust system to capture the location of sample vials being placed in storage that allows the laboratories the flexibility to use sample containers of varying dimensions. With a single scan, this device captures the box identifier, the vial identifier and the location of each vial within a freezer storage box. The sample vials are tracked through a barcode label affixed to the cap while the boxes are tracked by a barcode label on the side of the box. Scanning units are placed at the point of use and forward data to a server application for processing the scanned data. Scanning units consist of an industrial barcode reader mounted in a fixture positioning the box for scanning and providing lighting during the scan. The server application transforms the scan data into a list of storage locations holding vial identifiers. The list is then transferred to the laboratory database. The box vial scanner captures the IDs and location information for an entire box of sample vials into the laboratory database in a single scan. The system accommodates a wide variety of vial sizes by inserting risers under the sample box and a variety of storage box layouts are supported via the processing algorithm on the server.

  8. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.

    PubMed

    Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P

    2016-12-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indiumchloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion is characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms to reconstruct clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back-projection (FBP), OSEM and 2DOSEM. However, occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7 respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively. Regularization with smoothing priors could suppress these noise patterns at the cost of reduced image contrast. The mean N% was 6.4% and 6.8% for low count QSP and MRP MAP reconstructed images. Alternatively, regularization with an anatomical Bowhser prior resulted in sharp images with high contrast, limited image distortion, and low N% of 8.3% in low count images, although some image artifacts did occur. Analysis of clinical images suggested that the same effects occur in clinical imaging. Image quality of low count SPECT acquisitions reconstructed with modern 3DOSEM algorithms is deteriorated by the occurrence of correlated noise patterns and image distortions. The artifacts observed in the phantom experiments can also occur in clinical imaging. Copyright © 2015. Published by Elsevier GmbH.

  9. An automated approach for annual layer counting in ice cores

    NASA Astrophysics Data System (ADS)

    Winstrup, M.; Svensson, A.; Rasmussen, S. O.; Winther, O.; Steig, E.; Axelrod, A.

    2012-04-01

    The temporal resolution of some ice cores is sufficient to preserve seasonal information in the ice core record. In such cases, annual layer counting represents one of the most accurate methods to produce a chronology for the core. Yet, manual layer counting is a tedious and sometimes ambiguous job. As reliable layer recognition becomes more difficult, a manual approach increasingly relies on human interpretation of the available data. Thus, much may be gained by an automated and therefore objective approach for annual layer identification in ice cores. We have developed a novel method for automated annual layer counting in ice cores, which relies on Bayesian statistics. It uses algorithms from the statistical framework of Hidden Markov Models (HMM), originally developed for use in machine speech recognition. The strength of this layer detection algorithm lies in the way it is able to imitate the manual procedures for annual layer counting, while being based on purely objective criteria for annual layer identification. With this methodology, it is possible to determine the most likely position of multiple layer boundaries in an entire section of ice core data at once. It provides a probabilistic uncertainty estimate of the resulting layer count, hence ensuring a proper treatment of ambiguous layer boundaries in the data. Furthermore, multiple data series can be incorporated at once, allowing for a full multi-parameter annual layer counting method similar to a manual approach. In this study, the automated layer counting algorithm has been applied to data from the NGRIP ice core, Greenland. The NGRIP ice core has very high temporal resolution with depth, and hence the potential to be dated by annual layer counting far back in time. In previous studies [Andersen et al., 2006; Svensson et al., 2008], manual layer counting has been carried out back to 60 kyr BP. A comparison between the counted annual layers based on the two approaches will be presented and their differences discussed. Within the estimated uncertainties, the two methodologies agree. This shows the potential for a fully automated annual layer counting method to be operational for data sections where the annual layering is unknown.

  10. Health assessment of free-living eastern box turtles (Terrapene carolina carolina) in and around the Maryland Zoo in Baltimore 1996-2011.

    PubMed

    Adamovicz, Laura; Bronson, Ellen; Barrett, Kevin; Deem, Sharon L

    2015-03-01

    Health data for free-living eastern box turtles (Terrapene carolina carolina) at the Maryland Zoo in Baltimore were analyzed. One hundred and eighteen turtles were captured on or near zoo grounds over the course of 15 yr (1996-2011), with recapture of many individuals leading to 208 total evaluations. Of the 118 individuals, 61 were male, 50 were female, and 7 were of undetermined sex. Of the 208 captures, 188 were healthy, and 20 were sick or injured. Complete health evaluations were performed on 30 turtles with physical examination records, complete blood counts (CBCs), and plasma biochemistry profiles. Eight animals were sampled more than once, yielding 40 total samples for complete health evaluations of these 30 individuals. The 40 samples were divided into healthy (N=29) and sick (N=11) groups based on clinical findings on physical examination. Samples from healthy animals were further divided into male (N=17) and female (N=12) groups. CBC and biochemistry profile parameters were compared between sick and healthy groups and between healthy males and females. Sick turtles had lower albumin, globulin, total protein (TP), calcium, phosphorus, sodium, and potassium than healthy animals. Sick turtles also had higher heterophil to lymphocyte ratios. Healthy female turtles had higher leukocyte count, eosinophil count, total solids, TP, globulin, cholesterol, calcium, and phosphorus than healthy males. Banked plasma from all 40 samples was tested for antibodies to Mycoplasma agassizii and Mycoplasma testudineum via enzyme-linked immunosorbent assay. One sample from a clinically healthy female was antibody positive for M. agassizii; none were positive for M. testudineum. This study provides descriptive health data for eastern box turtles and CBC and biochemistry profile information for T. carolina carolina at and near the Maryland Zoo in Baltimore. It also reports low serologic evidence of exposure to mycoplasmosis.

  11. SU-D-BRA-04: Fractal Dimension Analysis of Edge-Detected Rectal Cancer CTs for Outcome Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Wang, J; Hu, W

    2015-06-15

    Purpose: To extract the fractal dimension features from edge-detected rectal cancer CTs, and to examine the predictability of fractal dimensions to outcomes of primary rectal cancer patients. Methods: Ninety-seven rectal cancer patients treated with neo-adjuvant chemoradiation were enrolled in this study. CT images were obtained before chemoradiotherapy. The primary lesions of the rectal cancer were delineated by experienced radiation oncologists. These images were extracted and filtered by six different Laplacian of Gaussian (LoG) filters with different filter values (0.5–3.0: from fine to coarse) to achieve primary lesions in different anatomical scales. Edges of the original images were found at zero-crossings of the filtered images. Three different fractal dimensions (box-counting dimension, Minkowski dimension, mass dimension) were calculated upon the image slice with the largest cross-section of the primary lesion. The significance of these fractal dimensions in survival, recurrence and metastasis were examined by Student’s t-test. Results: For a follow-up time of two years, 18 of 97 patients had experienced recurrence, 24 had metastasis, and 18 were dead. Minkowski dimensions under large filter values (2.0, 2.5, 3.0) were significantly larger (p=0.014, 0.006, 0.015) in patients with recurrence than in those without. For metastasis, only box-counting dimensions under a single filter value (2.5) showed differences (p=0.016) between patients with and without. For overall survival, box-counting dimensions (filter values = 0.5, 1.0, 1.5), Minkowski dimensions (filter values = 0.5, 1.5, 2.0, 2.5) and mass dimensions (filter values = 1.5, 2.0) were all significant (p<0.05). Conclusion: It is feasible to extract shape information by edge detection and fractal dimensions analysis in neo-adjuvant rectal cancer patients. This information can be used to prognosis prediction.
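
    A sketch of the described pipeline (LoG filtering, zero-crossing edge map, box-counting dimension of the edge set) is given below using scipy/numpy. The filter scale and box sizes are illustrative, not the study's settings; the Minkowski and mass dimensions are not reproduced here.

```python
# Sketch: Laplacian-of-Gaussian filtering, zero-crossing edge map, and a
# box-counting dimension of the edge set (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0):
    """Binary edge map from sign changes of the LoG-filtered image."""
    f = gaussian_laplace(image.astype(float), sigma)
    s = np.sign(f)
    edges = np.zeros_like(s, dtype=bool)
    edges[:-1, :] |= s[:-1, :] * s[1:, :] < 0    # vertical zero-crossings
    edges[:, :-1] |= s[:, :-1] * s[:, 1:] < 0    # horizontal zero-crossings
    return edges

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Slope of log N(s) vs log(1/s) for boxes of side s covering the mask."""
    log_inv_s, log_n = [], []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        n = np.count_nonzero(blocks.any(axis=(1, 3)))
        if n > 0:
            log_inv_s.append(np.log(1.0 / s))
            log_n.append(np.log(n))
    slope, _ = np.polyfit(log_inv_s, log_n, 1)
    return slope
```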

  12. Dairy Tool Box Talks: A Comprehensive Worker Training in Dairy Farming.

    PubMed

    Rovai, Maristela; Carroll, Heidi; Foos, Rebecca; Erickson, Tracey; Garcia, Alvaro

    2016-01-01

    Today's dairies are growing rapidly, with increasing dependence on Latino immigrant workers. This requires new educational strategies for improving milk quality and introduction to state-of-the-art dairy farming practices. It also creates knowledge gaps pertaining to the health of animals and workers, mainly due to the lack of time and language barriers. Owners, managers, and herdsmen assign training duties to more experienced employees, which may not promote "best practices" and may perpetuate bad habits. A comprehensive and periodic training program administered by qualified personnel is currently needed and will enhance the sustainability of the dairy industry. Strategic management and employee satisfaction will be achieved through proper training in the employee's language, typically Spanish. The training needs to address not only current industry standards but also social and cultural differences. An innovative training course was developed following the same structure used by the engineering and construction industries, giving farm workers basic understanding of animal care and handling, cow comfort, and personal safety. The "Dairy Tool Box Talks" program was conducted over a 10-week period with nine sessions according to farm's various employee work shifts. Bulk milk bacterial counts and somatic cell counts were used to evaluate milk quality on the three dairy farms participating in the program. "Dairy Tool Box Talks" resulted in a general sense of employee satisfaction, significant learning outcomes, and enthusiasm about the topics covered. We conclude this article by highlighting the importance of educational programs aimed at improving overall cross-cultural training.

  13. Optical Guidance for a Robotic Submarine

    NASA Astrophysics Data System (ADS)

    Schulze, Karl R.; LaFlash, Chris

    2002-11-01

    There is a need for autonomous submarines that can quickly and safely complete jobs, such as the recovery of a downed aircraft's black box recorder. In order to complete this feat, it is necessary to use an optical processing algorithm that distinguishes a desired target and uses the feedback from the algorithm to retrieve the target. The algorithm itself uses many bit mask filters for particle information, and then uses a unique rectation method in order to resolve complete objects. The algorithm has been extensively tested on an AUV platform, and proven to succeed repeatedly in approximately five or more feet of water clarity.

  14. GPS-ARM: Computational Analysis of the APC/C Recognition Motif by Predicting D-Boxes and KEN-Boxes

    PubMed Central

    Ren, Jian; Cao, Jun; Zhou, Yanhong; Yang, Qing; Xue, Yu

    2012-01-01

    Anaphase-promoting complex/cyclosome (APC/C), an E3 ubiquitin ligase incorporated with Cdh1 and/or Cdc20 recognizes and interacts with specific substrates, and faithfully orchestrates the proper cell cycle events by targeting proteins for proteasomal degradation. Experimental identification of APC/C substrates is largely dependent on the discovery of APC/C recognition motifs, e.g., the D-box and KEN-box. Although a number of either stringent or loosely defined motifs have been proposed, these motif patterns are only of limited use due to their insufficient predictive power. We report the development of a novel GPS-ARM software package which is useful for the prediction of D-boxes and KEN-boxes in proteins. Using experimentally identified D-boxes and KEN-boxes as the training data sets, a previously developed GPS (Group-based Prediction System) algorithm was adopted. By extensive evaluation and comparison, the GPS-ARM performance was found to be much better than that obtained using simple motifs. With this powerful tool, we predicted 4,841 potential D-boxes in 3,832 proteins and 1,632 potential KEN-boxes in 1,403 proteins from H. sapiens, while further statistical analysis suggested that both the D-box and KEN-box proteins are involved in a broad spectrum of biological processes beyond the cell cycle. In addition, with the co-localization information, we predicted hundreds of mitosis-specific APC/C substrates with high confidence. As the first computational tool for the prediction of APC/C-mediated degradation, GPS-ARM is a useful source of information for further experimental investigation. The GPS-ARM is freely accessible for academic researchers at: http://arm.biocuckoo.org. PMID:22479614
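
    To see why the simple, loosely defined motif patterns contrasted with GPS-ARM over-predict, the naive regex scan below searches a sequence for D-box cores (RxxL) and KEN-boxes. It is only an illustration of the classical patterns; GPS-ARM's group-based scoring scheme is not reproduced, and the example sequence is a made-up toy string.

```python
# Naive sketch of the classical loosely defined motif patterns (not GPS-ARM's
# group-based scoring): a regex scan for D-box cores (RxxL) and KEN-boxes.
import re

D_BOX = re.compile(r"R..L")       # minimal destruction-box core RxxL
KEN_BOX = re.compile(r"KEN")

def scan_motifs(sequence):
    seq = sequence.upper()
    return {
        "D-box": [m.start() + 1 for m in D_BOX.finditer(seq)],     # 1-based positions
        "KEN-box": [m.start() + 1 for m in KEN_BOX.finditer(seq)],
    }

# Toy sequence (synthetic, for illustration only)
print(scan_motifs("MSTRAALSDITNKENPQRKPLE"))
```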

  15. Operators manual for the magnetograph program (section 2)

    NASA Technical Reports Server (NTRS)

    November, L.; Title, A. M.

    1974-01-01

    This manual for use of the magnetograph program describes: (1) black box use of the programs; (2) the magtape data formats used; (3) the adjustable control parameters in the program; and (4) the algorithms. With no adjustments on the control parameters this program may be used purely as a black box. For optimal use, however, the control parameters may be varied. The magtape data formats are of use in adopting other programs to look at raw data or final magnetograph data.

  16. 3D Structure of Tillage Soils

    NASA Astrophysics Data System (ADS)

    González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.

    2015-04-01

    Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each one of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/multifractal methods have been increasingly applied to quantify soil structure thanks to the advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. These CT techniques reduce the physical impact of sampling, providing three-dimensional (3D) information and allowing rapid scanning to study sample dynamics in near real-time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and to the best method for estimating the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and box gliding) and of the cube size on the calculation of generalized fractal dimensions (Dq) in grey images without applying any threshold. To this end, soil samples were extracted from different areas plowed with three tools (moldboard, chisel and plow). Soil samples for each of the tillage treatments were packed into polypropylene cylinders of 8 cm diameter and 10 cm height. These were imaged using an mSIMCT at 155 keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening, and several corrections were later applied during reconstruction.
    References
    Elliot, T.R. and Heck, R.J. 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412.
    González-Torres, I. 2014. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM.
    Grau, J., Méndez, V., Tarquis, A.M., Saa, A. and Díaz, M.C. 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359.
    Houston, A.N., Schmidt, S., Tarquis, A.M., Otten, W., Baveye, P.C. and Hapca, S.M. 2013a. Effect of scanning and image reconstruction settings in X-ray computed tomography on soil image quality and segmentation performance. Geoderma, 207-208, 154-165.
    Houston, A., Otten, W., Baveye, Ph. and Hapca, S. 2013b. Adaptive-Window Indicator Kriging: A Thresholding Method for Computed Tomography. Computers & Geosciences, 54, 239-248.
    Tarquis, A.M., Heck, R.J., Andina, D., Alvarez, A. and Antón, J.M. 2009. Multifractal analysis and thresholding of 3D soil images. Ecological Complexity, 6, 230-239.
    Tarquis, A.M., Giménez, D., Saa, A., Díaz, M.C. and Gascó, J.M. 2003. Scaling and Multiscaling of Soil Pore Systems Determined by Image Analysis. In: Pachepsky, Radcliffe and Selim (Eds.), Scaling Methods in Soil Systems, 19-33. CRC Press, Boca Ratón, Florida.
    Acknowledgements
    The first author acknowledges the financial support obtained from the Soil Imaging Laboratory (University of Guelph, Canada) in 2014.
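
    Relating to the box-counting versus gliding-box comparison discussed in this record's abstract, the sketch below estimates generalized dimensions Dq from a grey-level image using the tiled box-counting partition; the gliding-box variant evaluates the same moments over overlapping window positions rather than a disjoint tiling and is not implemented here. Box sizes and q values are illustrative.

```python
# Sketch of a generalized-dimension (Dq) estimate from a grey-level image using
# the tiled box-counting partition (illustrative box sizes and q values).
import numpy as np

def generalized_dimension(image, q, sizes=(2, 4, 8, 16, 32)):
    img = np.asarray(image, float)
    img = img / img.sum()                         # treat grey levels as a measure
    log_s, log_chi = [], []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        mu = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        mu = mu[mu > 0]
        if q == 1:                                # information-dimension limit
            log_chi.append(np.sum(mu * np.log(mu)))
        else:
            log_chi.append(np.log(np.sum(mu ** q)))
        log_s.append(np.log(s))
    slope, _ = np.polyfit(log_s, log_chi, 1)      # tau(q), or D1 when q == 1
    return slope if q == 1 else slope / (q - 1)

rng = np.random.default_rng(1)
img = rng.random((256, 256))
print(generalized_dimension(img, q=0), generalized_dimension(img, q=2))  # both near 2
```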

  17. The suppression effect of a periodic surface with semicircular grooves on the high power microwave long pill-box window multipactor phenomenon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xue, E-mail: zhangxue.iecas@yahoo.com; Wang, Yong; Fan, Junjie

    2014-09-15

    To improve the transmitting power in an S-band klystron, a long pill-box window that has a disk with grooves with a semicircular cross section is theoretically investigated and simulated. A Monte-Carlo algorithm is used to track the secondary electron trajectories and analyze the multipactor scenario in the long pill-box window and on the grooved surface. Extending the height of the long-box window can decrease the normal electric field on the surface of the window disk, but the single surface multipactor still exists. It is confirmed that the window disk with periodic semicircular grooves can explicitly suppress the multipactor and predominantly depresses the local field enhancement and the bottom continuous multipactor. The difference between semicircular and sharp boundary grooves is clarified numerically and analytically.

  18. Parallel image logical operations using cross correlation

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1972-01-01

    Methods are presented for counting areas in an image in a parallel manner using noncoherent optical techniques. The techniques presented include the Levialdi algorithm for counting, optical techniques for binary operations, and cross-correlation.

  19. Characterization of the Photon Counting CHASE Jr., Chip Built in a 40-nm CMOS Process With a Charge Sharing Correction Algorithm Using a Collimated X-Ray Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krzyżanowska, A.; Deptuch, G. W.; Maj, P.

    This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process, operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge sharing related uncertainties, namely, the dependence of the number of registered photons on the discriminator's threshold, set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr., chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, an offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. The results of tests of the C8P1 operation are presented for the chip bump bonded to a silicon sensor and exposed to the 3.5-μm-wide pencil beam of 8-keV photons of synchrotron radiation. The sensitivity of the algorithm's performance to the chip settings, as well as to the uniformity of the parameters of the analog front-end blocks, was studied. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector in between readout channels and retrieving the actual photon energy.

  20. Automatic counting and classification of bacterial colonies using hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    Detection and counting of bacterial colonies on agar plates is a routine microbiology practice to get a rough estimate of the number of viable cells in a sample. There have been a variety of different automatic colony counting systems and software algorithms mainly based on color or gray-scale pictu...

  1. Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak

    Rainflow algorithms are one of the best counting methods used in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow algorithm, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm but using point comparison rather than range comparison is also presented. A comparison of the performance of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory use, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT in assessing the lifetime of a STATCOM operating for power factor correction of the load. From the 5-minute load profile data available, the lifetime is estimated to be 3.4 years.
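
    For orientation, a standard four-point rainflow formulation (with the residual counted as half cycles) is sketched below; it is not the paper's modified graphical method or its point-comparison variant, only the baseline against which those are compared.

```python
# Baseline four-point rainflow counting sketch; residual counted as half cycles.
def turning_points(series):
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x            # still rising/falling: extend the excursion
        else:
            tp.append(x)
    return tp

def rainflow_cycles(series):
    """Return (range, mean, count) triples; count is 1.0 for full cycles and
    0.5 for the residual half cycles."""
    cycles, stack = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 4:
            s1, s2, s3, s4 = stack[-4:]
            r1, r2, r3 = abs(s2 - s1), abs(s3 - s2), abs(s4 - s3)
            if r2 <= r1 and r2 <= r3:
                cycles.append((r2, (s2 + s3) / 2.0, 1.0))   # full cycle s2-s3
                del stack[-3:-1]
            else:
                break
    for a, b in zip(stack, stack[1:]):                       # residual half cycles
        cycles.append((abs(b - a), (a + b) / 2.0, 0.5))
    return cycles

# Classic test history: one full cycle of range 4 plus six residual half cycles
print(rainflow_cycles([-2, 1, -3, 5, -1, 3, -4, 4, -2]))
```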

  2. Modeling and new equipment definition for the vibration isolation box equipment system

    NASA Technical Reports Server (NTRS)

    Sani, Robert L.

    1993-01-01

    Our MSAD-funded research project is to provide numerical modeling support for the VIBES (Vibration Isolation Box Experiment System) which is an IML2 flight experiment being built by the Japanese research team of Dr. H. Azuma of the Japanese National Aerospace Laboratory. During this reporting period, the following have been accomplished: A semi-consistent mass finite element projection algorithm for 2D and 3D Boussinesq flows has been implemented on Sun, HP, and Cray platforms. The algorithm has better phase speed accuracy than similar finite difference or lumped mass finite element algorithms, an attribute which is essential for addressing realistic g-jitter effects as well as convectively-dominated transient systems. The projection algorithm has been benchmarked against solutions generated via the commercial code FIDAP. The algorithm appears to be accurate as well as computationally efficient. Optimization and potential parallelization studies are underway. Our implementation to date has focused on execution of the basic algorithm with at most a concern for vectorization. The initial time-varying gravity Boussinesq flow simulation is being set up. The mesh is being designed and the input file is being generated. Some preliminary 'small mesh' cases will be attempted on our HP9000/735 while our request to MSAD for supercomputing resources is being addressed. The Japanese research team for VIBES was visited, the current setup and status of the physical experiment were obtained, and an ongoing e-mail communication link was established.

  3. Arraycount, an algorithm for automatic cell counting in microwell arrays.

    PubMed

    Kachouie, Nezamoddin; Kang, Lifeng; Khademhosseini, Ali

    2009-09-01

    Microscale technologies have emerged as a powerful tool for studying and manipulating biological systems and miniaturizing experiments. However, the lack of software complementing these techniques has made it difficult to apply them for many high-throughput experiments. This work establishes Arraycount, an approach to automatically count cells in microwell arrays. The procedure consists of fluorescent microscope imaging of cells that are seeded in microwells of a microarray system and then analyzing images via computer to recognize the array and count cells inside each microwell. To start counting, green and red fluorescent images (representing live and dead cells, respectively) are extracted from the original image and processed separately. A template-matching algorithm is proposed in which pre-defined well and cell templates are matched against the red and green images to locate microwells and cells. Subsequently, local maxima in the correlation maps are determined and local maxima maps are thresholded. At the end, the software records the cell counts for each detected microwell on the original image in high-throughput. The automated counting was shown to be accurate compared with manual counting, with a difference of approximately 1-2 cells per microwell: based on cell concentration, the absolute difference between manual and automatic counting measurements was 2.5-13%.
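
    As a rough illustration of the template-matching and local-maxima steps described above (not the Arraycount implementation itself), the sketch below correlates a cell template against one fluorescence channel and treats thresholded correlation peaks as candidate cell detections; the function names, correlation threshold, and neighbourhood size are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import maximum_filter
        from skimage.feature import match_template   # assumes scikit-image is available

        def count_cells(channel, cell_template, corr_threshold=0.6, neighborhood=9):
            """Return (row, col) candidate cell centres in one fluorescence channel."""
            # Normalized cross-correlation peaks where the template fits the image well.
            corr = match_template(channel, cell_template, pad_input=True)
            # Keep pixels that are the maximum of their neighbourhood and exceed the threshold.
            peaks_mask = (corr == maximum_filter(corr, size=neighborhood)) & (corr > corr_threshold)
            return np.argwhere(peaks_mask)

        # Run once on the green (live) channel and once on the red (dead) channel, then
        # assign each detected centre to its enclosing microwell to obtain per-well counts.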

  4. Lana chimpanzee learns to count by 'numath' - A summary of a videotaped experimental report

    NASA Technical Reports Server (NTRS)

    Rumbaugh, Duane M.; Hopkins, William D.; Washburn, David A.; Savage-Rumbaugh, E. Sue

    1989-01-01

    Computerized training programs whereby an adult female chimpanzee learned to use a joystick to remove from a screen the number of boxes appropriate to the value of a randomly selected Arabic numeral 1, 2, or 3, are studied. Initial training provided a variety of cues, both numeric and otherwise, to support correct performance. In the final test, the ape was correct on over 80 percent of trials in which there was no residual feedback of intratrial events and where only her memory of those events could provide the cue to indicate that she had removed boxes in accordance with the value of the target numbers and should terminate the trial.

  5. Application of affinity propagation algorithm based on manifold distance for transformer PD pattern recognition

    NASA Astrophysics Data System (ADS)

    Wei, B. G.; Huo, K. X.; Yao, Z. F.; Lou, J.; Li, X. Y.

    2018-03-01

    Recognizing partial discharge (PD) patterns is one of the difficult problems encountered in research on transformer condition-based maintenance. According to the main physical characteristics of PD, three models of oil-paper insulation defects were set up in the laboratory to study the PD of transformers, and phase-resolved partial discharge (PRPD) patterns were constructed. Using the least-squares method, grey-scale images of the PRPD patterns were constructed, and the features of each grey-scale image were 28 box dimensions and 28 information dimensions. An affinity propagation algorithm based on manifold distance (AP-MD) was established for transformer PD pattern recognition, and the box dimension and information dimension data were clustered with AP-MD. The study shows that the clustering results of AP-MD are better than those of affinity propagation (AP), k-means, and the fuzzy c-means algorithm (FCM). By choosing different k values for the k-nearest-neighbor step, we find that the clustering accuracy of AP-MD falls when k is too large or too small, and that the optimal k value depends on the sample size.

  6. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumor from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation which are awkward when dealing with a large amount of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of the symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up the deficiency of the bounding box method. After segmentation, 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and actual clinical MR images. Result of the proposed method is compared with the actual position of simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces equivalent performance as manual segmentation and the interactive GrowCut with manual interference while providing fully automatic segmentation.

  7. Combined steam-ultrasound treatment of 2 seconds achieves significant high aerobic count and Enterobacteriaceae reduction on naturally contaminated food boxes, crates, conveyor belts, and meat knives.

    PubMed

    Musavian, Hanieh S; Butt, Tariq M; Larsen, Annette Baltzer; Krebs, Niels

    2015-02-01

    Food contact surfaces require rigorous sanitation procedures for decontamination, although these methods very often fail to efficiently clean and disinfect surfaces that are visibly contaminated with food residues and possible biofilms. In this study, the results of a short treatment (1 to 2 s) of combined steam (95°C) and ultrasound (SonoSteam) of industrial fish and meat transportation boxes and live-chicken transportation crates naturally contaminated with food and fecal residues were investigated. Aerobic counts of 5.0 to 6.0 log CFU/24 cm² and an Enterobacteriaceae spp. level of 2.0 log CFU/24 cm² were found on the surfaces prior to the treatment. After 1 s of treatment, the aerobic counts were significantly (P < 0.0001) reduced, and within 2 s, reductions below the detection limit (<10 CFU) were reached. Enterobacteriaceae spp. were reduced to a level below the detection limit with only 1 s of treatment. Two seconds of steam-ultrasound treatment was also applied on two different types of plastic modular conveyor belts with hinge pins and one type of flat flexible rubber belt, all visibly contaminated with food residues. The aerobic counts of 3.0 to 5.0 log CFU/50 cm² were significantly (P < 0.05) reduced, while Enterobacteriaceae spp. were reduced to a level below the detection limit. Industrial meat knives were contaminated with aerobic counts of 6.0 log CFU/5 cm² on the handle and 5.2 log CFU/14 cm² on the steel. The level of Enterobacteriaceae spp. contamination was approximately 2.5 log CFU on the handle and steel. Two seconds of steam-ultrasound treatment reduced the aerobic counts and Enterobacteriaceae spp. to levels below the detection limit on both handle and steel. This study shows that the steam-ultrasound treatment may be an effective replacement for disinfection processes and that it can be used for continuous disinfection at fast process lines. However, the treatment may not be able to replace efficient cleaning processes used to remove high loads of debris.

  8. In-Situ Assays Using a New Advanced Mathematical Algorithm - 12400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oginni, B.M.; Bronson, F.L.; Field, M.B.

    2012-07-01

    Current mathematical efficiency modeling software for in-situ counting, such as the commercially available In-Situ Object Calibration Software (ISOCS), typically allows the description of measurement geometries via a list of well-defined templates which describe regular objects, such as boxes, cylinders, or spheres. While for many situations these regular objects are sufficient to describe the measurement conditions, there are occasions in which a more detailed model is desired. We have developed a new all-purpose geometry template that can extend the flexibility of current ISOCS templates. This new template still utilizes the same advanced mathematical algorithms as current templates, but allows the extension to a multitude of shapes and objects that can be placed at any location and even combined. In addition, detectors can be placed anywhere and aimed at any location within the measurement scene. Several applications of this algorithm to in-situ waste assay measurements, as well as validations of this template using Monte Carlo calculations and experimental measurements, are studied. Presented in this paper is a new template of the mathematical algorithms for evaluating efficiencies. This new template combines all the advantages of ISOCS: it allows the use of very complex geometries, allows geometries to be stacked on one another in the same measurement scene, and allows the detector to be placed anywhere in the scene, pointing in any direction. We have shown that the template compares well with the previous ISOCS software within the limit of convergence of the code, and also compares well with MCNPX and measured data within the joint uncertainties of the code and the data. The new template agrees with ISOCS to within 1.5% at all energies. It agrees with MCNPX to within 10% at all energies and within 5% for most geometries. It finally agrees with measured data to within 10%. This mathematical algorithm can now be used for quickly and accurately evaluating efficiencies for a wider range of gamma-ray spectroscopy applications. (authors)

  9. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated with simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than those of OP-OSEM. The presented algorithm is useful for providing quality images under the condition of low count rates in dynamic studies with a short scan time.

  10. Material and Thickness Grading for Aeroelastic Tailoring of the Common Research Model Wing Box

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.; Jutte, Christine V.

    2014-01-01

    This work quantifies the potential aeroelastic benefits of tailoring a full-scale wing box structure using tailored thickness distributions, material distributions, or both simultaneously. These tailoring schemes are considered for the wing skins, the spars, and the ribs. Material grading utilizes a spatially-continuous blend of two metals: Al and Al+SiC. Thicknesses and material fraction variables are specified at the 4 corners of the wing box, and a bilinear interpolation is used to compute these parameters for the interior of the planform. Pareto fronts detailing the conflict between static aeroelastic stresses and dynamic flutter boundaries are computed with a genetic algorithm. In some cases, a true material grading is found to be superior to a single-material structure.
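
    For the corner-based scheme described above, a bilinear interpolation like the sketch below recovers a thickness or material-fraction value anywhere on the planform from the four corner design variables; the variable names and corner values are illustrative, not taken from the paper.

        def bilinear(u, v, corner_values):
            """Interpolate a design variable at normalized planform coordinates (u, v) in [0, 1].

            corner_values = (value at (0, 0), value at (1, 0), value at (0, 1), value at (1, 1)),
            e.g. skin thicknesses or Al+SiC material fractions at the four wing-box corners.
            """
            c00, c10, c01, c11 = corner_values
            return (c00 * (1 - u) * (1 - v)
                    + c10 * u * (1 - v)
                    + c01 * (1 - u) * v
                    + c11 * u * v)

        # Example: corner thicknesses 3.0, 4.0, 2.5, 3.5 give 3.25 at the planform centre.
        assert abs(bilinear(0.5, 0.5, (3.0, 4.0, 2.5, 3.5)) - 3.25) < 1e-12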

  11. First-Order Phase Transition in the Quantum Adiabatic Algorithm

    DTIC Science & Technology

    2010-01-14

    Only extraction fragments of this report record survive; the abstract is not recoverable. Recoverable metadata: subject terms "Quantum Adiabatic Algorithm, Monte Carlo, Quantum Phase Transition"; authors include A. P. Young; performing organization address P.O. Box 12211, Research Triangle Park, NC 27709-2211; the record cites Phys. Rev. Lett. 104, 020502 (2010); approved for public release, distribution unlimited.

  12. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication which are necessary to correctly implement AES [1]. This method can be applied on processors with word length 32 or above, on FPGAs, and on other platforms, and correspondingly it can be implemented in VHDL, Verilog, VB, and other languages.
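
    As background for the GF(2^8) arithmetic that such lookup tables precompute, the sketch below shows the standard shift-and-XOR multiplication with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1; it is a generic illustration, not the table-based implementation proposed in the paper.

        def gf256_mul(a, b):
            """Multiply two bytes in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
            result = 0
            for _ in range(8):
                if b & 1:
                    result ^= a          # "add" (XOR) the current multiple of a
                carry = a & 0x80
                a = (a << 1) & 0xFF      # multiply a by x
                if carry:
                    a ^= 0x1B            # reduce modulo the irreducible polynomial (0x11B)
                b >>= 1
            return result

        # Worked example from the AES specification: {57} * {83} = {C1}.
        assert gf256_mul(0x57, 0x83) == 0xC1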

  13. Diversity in Detection Algorithms for Atmospheric Rivers: A Community Effort to Understand the Consequences

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Ullrich, P. A.; Rutz, J. J.; Wehner, M. F.; Ralph, M.; Ruby, L.

    2017-12-01

    Atmospheric rivers (ARs) are long, narrow filamentary structures that transport large amounts of moisture in the lower layers of the atmosphere, typically from subtropical regions to mid-latitudes. ARs play an important role in regional hydroclimate by supplying significant amounts of precipitation that can alleviate drought or, in extreme cases, produce dangerous floods. Accurately detecting, or tracking, ARs is important not only for weather forecasting, but is also necessary to understand how these events may change under global warming. Detection algorithms are used on both regional and global scales, and are most accurate when applied to high resolution datasets or model output. Different detection algorithms can produce different answers. Detection algorithms found in the current literature fall broadly into two categories: "time-stitching", where the AR is tracked with a Lagrangian approach through time and space; and "counting", where ARs are identified at a single point in time for a single location. Counting routines can be further subdivided into algorithms that use absolute thresholds with specific geometry, algorithms that use relative thresholds, algorithms based on statistics, and pattern recognition and machine learning techniques. With such a large diversity in detection code, AR tracks and "counts" can vary widely from technique to technique. Uncertainty increases for future climate scenarios, where the difference between relative and absolute thresholding produces vastly different counts, simply due to the moister background state in a warmer world. In an effort to quantify the uncertainty associated with tracking algorithms, the AR detection community has come together to participate in ARTMIP, the Atmospheric River Tracking Method Intercomparison Project. Each participant will provide AR metrics to the greater group by applying their code to a common reanalysis dataset. MERRA2 data were chosen for both temporal and spatial resolution. After completion of this first phase, Tier 1, ARTMIP participants may choose to contribute to Tier 2, which will range from reanalysis uncertainty to analysis of future climate scenarios from high resolution model output. ARTMIP's experimental design, techniques, and preliminary metrics will be presented.

  14. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Only extraction fragments of this report record survive; the full abstract is not recoverable. The recoverable text indicates that the work develops models to mine time series with missing values, to extract compact representations from time sequences, to segment the sequences, and to do forecasting (citing ARIMA and related Box-Jenkins methods), and that, for large-scale data, it proposes algorithms for learning time series models, in particular Linear Dynamical models.

  15. cWINNOWER algorithm for finding fuzzy dna motifs

    NASA Technical Reports Server (NTRS)

    Liang, S.; Samanta, M. P.; Biegel, B. A.

    2004-01-01

    The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if a clique consisting of a sufficiently large number of mutated copies of the motif (i.e., the signals) is present in the DNA sequence. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum detectable clique size qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12,000 for (l, d) = (15, 4). Copyright Imperial College Press.

  16. cWINNOWER Algorithm for Finding Fuzzy DNA Motifs

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if multiple mutated copies of the motif (i.e., the signals) are present in the DNA sequence in sufficient abundance. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum number of detectable motifs qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc, by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12000 for (l,d) = (15,4).

  17. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangular area for location estimation, formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point giving the shortest total path to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
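
    The Fermat point used above is the point minimizing the total distance to the three beacon positions; one generic way to approximate it is Weiszfeld's iteration for the geometric median, sketched below under that assumption (this is not the DFPLE code, and it ignores degenerate and wide-angle triangle cases).

        import math

        def fermat_point(p1, p2, p3, iterations=200, eps=1e-9):
            """Approximate the point minimizing the total distance to three beacon positions."""
            pts = [p1, p2, p3]
            x = sum(p[0] for p in pts) / 3.0          # start from the centroid
            y = sum(p[1] for p in pts) / 3.0
            for _ in range(iterations):
                num_x = num_y = denom = 0.0
                for px, py in pts:
                    d = math.hypot(x - px, y - py)
                    if d < eps:                       # iterate sits on a vertex
                        return px, py
                    num_x += px / d
                    num_y += py / d
                    denom += 1.0 / d
                x, y = num_x / denom, num_y / denom   # Weiszfeld update
            return x, y

        # e.g. fermat_point((0, 0), (4, 0), (0, 3)) returns a point inside the triangle.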

  18. New methodology for evaluating osteoclastic activity induced by orthodontic load

    PubMed Central

    ARAÚJO, Adriele Silveira; FERNANDES, Alline Birra Nolasco; MACIEL, José Vinicius Bolognesi; NETTO, Juliana de Noronha Santos; BOLOGNESE, Ana Maria

    2015-01-01

    Orthodontic tooth movement (OTM) is a dynamic process of bone modeling involving osteoclast-driven resorption on the compression side. Consequently, to estimate the influence of various situations on tooth movement, experimental studies need to analyze this cell. Objectives The aim of this study was to test and validate a new method for evaluating osteoclastic activity stimulated by mechanical loading based on the fractal analysis of the periodontal ligament (PDL)-bone interface. Material and Methods The mandibular right first molars of 14 rabbits were tipped mesially by a coil spring exerting a constant force of 85 cN. To evaluate the actual influence of osteoclasts on fractal dimension of bone surface, alendronate (3 mg/Kg) was injected weekly in seven of those rabbits. After 21 days, the animals were killed and their jaws were processed for histological evaluation. Osteoclast counts and fractal analysis (by the box counting method) of the PDL-bone interface were performed in histological sections of the right and left sides of the mandible. Results An increase in the number of osteoclasts and in fractal dimension after OTM only happened when alendronate was not administered. Strong correlation was found between the number of osteoclasts and fractal dimension. Conclusions Our results suggest that osteoclastic activity leads to an increase in bone surface irregularity, which can be quantified by its fractal dimension. This makes fractal analysis by the box counting method a potential tool for the assessment of osteoclastic activity on bone surfaces in microscopic examination. PMID:25760264
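
    For reference, the box-counting estimate of fractal dimension used above can be sketched as follows for a 2-D binary image of the traced interface; the grid sizes and input representation are illustrative assumptions, not the study's exact protocol.

        import numpy as np

        def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
            """Estimate the fractal dimension of a 2-D binary set (e.g. the PDL-bone interface)."""
            counts = []
            for s in box_sizes:
                n = 0
                for i in range(0, binary_image.shape[0], s):
                    for j in range(0, binary_image.shape[1], s):
                        if binary_image[i:i + s, j:j + s].any():   # box intersects the set
                            n += 1
                counts.append(n)
            # N(s) ~ s^(-D): the dimension D is minus the slope of log N(s) versus log s.
            slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
            return -slope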

  19. Singularity analysis based on wavelet transform of fractal measures for identifying geochemical anomaly in mineral exploration

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2016-02-01

    Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method was proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrated that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we presented a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving-average method, the novel LSA using WMD identified improved weak geochemical anomalies associated with mineralization in the covered area.

  20. Spatial Statistics for Tumor Cell Counting and Classification

    NASA Astrophysics Data System (ADS)

    Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas

    To count and classify cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires assessing the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable. This is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper will be shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell counting performance of the present method is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.

  1. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    NASA Astrophysics Data System (ADS)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality which captures a photo of the eye pattern. The markings of the iris are so distinctive that they have been proposed as a means of identification, instead of fingerprints. Iris recognition was chosen for identification in this research because the iris differs between individuals and is protected by the cornea, so its shape remains fixed. This iris recognition consists of three steps: pre-processing of the data, feature extraction, and feature matching. The Hough transform is used in pre-processing to locate the iris area, and Daugman's rubber sheet model is used to normalize the iris data set into rectangular blocks. To find the characteristics of the iris, the box counting method was used to obtain the fractal dimension values of the iris. Tests were carried out using the k-fold cross-validation method with k = 5. Each test used 10 different values of K for the K-Nearest Neighbor (KNN) classifier. The best iris recognition accuracy obtained was 92.63%, for K = 3 in the K-Nearest Neighbor (KNN) method.
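
    A minimal sketch of the evaluation stage described above, assuming a feature matrix X of per-iris box-counting fractal-dimension features and a label vector y, and using scikit-learn rather than the authors' own code:

        from sklearn.model_selection import KFold, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        def evaluate_knn(X, y, k_values=(1, 3, 5, 7, 9)):
            """Mean 5-fold cross-validation accuracy for several K of the KNN classifier."""
            folds = KFold(n_splits=5, shuffle=True, random_state=0)
            return {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=folds).mean()
                    for k in k_values}

        # X: one row of fractal-dimension features per normalized iris image; y: subject labels.
        # The best K is simply the one with the highest mean cross-validation score.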

  2. Dairy Tool Box Talks: A Comprehensive Worker Training in Dairy Farming

    PubMed Central

    Rovai, Maristela; Carroll, Heidi; Foos, Rebecca; Erickson, Tracey; Garcia, Alvaro

    2016-01-01

    Today’s dairies are growing rapidly, with increasing dependence on Latino immigrant workers. This requires new educational strategies for improving milk quality and introduction to state-of-the-art dairy farming practices. It also creates knowledge gaps pertaining to the health of animals and workers, mainly due to the lack of time and language barriers. Owners, managers, and herdsmen assign training duties to more experienced employees, which may not promote “best practices” and may perpetuate bad habits. A comprehensive and periodic training program administered by qualified personnel is currently needed and will enhance the sustainability of the dairy industry. Strategic management and employee satisfaction will be achieved through proper training in the employee’s language, typically Spanish. The training needs to address not only current industry standards but also social and cultural differences. An innovative training course was developed following the same structure used by the engineering and construction industries, giving farm workers basic understanding of animal care and handling, cow comfort, and personal safety. The “Dairy Tool Box Talks” program was conducted over a 10-week period with nine sessions according to farm’s various employee work shifts. Bulk milk bacterial counts and somatic cell counts were used to evaluate milk quality on the three dairy farms participating in the program. “Dairy Tool Box Talks” resulted in a general sense of employee satisfaction, significant learning outcomes, and enthusiasm about the topics covered. We conclude this article by highlighting the importance of educational programs aimed at improving overall cross-cultural training. PMID:27471726

  3. A Multi-Week Behavioral Sampling Tag for Sound Effects Studies: Design Trade-Offs and Prototype Evaluation

    DTIC Science & Technology

    2014-09-30

    Only extraction fragments of this report record survive; the full abstract is not recoverable. The recoverable text concerns establishing the performance of on-tag algorithms detecting dives, strokes, clicks, respiration, and gait changes, and a compact per-dive data record (e.g., whale click count, total click count, vocal duration, depths, stroke counts for the descent, bottom, and ascent phases, OBDA, and jerk statistics).

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Dinesh; Thapliyal, Himanshu; Mohammad, Azhar

    Differential Power Analysis (DPA) attacks are considered a main threat when designing cryptographic processors. In cryptographic algorithms like DES and AES, the S-Box is used to obscure the relationship between the keys and the cipher texts. However, the S-Box is prone to DPA attacks due to its high power consumption. In this paper, we implement an energy-efficient 8-bit S-Box circuit using our proposed Symmetric Pass Gate Adiabatic Logic (SPGAL). SPGAL is energy-efficient compared to the existing DPA-resistant adiabatic and non-adiabatic logic families, owing to the reduction of non-adiabatic loss during the evaluate phase of the outputs. Further, the S-Box circuit implemented using SPGAL is resistant to DPA attacks. The results are verified through SPICE simulations in 180nm technology. SPICE simulations show that the SPGAL-based S-Box circuit saves up to 92% and 67% of energy compared to the conventional CMOS and Secured Quasi-Adiabatic Logic (SQAL) based S-Box circuits, respectively. From the simulation results, it is evident that SPGAL-based circuits are energy-efficient compared to the existing DPA-resistant adiabatic and non-adiabatic logic families. In a nutshell, SPGAL-based gates can be used to build secure hardware for low-power portable electronic devices and Internet-of-Things (IoT) based electronic devices.

  5. Zonal Flows and Long-lived Axisymmetric Pressure Bumps in Magnetorotational Turbulence

    NASA Astrophysics Data System (ADS)

    Johansen, A.; Youdin, A.; Klahr, H.

    2009-06-01

    We study the behavior of magnetorotational turbulence in shearing box simulations with a radial and azimuthal extent up to 10 scale heights. Maxwell and Reynolds stresses are found to increase by more than a factor of 2 when increasing the box size beyond two scale heights in the radial direction. Further increase of the box size has little or no effect on the statistical properties of the turbulence. An inverse cascade excites magnetic field structures at the largest scales of the box. The corresponding 10% variation in the Maxwell stress launches a zonal flow of alternating sub- and super-Keplerian velocity. This, in turn, generates a banded density structure in geostrophic balance between pressure and Coriolis forces. We present a simplified model for the appearance of zonal flows, in which stochastic forcing by the magnetic tension on short timescales creates zonal flow structures with lifetimes of several tens of orbits. We experiment with various improved shearing box algorithms to reduce the numerical diffusivity introduced by the supersonic shear flow. While a standard finite difference advection scheme shows signs of a suppression of turbulent activity near the edges of the box, this problem is eliminated by a new method where the Keplerian shear advection is advanced in time by interpolation in Fourier space.

  6. A study on cow comfort and risk for lameness and mastitis in relation to different types of bedding materials.

    PubMed

    van Gastelen, S; Westerlaan, B; Houwers, D J; van Eerdenburg, F J C M

    2011-10-01

    The aim was to obtain data regarding the effects of 4 freestall bedding materials (i.e., box compost, sand, horse manure, and foam mattresses) on cow comfort and risks for lameness and mastitis. The comfort of freestalls was measured by analyzing the way cows entered the stalls, the duration and smoothness of the descent movement, and the duration of the lying bout. The cleanliness of the cows was evaluated on 3 different body parts: (1) udder, (2) flank, and (3) lower rear legs, and the bacteriological counts of the bedding materials were determined. The combination of the cleanliness of the cows and the bacteriological count of the bedding material provided an estimate of the risk to which dairy cows are exposed in terms of intramammary infections. The results of the hock assessment revealed that the percentage of cows with healthy hocks was lower (20.5 ± 6.7), the percentage of cows with both damaged and swollen hocks was higher (26.8 ± 3.2), and the severity of the damaged hock was higher (2.32 ± 0.17) on farms using foam mattresses compared with deep litter materials [i.e., box compost (64.0 ± 10.4, 3.5 ± 4.7, 1.85 ± 0.23, respectively), sand (54.6 ± 8.2, 2.0 ± 2.8, 1.91 ± 0.09, respectively), and horse manure (54.6 ± 4.5, 5.5 ± 5.4, 1.85 ± 0.17, respectively)]. In addition, cows needed more time to lie down (140.2 ± 84.2 s) on farms using foam mattresses compared with the deep litter materials sand and horse manure (sand: 50.1 ± 31.6 s, horse manure: 32.9 ± 0.8 s). Furthermore, the duration of the lying bout was shorter (47.9 ± 7.4 min) on farms using foam mattresses compared to sand (92.0 ± 12.9 min). These results indicate that deep litter materials provide a more comfortable lying surface compared with foam mattresses. The 3 deep litter bedding materials differed in relation to each other in terms of comfort and their estimate of risk to which cows were exposed in terms of intramammary infections [box compost: 17.8 ± 19.4 × 10⁴ cfu/g; sand: 1.2 ± 1.6 × 10⁴ cfu/g; horse manure: 110.5 ± 86.3 × 10⁴ cfu/g]. Box compost had a low gram-negative bacterial count compared with horse manure, and was associated with less hock injury compared with foam mattresses, but did not improve lying behavior (lying descent duration: 75.6 ± 38.8 s, lying bout duration: 46.1 ± 18.5 min). Overall, sand provided the best results, with a comfortable lying surface and a low bacterial count. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Exploiting Small Leakages in Masks to Turn a Second-Order Attack into a First-Order Attack and Improved Rotating Substitution Box Masking with Linear Code Cosets.

    PubMed

    DeTrano, Alexander; Karimi, Naghmeh; Karri, Ramesh; Guo, Xiaofei; Carlet, Claude; Guilley, Sylvain

    2015-01-01

    Masking countermeasures, used to thwart side-channel attacks, have been shown to be vulnerable to mask-extraction attacks. State-of-the-art mask-extraction attacks on the Advanced Encryption Standard (AES) algorithm target S-Box recomputation schemes but have not been applied to scenarios where S-Boxes are precomputed offline. We propose an attack targeting precomputed S-Boxes stored in nonvolatile memory. Our attack targets AES implemented in software protected by a low entropy masking scheme and recovers the masks with 91% success rate. Recovering the secret key requires fewer power traces (in fact, by at least two orders of magnitude) compared to a classical second-order attack. Moreover, we show that this attack remains viable in a noisy environment or with a reduced number of leakage points. Eventually, we specify a method to enhance the countermeasure by selecting a suitable coset of the masks set.

  8. Exploiting Small Leakages in Masks to Turn a Second-Order Attack into a First-Order Attack and Improved Rotating Substitution Box Masking with Linear Code Cosets

    PubMed Central

    DeTrano, Alexander; Karimi, Naghmeh; Karri, Ramesh; Guo, Xiaofei; Carlet, Claude; Guilley, Sylvain

    2015-01-01

    Masking countermeasures, used to thwart side-channel attacks, have been shown to be vulnerable to mask-extraction attacks. State-of-the-art mask-extraction attacks on the Advanced Encryption Standard (AES) algorithm target S-Box recomputation schemes but have not been applied to scenarios where S-Boxes are precomputed offline. We propose an attack targeting precomputed S-Boxes stored in nonvolatile memory. Our attack targets AES implemented in software protected by a low entropy masking scheme and recovers the masks with 91% success rate. Recovering the secret key requires fewer power traces (in fact, by at least two orders of magnitude) compared to a classical second-order attack. Moreover, we show that this attack remains viable in a noisy environment or with a reduced number of leakage points. Eventually, we specify a method to enhance the countermeasure by selecting a suitable coset of the masks set. PMID:26491717

  9. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Where state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which makes it suitable for the classic inverted file structure used for box indexing. The inverted file, which stores each bit-vector together with the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.

  10. Disk-based k-mer counting on a PC

    PubMed Central

    2013-01-01

    Background The k-mer counting problem, which is to build the histogram of occurrences of every k-symbol long substring in a given text, is important for many bioinformatics applications. They include developing de Bruijn graph genome assemblers, fast multiple sequence alignment and repeat detection. Results We propose a simple, yet efficient, parallel disk-based algorithm for counting k-mers. Experiments show that it usually offers the fastest solution to the considered problem, while demanding a relatively small amount of memory. In particular, it is capable of counting the statistics for short-read human genome data, in input gzipped FASTQ file, in less than 40 minutes on a PC with 16 GB of RAM and 6 CPU cores, and for long-read human genome data in less than 70 minutes. On a more powerful machine, using 32 GB of RAM and 32 CPU cores, the tasks are accomplished in less than half the time. No other algorithm for most tested settings of this problem and mammalian-size data can accomplish this task in comparable time. Our solution also belongs to memory-frugal ones; most competitive algorithms cannot efficiently work on a PC with 16 GB of memory for such massive data. Conclusions By making use of cheap disk space and exploiting CPU and I/O parallelism we propose a very competitive k-mer counting procedure, called KMC. Our results suggest that judicious resource management may allow to solve at least some bioinformatics problems with massive data on a commodity personal computer. PMID:23679007
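
    To make the problem statement concrete, the sketch below counts k-mers for a handful of reads entirely in memory; KMC itself replaces this hash map with a disk-based, parallel partitioning scheme so that mammalian-scale data fit on a commodity PC.

        from collections import Counter

        def count_kmers(reads, k):
            """Count every length-k substring over an iterable of read strings."""
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        # The histogram of occurrences (how many distinct k-mers occur once, twice, ...)
        # follows directly from the per-k-mer counts:
        kmer_counts = count_kmers(["ACGTACGT", "CGTACG"], 3)
        occurrence_histogram = Counter(kmer_counts.values())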

  11. Multifractal analysis of white matter structural changes on 3D magnetic resonance imaging between normal aging and early Alzheimer’s disease

    NASA Astrophysics Data System (ADS)

    Ni, Huang-Jing; Zhou, Lu-Ping; Zeng, Peng; Huang, Xiao-Lin; Liu, Hong-Xing; Ning, Xin-Bao

    2015-07-01

    Applications of multifractal analysis to white matter structural changes on magnetic resonance imaging (MRI) have recently received increasing attention. Although some progress has been made, there has been no evident study applying multifractal analysis to evaluate white matter structural changes on MRI for Alzheimer's disease (AD) research. In this paper, to explore multifractal analysis of white matter structural changes on 3D MRI volumes between normal aging and early AD, we not only extend the traditional box-counting multifractal analysis (BCMA) to the 3D case, but also propose a modified integer-ratio-based BCMA (IRBCMA) algorithm to compensate for the rigid division rule in BCMA. We verify multifractal characteristics in 3D white matter MRI volumes. In addition to the previously well studied multifractal feature Δα, we also demonstrate Δf as an alternative and effective multifractal feature to distinguish NC from AD subjects. Both Δα and Δf are found to have a strong positive correlation with the clinical MMSE scores, with statistical significance. Moreover, the proposed IRBCMA can be an alternative and more accurate algorithm for 3D volume analysis. Our findings highlight the potential usefulness of multifractal analysis, which may contribute to clarifying some aspects of the etiology of AD through the detection of structural changes in white matter. Project supported by the National Natural Science Foundation of China (Grant No. 61271079), the Vice Chancellor Research Grant in University of Wollongong, and the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.

  12. Investigation of the quality of stored red blood cells after simulated air drop in the maritime environment.

    PubMed

    Meli, Athinoula; Hancock, Vicky; Doughty, Heidi; Smedley, Steve; Cardigan, Rebecca; Wiltshire, Michael

    2018-02-01

    Maritime medical capability may be compromised by the difficulty of blood resupply. Air-dropped red blood cells (RBCs) are a possible mitigation factor. This study set out to evaluate RBC storage variables after a simulated parachute air drop into the sea, as limited data exist. The air load construction for the air drop of blood was subject to static drop assessment to simulate a worst-case parachute drop scenario. One control and two test Golden Hour shipping containers were each packaged with 10 RBC units. The control box was not dropped; Test Boxes 1 and 2 were further reinforced with waterproof boxes and underwent a simulated air drop on Day 7 or Day 8 postdonation, respectively. One day after the drop and once a week thereafter until Day 43 of storage, RBCs from each box were sampled and tested for full blood counts, hemolysis, adenosine triphosphate, 2,3-diphosphoglycerate, pH, extracellular potassium, glucose, lactate, deformability, and RBC microvesicles. The packaging configuration completed the air drop with no water ingress or physical damage. All units met UK specifications for volume, hemoglobin, and hemolysis. There were no significant differences for any of the variables studied between RBCs in the control box compared to RBCs in Test Boxes 1 and 2 combined over storage. The test proved that the packaging solution and the impact of a maritime air drop as performed in this study, on Day 7 or Day 8 postdonation, did not affect the in vitro quality of RBCs in SAGM over storage for 35 days. © 2017 AABB.

  13. Mortality of riparian box elder from sediment mobilization and extended inundation

    USGS Publications Warehouse

    Friedman, Jonathan M.; Auble, Gregor T.

    1999-01-01

    To explore how high flows limit the streamward extent of riparian vegetation we quantified the effects of sediment mobilization and extended inundation on box elder (Acer negundo) saplings along the cobble-bed Gunnison River in Black Canyon of the Gunnison National Monument, Colorado, USA. We counted and aged box elders in 144 plots of 37.2 m2, and combined a hydraulic model with the hydrologic record to determine the maximum shear stress and number of growing-season days inundated for each plot in each year of the record. We quantified the effects of the two mortality factors by calculating the extreme values survived during the lifetime of trees sampled in 1994 and by recounting box elders in the plots following a high flow in 1995. Both mortality factors can be modeled as threshold functions; box elders are killed either by inundation for more than 85 days during the growing season or by shear stress that exceeds the critical value for mobilization of the underlying sediment particles. Construction of upstream reservoirs in the 1960s and 1970s reduced the proportion of the canyon bottom annually cleared of box elders by high flows. Furthermore, because the dams decreased the magnitude of high flows more than their duration, flow regulation has decreased the importance of sediment mobilization relative to extended inundation. We use the threshold functions and cross-section data to develop a response surface predicting the proportion of the canyon bottom cleared at any combination of flow magnitude and duration. This response surface allows vegetation removal to be incorporated into quantitative multi-objective water management decisions.

  14. Speeding Up the Bilateral Filter: A Joint Acceleration Way.

    PubMed

    Dai, Longquan; Yuan, Mengke; Zhang, Xiaopeng

    2016-06-01

    Computational complexity of the brute-force implementation of the bilateral filter (BF) depends on its filter kernel size. To achieve a constant-time BF whose complexity is irrelevant to the kernel size, many techniques have been proposed, such as 2D box filtering, dimension promotion, and the shiftability property. Although each of the above techniques suffers from accuracy and efficiency problems, previous algorithm designers tended to take only one of them when assembling fast implementations, due to the hardness of combining them together. Hence, no joint exploitation of these techniques has been proposed to construct a new cutting-edge implementation that solves these problems. Jointly employing five techniques: kernel truncation, best N-term approximation, as well as the previous 2D box filtering, dimension promotion, and shiftability property, we propose a unified framework to transform BF with arbitrary spatial and range kernels into a set of 3D box filters that can be computed in linear time. To the best of our knowledge, our algorithm is the first method that can integrate all these acceleration techniques and, therefore, can draw upon one another's strong points to overcome deficiencies. The strength of our method has been corroborated by several carefully designed experiments. In particular, the filtering accuracy is significantly improved without sacrificing the efficiency at running time.

  15. A semi-automated technique for labeling and counting of apoptosing retinal cells

    PubMed Central

    2014-01-01

    Background Retinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer’s disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability. Results A semi-automated algorithm was developed which enabled automated identification of apoptosing RGCs labeled with fluorescent Annexin-5 on DARC images. Automated analysis included a pre-processing stage involving local-luminance and local-contrast “gain control”, a “blob analysis” step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined ‘size’ and ‘aspect’ ratio criteria. Apoptosing retinal cells were counted by 3 masked operators, generating ‘Gold-standard’ mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson’s correlation coefficient 0.978 (p < 0.001), R Squared = 0.956. The Intraclass correlation coefficient was 0.986 (95% CI 0.977-0.991, p < 0.001), and Cronbach’s alpha measure of consistency = 0.986, confirming excellent correlation and consistency. No significant difference (p = 0.922, 95% CI: −5.53 to 6.10) was detected between the cell counts of the two methods. Conclusions The novel automated algorithm enabled accurate quantification of apoptosing RGCs that is highly comparable to manual counting, and appears to minimise operator-bias, whilst being both fast and reproducible. This may prove to be a valuable method of quantifying apoptosing retinal cells, with particular relevance to translation in the clinic, where a Phase I clinical trial of DARC in glaucoma patients is due to start shortly. PMID:24902592
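
    The "blob analysis" and size/aspect-ratio exclusion steps described above can be sketched generically as below, using scipy.ndimage connected-component labelling as a stand-in; the thresholds and function names are illustrative assumptions, not the study's parameters.

        import numpy as np
        from scipy import ndimage

        def count_blobs(image, intensity_threshold, min_area=20, max_area=400, max_aspect=2.5):
            """Count bright, roughly circular blobs in a preprocessed fluorescence image."""
            labels, n_labels = ndimage.label(image > intensity_threshold)
            kept = 0
            for idx, box in enumerate(ndimage.find_objects(labels), start=1):
                region = labels[box] == idx
                area = int(region.sum())
                height, width = region.shape
                aspect = max(height, width) / max(1, min(height, width))
                # Exclude vessels and noise via combined size and aspect-ratio criteria.
                if min_area <= area <= max_area and aspect <= max_aspect:
                    kept += 1
            return kept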

  16. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  17. An algorithm to count the number of repeated patient data entries with B tree.

    PubMed

    Okada, M; Okada, M

    1985-04-01

    An algorithm to obtain the number of different values that appear a specified number of times in a given data field of a given data file is presented. Basically, a well-known B-tree structure is employed in this study. Some modifications were made to the basic B-tree algorithm. The first modification is to allow a data item whose values are not necessarily distinct from one record to another to be used as a primary key. When a key value is inserted, the number of previous appearances is counted. At the end of all the insertions, the number of key values which are unique in the tree, the number of key values which appear twice, three times, and so forth are obtained. This algorithm is especially powerful for large files in disk storage.
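
    The quantity being computed can be illustrated with an in-memory stand-in (a hash-based counter rather than the paper's modified B-tree, which is what makes the approach viable for large disk-resident files):

        from collections import Counter

        def multiplicity_histogram(values):
            """How many distinct values appear exactly once, twice, three times, ..."""
            appearances = Counter(values)          # value -> number of records containing it
            return Counter(appearances.values())   # multiplicity -> number of such values

        # e.g. patient IDs ["A", "B", "A", "C", "B", "A"] -> {3: 1, 2: 1, 1: 1}:
        # one ID appears three times, one appears twice, and one is unique.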

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Brian Phillip

    The purpose of this document is to describe the statistical modeling effort for gas concentrations in WIPP storage containers. The concentration (in ppm) of CO2 in the headspace volume of standard waste box (SWB) 68685 is shown. A Bayesian approach and an adaptive Metropolis-Hastings algorithm were used.

  19. Approach for counting vehicles in congested traffic flow

    NASA Astrophysics Data System (ADS)

    Tan, Xiaojun; Li, Jun; Liu, Wei

    2005-02-01

    More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper presents an approach to address the problem. The proposed approach consists of three main procedures. First, a new background subtraction algorithm is applied, with the aim of segmenting moving objects from an illumination-variant background. Second, object tracking is performed using the CONDENSATION algorithm, which avoids the problem of matching vehicles in successive frames. Third, an inspection procedure is executed to count the vehicles: when a bus initially occludes a car and the bus moves away a few frames later, the car appears in the scene, and the inspection procedure should find the "new" car and add it as a tracked object.
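
    As a generic illustration of the background subtraction step (a simple running-average background model as a stand-in; the paper's own algorithm and parameters are not reproduced here):

        import numpy as np

        class RunningAverageBackground:
            """Adaptive background model: a slow update tracks illumination changes."""

            def __init__(self, first_frame, alpha=0.02, threshold=30):
                self.background = first_frame.astype(float)
                self.alpha = alpha            # adaptation rate toward illumination drift
                self.threshold = threshold    # grey-level difference that marks motion

            def apply(self, frame):
                frame = frame.astype(float)
                moving = np.abs(frame - self.background) > self.threshold
                # Update the background only where no motion was detected.
                self.background = np.where(moving, self.background,
                                           (1 - self.alpha) * self.background + self.alpha * frame)
                return moving                 # boolean foreground mask for this frame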

  20. Logical account of a terminological tool

    NASA Astrophysics Data System (ADS)

    Bresciani, Paolo

    1991-03-01

    YAK (Yet Another Krapfen) is a hybrid knowledge representation environment following the tradition of KL-ONE and KRYPTON. In its terminological box (TBOX), concepts and roles are described by means of a language called KFL which, although inspired by FL-, captures a different set of descriptions, in particular allowing the formulation of structured roles, intended to be more adequate for the representational goal. KFL turns out to have a valid and complete calculus for the notion of subsumption, and a tractable algorithm that realizes it is available. In the present paper it is shown how the semantics of a sufficiently significant subset of KFL can be described in terms of standard first-order logic semantics. It is thus possible to develop a notion of 'relation' between KFL and a full first-order logic language formulated ad hoc and usable as the formalization of an assertional box (ABOX). We then use this notion to justify the use of the terminological classification algorithm of the TBOX of YAK as part of the deduction machinery of the ABOX. In this way, the whole hybrid environment can take real and consistent advantage both of the TBOX, with its tractable classification algorithm, and of the ABOX, with its wider expressive capability.

  1. A novel algorithm for thermal image encryption.

    PubMed

    Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen

    2018-04-16

    Thermal images play a vital role at nuclear plants, power stations, forensic labs, biological research facilities, and in petroleum product extraction, so the security of thermal images is very important. Image data have some unique features, such as intensity, contrast, homogeneity, entropy, and correlation among pixels, which make image encryption somewhat trickier than other encryption tasks; with conventional image encryption schemes it is normally hard to handle these features. Therefore, cryptographers have paid attention to some attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes built from the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as a secret key to scramble the plain image. Then, the plain image is encrypted by the method generated from the substitution boxes and the Chebyshev map. By this process, we obtain a ciphertext image that is thoroughly confused and diffused. The outcomes of well-known experiments, key sensitivity tests, and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
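
    One ingredient above can be sketched generically: a Chebyshev chaotic sequence derived from a secret key (map degree and seed) used to permute the pixel order of a flattened image. The substitution (S-box) stage is omitted, and the parameter values are illustrative placeholders, not the paper's key schedule.

        import math
        import numpy as np

        def chebyshev_sequence(length, degree=6, x0=0.3):
            """x_{n+1} = cos(degree * arccos(x_n)) on [-1, 1]; (degree, x0) act as the key."""
            seq = np.empty(length)
            x = x0
            for i in range(length):
                x = math.cos(degree * math.acos(x))
                seq[i] = x
            return seq

        def permute_pixels(image, degree=6, x0=0.3):
            """Key-dependent permutation of the flattened pixel order (invertible via the order array)."""
            flat = image.reshape(-1)
            order = np.argsort(chebyshev_sequence(flat.size, degree, x0))
            return flat[order].reshape(image.shape), order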

  2. Exploring the Thermal Limits of IR-Based Automatic Whale Detection (ETAW)

    DTIC Science & Technology

    2013-09-30

    ... the northward humpback whale migration, which occurs annually rather close to shore near North Stradbroke Island, Queensland, Australia ... successive northward humpback whale migrations and collecting concurrent independent (double-blind) visual observations (modified cue counting) ... Exploring the Thermal Limits of IR-Based Automatic Whale Detection (ETAW). Principal investigator: Olaf Boebel, Bremerhaven, Germany.

  3. An Adaptive Clustering Approach Based on Minimum Travel Route Planning for Wireless Sensor Networks with a Mobile Sink.

    PubMed

    Tang, Jiqiang; Yang, Wu; Zhu, Lingyun; Wang, Dong; Feng, Xin

    2017-04-26

    In recent years, Wireless Sensor Networks with a Mobile Sink (WSN-MS) have been an active research topic due to the widespread use of mobile devices. However, how to get the balance between data delivery latency and energy consumption becomes a key issue of WSN-MS. In this paper, we study the clustering approach by jointly considering the Route planning for mobile sink and Clustering Problem (RCP) for static sensor nodes. We solve the RCP problem by using the minimum travel route clustering approach, which applies the minimum travel route of the mobile sink to guide the clustering process. We formulate the RCP problem as an Integer Non-Linear Programming (INLP) problem to shorten the travel route of the mobile sink under three constraints: the communication hops constraint, the travel route constraint and the loop avoidance constraint. We then propose an Imprecise Induction Algorithm (IIA) based on the property that the solution with a small hop count is more feasible than that with a large hop count. The IIA algorithm includes three processes: initializing travel route planning with a Traveling Salesman Problem (TSP) algorithm, transforming the cluster head to a cluster member and transforming the cluster member to a cluster head. Extensive experimental results show that the IIA algorithm could automatically adjust cluster heads according to the maximum hops parameter and plan a shorter travel route for the mobile sink. Compared with the Shortest Path Tree-based Data-Gathering Algorithm (SPT-DGA), the IIA algorithm has the characteristics of shorter route length, smaller cluster head count and faster convergence rate.

  4. pyblocxs: Bayesian Low-Counts X-ray Spectral Analysis in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, A.; Kashyap, V.; Refsdal, B.; van Dyk, D.; Connors, A.; Park, T.

    2011-07-01

    Typical X-ray spectra have low counts and should be modeled using the Poisson distribution. However, the χ2 statistic is often applied instead, with the data assumed to follow a Gaussian distribution, and various weightings of the statistic or binnings of the data are performed to overcome the low-count issues. Such modifications, however, introduce biases and/or a loss of information. Standard modeling packages such as XSPEC and Sherpa provide the Poisson likelihood and allow computation of rudimentary MCMC chains, but so far do not allow a full Bayesian model to be set up. We have implemented a sophisticated Bayesian MCMC-based algorithm to carry out spectral fitting of low-count sources in the Sherpa environment. The code is a Python extension to Sherpa and allows a predefined Sherpa model to be fit to high-energy X-ray spectral data and other generic data. We present the algorithm and discuss several issues related to the implementation, including flexible definition of priors and allowing for variations in the calibration information.
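
    The low-count argument can be made concrete with the Poisson log-likelihood that such packages expose, often in the form of the Cash statistic. The sketch below is a generic Python illustration (not pyblocxs or Sherpa code); the spectral shape and count levels are invented for the example.

      import numpy as np

      def cash_statistic(counts, model_counts):
          """Cash fit statistic: -2 ln(Poisson likelihood) up to a data-only constant.
          counts       : observed counts per spectral bin
          model_counts : predicted counts per bin from the source model (must be > 0)
          """
          m = np.asarray(model_counts, dtype=float)
          d = np.asarray(counts, dtype=float)
          return 2.0 * np.sum(m - d * np.log(m))

      # Toy example: fit the normalisation of a fixed spectral shape by brute force
      shape = np.exp(-np.linspace(0.5, 7.0, 64))        # arbitrary assumed spectral shape
      data = np.random.poisson(3.0 * shape)             # simulated low-count spectrum
      norms = np.linspace(0.5, 6.0, 200)
      best = norms[np.argmin([cash_statistic(data, a * shape) for a in norms])]

    Minimising this statistic is equivalent to maximising the Poisson likelihood, with no binning or Gaussian weighting required.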

  5. Crystal-structure prediction via the Floppy-Box Monte Carlo algorithm: Method and application to hard (non)convex particles

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
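
    As a schematic of the variable-box trial move at the heart of such a method (a minimal sketch under stated assumptions, not the authors' implementation, and omitting particle moves, lattice reduction and the overlap algorithms discussed above), one NPT-like box-deformation step for hard particles in Python might look as follows; the overlap test is left as a user-supplied callback.

      import numpy as np

      def floppy_box_move(box, frac_coords, overlaps, beta_p, max_strain=0.02, rng=np.random):
          """One schematic variable-box (NPT-like) trial move for hard particles.
          box        : 3x3 matrix of lattice vectors (rows); frac_coords: (N, 3) fractional positions
          overlaps   : callable(box, frac_coords) -> True if any particles overlap (placeholder)
          beta_p     : reduced pressure beta * P
          """
          n = len(frac_coords)
          new_box = box.copy()
          i, j = rng.randint(3), rng.randint(3)
          new_box[i, j] += (rng.rand() - 0.5) * max_strain * np.abs(box).max()  # deform one element
          if overlaps(new_box, frac_coords):               # hard-core rejection
              return box
          v_old = abs(np.linalg.det(box))
          v_new = abs(np.linalg.det(new_box))
          acc = np.exp(-beta_p * (v_new - v_old) + n * np.log(v_new / v_old))   # NPT acceptance
          return new_box if rng.rand() < min(1.0, acc) else box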

  6. Strength in Numbers: Using Big Data to Simplify Sentiment Classification.

    PubMed

    Filippas, Apostolos; Lappas, Theodoros

    2017-09-01

    Sentiment classification, the task of assigning a positive or negative label to a text segment, is a key component of mainstream applications such as reputation monitoring, sentiment summarization, and item recommendation. Even though the performance of sentiment classification methods has steadily improved over time, their ever-increasing complexity renders them comprehensible by only a shrinking minority of expert practitioners. For all others, such highly complex methods are black-box predictors that are hard to tune and even harder to justify to decision makers. Motivated by these shortcomings, we introduce BigCounter: a new algorithm for sentiment classification that substitutes algorithmic complexity with Big Data. Our algorithm combines standard data structures with statistical testing to deliver accurate and interpretable predictions. It is also parameter free and suitable for use virtually "out of the box," which makes it appealing for organizations wanting to leverage their troves of unstructured data without incurring the significant expense of creating in-house teams of data scientists. Finally, BigCounter's efficient and parallelizable design makes it applicable to very large data sets. We apply our method on such data sets toward a study on the limits of Big Data for sentiment classification. Our study finds that, after a certain point, predictive performance tends to converge and additional data have little benefit. Our algorithmic design and findings provide the foundations for future research on the data-over-computation paradigm for classification problems.
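
    BigCounter's exact data structures and statistical tests are not reproduced here. As a simplified stand-in that conveys the data-over-computation idea, the Python sketch below scores a text by summing smoothed log-odds of word counts accumulated from labelled training documents; the smoothing constant and whitespace tokenisation are assumptions for the example.

      from collections import Counter
      import math

      def train_counts(docs, labels):
          """Count word occurrences in positive (label 1) and negative (label 0) documents."""
          pos, neg = Counter(), Counter()
          for text, label in zip(docs, labels):
              (pos if label == 1 else neg).update(text.lower().split())
          return pos, neg

      def classify(text, pos, neg, alpha=1.0):
          """Sum smoothed log-odds of each word; positive total -> positive label."""
          n_pos, n_neg = sum(pos.values()), sum(neg.values())
          score = 0.0
          for w in text.lower().split():
              p = (pos[w] + alpha) / (n_pos + alpha)
              q = (neg[w] + alpha) / (n_neg + alpha)
              score += math.log(p / q)
          return 1 if score > 0 else 0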

  7. Adaptive block online learning target tracking based on super pixel segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Yue; Li, Jianzeng

    2018-04-01

    Video target tracking has made considerable progress through sustained research, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation. First, the selected region is divided using the simple linear iterative clustering (SLIC) algorithm; the area is then partitioned into blocks with an improved density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm ignores sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete the target tracking. The experimental results show that, compared with current mainstream algorithms, our algorithm works effectively under occlusion, rotation change, scale change and many other challenges in target tracking.
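
    A minimal sketch of the first two steps (superpixel segmentation followed by density-based grouping) is given below in Python, assuming scikit-image's SLIC and scikit-learn's standard DBSCAN rather than the authors' improved variant; the feature choice and parameter values are illustrative.

      import numpy as np
      from skimage.segmentation import slic
      from sklearn.cluster import DBSCAN

      def superpixel_blocks(region_rgb, n_segments=200, eps=0.15, min_samples=3):
          """Group SLIC superpixels of the selected region into spatial/colour blocks."""
          labels = slic(region_rgb, n_segments=n_segments, compactness=10, start_label=0)
          feats = []
          for s in np.unique(labels):
              mask = labels == s
              rows, cols = np.nonzero(mask)
              mean_rgb = region_rgb[mask].mean(axis=0) / 255.0      # colour feature (uint8 input)
              centroid = [rows.mean() / region_rgb.shape[0],        # normalised position feature
                          cols.mean() / region_rgb.shape[1]]
              feats.append(np.concatenate([centroid, mean_rgb]))
          blocks = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.array(feats))
          return labels, blocks   # blocks[i] is the block id of superpixel i (-1 = noise)

    Classifier training and tracking of each resulting block are separate steps not shown here.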

  8. Automated Detection of Craters in Martian Satellite Imagery Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Norman, C. J.; Paxman, J.; Benedix, G. K.; Tan, T.; Bland, P. A.; Towner, M.

    2018-04-01

    Crater counting is used to determine the surface age of planets. We propose improvements to Martian Crater Detection Algorithms by implementing an end-to-end detection approach, with the possibility of scaling the algorithm planet-wide.

  9. On the Optimum Architecture of the Biologically Inspired Hierarchical Temporal Memory Model Applied to the Hand-Written Digit Recognition

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Bajla, Ivan

    2010-01-01

    In this paper we describe the basic functions of the Hierarchical Temporal Memory (HTM) network, based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of the paper is a systematic exploration of how to optimize the important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using a permutation test that employs a randomization distribution of the training and testing data. Based on the notion of homogeneous usage of input image pixels, a methodology for HTM parameter optimization is proposed. In order to study the effects of two substantial architecture parameters, the patch size and the overlap, in more detail, we restricted ourselves to single-level HTM networks. A novel method for constructing the training sequences by ordering series of static images is developed, and a novel method for estimating the parameter maxDist based on the box-counting method is proposed. The parameter sigma of the inference Gaussian is optimized by maximizing the entropy of the belief distribution. Both optimization algorithms can equally be applied to multi-level HTM networks. The influences of the parameters transitionMemory and requestedGroupCount on HTM network performance have also been explored. Altogether, we investigated 2736 different HTM network configurations, and the obtained classification accuracies are benchmarked against the published results of several conventional classifiers.

  10. Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants.

    PubMed

    Minervini, Massimo; Giuffrida, Mario V; Perata, Pierdomenico; Tsaftaris, Sotirios A

    2017-04-01

    Phenotyping is important for understanding plant biology, but current solutions are costly, not versatile, or difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides an easy-to-install and easy-to-maintain platform, offering an out-of-the-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need for depth (3D). Our affordable device (~€200) can be deployed in growth chambers or greenhouses to acquire optical 2D images of up to approximately 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher-fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available ( http://phenotiki.com). © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  11. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  12. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
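
    For reference, the multiplicative ML-EM update that OSEM partitions into subsets (the effective iteration number above being iterations × subsets) can be written compactly. The Python sketch below is a generic sinogram-domain illustration, not the MOLAR list-mode implementation, and assumes an explicit system matrix is available.

      import numpy as np

      def mlem(system_matrix, counts, n_iter=20):
          """Generic ML-EM; OSEM applies the same update over subsets of the rows.
          system_matrix : (n_bins, n_voxels) array of coefficients a_ij
          counts        : (n_bins,) measured counts y_i
          """
          a = np.asarray(system_matrix, dtype=float)
          y = np.asarray(counts, dtype=float)
          sens = a.sum(axis=0)                        # sensitivity image, sum_i a_ij
          lam = np.ones(a.shape[1])                   # uniform initial image
          for _ in range(n_iter):
              proj = a @ lam                          # forward projection
              ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
              lam *= (a.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative EM update
          return lam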

  13. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.

  14. An information dimension of weighted complex networks

    NASA Astrophysics Data System (ADS)

    Wen, Tao; Jiang, Wen

    2018-07-01

    Fractality and self-similarity are important properties of complex networks, and the information dimension is a useful dimension for revealing them. In this paper, an information dimension is proposed for weighted complex networks. Based on the box-covering algorithm for weighted complex networks (BCANw), the proposed method can deal with the weighted complex networks that appear frequently in the real world, and it captures the influence of the number of nodes in each box on the information dimension. To show the wide scope of the information dimension, some applications are illustrated, indicating that the proposed method is effective and feasible.
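
    For reference, the classical information dimension that such network measures generalize is given below in LaTeX, with l_B the box size and p_i the fraction of the measure (e.g. of nodes, or of node strength in the weighted case) captured by box i; the exact weighted-network variant is as defined in the paper.

      D_I \;=\; \lim_{l_B \to 0} \frac{-\sum_i p_i(l_B)\,\ln p_i(l_B)}{\ln\!\left(1/l_B\right)}

    In practice, for a discrete network the limit is replaced by the slope of the numerator against ln(1/l_B) over a range of box sizes produced by the box-covering algorithm.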

  15. Current automated 3D cell detection methods are not a suitable replacement for manual stereologic cell counting

    PubMed Central

    Schmitz, Christoph; Eastwood, Brian S.; Tappan, Susan J.; Glaser, Jack R.; Peterson, Daniel A.; Hof, Patrick R.

    2014-01-01

    Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) “cell counting” approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38 and 99% and false-positive rates from 3.6 to 82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections. PMID:24847213

  16. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma-ray energy with scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photodetector pixels. We tested the algorithm on a highly integrated MRI-compatible small-animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG-based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.

  17. Microbiological testing of raw, boxed beef in the context of hazard analysis critical control point at a high-line-speed abattoir.

    PubMed

    Jericho, K W; Kozub, G C; Gannon, V P; Taylor, C M

    2000-12-01

    The efficacy of cold storage of raw, bagged, boxed beef was assessed microbiologically at a high-line-speed abattoir (270 carcasses per h). At the time of this study, plant management was in the process of creating a hazard analysis critical control point plan for all processes. Aerobic bacteria, coliforms, and type 1 Escherichia coli were enumerated (5 by 5-cm excision samples, hydrophobic grid membrane filter technology) before and after cold storage of this final product produced at six fabrication tables. In addition, the temperature-function integration technique (TFIT) was used to calculate the potential number of generations of E. coli during the first 24 or 48 h of storage of the boxed beef. Based on the temperature histories (total of 60 boxes, resulting from 12 product cuts, five boxes from each of two fabrication tables on each of 6 sampling days, and six types of fabrication tables), TFIT did not predict any growth of E. coli (with or without lag) for the test period. This was verified by E. coli mean log10 counts per cm2 of 0.65 and 0.42 (P > 0.05), determined by culture before and after the cooling process, respectively. Counts of aerobic bacteria and coliforms were significantly reduced (P < 0.001 and P < 0.05, respectively) during the initial period of the cooling process. There were significant microbiological differences (P < 0.05) between table-cut units.

  18. Detecting free-living steps and walking bouts: validating an algorithm for macro gait analysis.

    PubMed

    Hickey, Aodhán; Del Din, Silvia; Rochester, Lynn; Godfrey, Alan

    2017-01-01

    Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower-back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho  ⩾  0.941) and absolute (ICC (2,1)   ⩾  0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho  ⩾  0.909) and absolute agreement (ICC (2,1)   ⩾  0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.

  19. A class of Box-Cox transformation models for recurrent event data.

    PubMed

    Sun, Liuquan; Tong, Xingwei; Zhou, Xian

    2011-04-01

    In this article, we propose a class of Box-Cox transformation models for recurrent event data, which includes the proportional means models as special cases. The new model offers great flexibility in formulating the effects of covariates on the mean functions of counting processes while leaving the stochastic structure completely unspecified. For inference on the proposed models, we apply a profile pseudo-partial likelihood method to estimate the model parameters via estimating equation approaches, establish large-sample properties of the estimators, and examine their performance in moderate-sized samples through simulation studies. In addition, some graphical and numerical procedures are presented for model checking. An application to a set of multiple-infection data taken from a clinical study on chronic granulomatous disease (CGD) is also illustrated.
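
    For reference, the Box-Cox family of transformations after which the model class is named is given below in LaTeX; exactly how it enters the mean function of the counting process is as specified in the paper, with the proportional means model recovered for a particular choice of λ.

      g_{\lambda}(x) \;=\;
      \begin{cases}
        \dfrac{x^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[6pt]
        \log x, & \lambda = 0.
      \end{cases}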

  20. Analysis of counting data: Development of the SATLAS Python package

    NASA Astrophysics Data System (ADS)

    Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.

    2018-01-01

    For the analysis of low-statistics counting experiments, a traditional nonlinear least-squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms suited for analyzing low- as well as high-statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine-structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.
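
    To illustrate why the least-squares assumption breaks down at low counts, the Python sketch below fits a toy peak model by minimizing the Poisson negative log-likelihood with SciPy; it does not use the SATLAS API, and the peak shape, starting values and simulated data are invented for the example.

      import numpy as np
      from scipy.optimize import minimize

      def poisson_nll(params, x, counts, model):
          """Negative Poisson log-likelihood (dropping the data-only log-factorial term)."""
          mu = model(x, *params)
          if np.any(mu <= 0):
              return 1e300                      # penalise invalid (non-positive) rates
          return np.sum(mu - counts * np.log(mu))

      # Toy model: a single Gaussian peak on a flat background (not a real hyperfine model)
      def peak(x, amp, centre, width, bkg):
          return bkg + amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

      x = np.linspace(-50, 50, 201)
      counts = np.random.poisson(peak(x, 30, 5.0, 8.0, 2.0))
      fit = minimize(poisson_nll, x0=[20, 0.0, 10.0, 1.0], args=(x, counts, peak),
                     method="Nelder-Mead")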

  1. Automatic choroid cells segmentation and counting in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Fei, Jianjun; Zhu, Weifang; Shi, Fei; Xiang, Dehui; Lin, Xiao; Yang, Lei; Chen, Xinjian

    2016-03-01

    In this paper, we propose a method to automatically segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopic images, based on shape classification, bottleneck detection and an accelerated Dijkstra algorithm. The proposed method includes four main steps. First, a thresholding filter and morphological operations are applied to reduce noise. Second, a shape classifier is used to decide whether a connected component needs to be segmented further; in this step, an AdaBoost classifier is applied to a set of shape features. Third, the bottleneck positions are found based on the contours of the connected components. Finally, cell segmentation and counting are completed with the accelerated Dijkstra algorithm, using the gradient information between the bottleneck positions. The results show the feasibility and efficiency of the proposed method.
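
    Only the preprocessing and a naive count are sketched below in Python, using SciPy's standard morphology and labelling tools; the shape classifier, bottleneck detection and Dijkstra-based splitting of touching cells described above are not reproduced, and the threshold and size cut-off are assumptions.

      import numpy as np
      from scipy import ndimage

      def count_cells(gray, threshold=None, min_size=30):
          """Naive fluorescence cell count: threshold, clean up, label connected components."""
          img = np.asarray(gray, dtype=float)
          if threshold is None:
              threshold = img.mean() + 2 * img.std()             # crude global threshold
          binary = img > threshold
          binary = ndimage.binary_opening(binary, iterations=2)   # remove speckle noise
          labels, n = ndimage.label(binary)                       # connected components
          sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
          keep = np.flatnonzero(sizes >= min_size) + 1            # discard tiny fragments
          return len(keep), np.isin(labels, keep) * labels

    Touching cells appear as a single component here, which is exactly the case the paper's bottleneck and Dijkstra steps are designed to handle.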

  2. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kagie, Matthew J.; Lanterman, Aaron D.

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.

  3. Evaluation and analysis of SEASAT-A Scanning Multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    An evaluation of the versions of the SEASAT-A SMMR antenna pattern correction (APC) algorithm is presented. Two efforts are focused upon in the APC evaluation: the intercomparison of the interim, box, cross, and nominal APC modes; and the development of software to facilitate the creation of matched spacecraft and surface truth data sets which are located together in time and space. The problems discovered in earlier versions of the APC, now corrected, are discussed.

  4. Continued Evaluation of Gear Condition Indicator Performance on Rotorcraft Fleet

    NASA Technical Reports Server (NTRS)

    Delgado, Irebert R.; Dempsey, Paula J.; Antolick, Lance J.; Wade, Daniel R.

    2013-01-01

    This paper details analyses of condition indicator performance for the helicopter nose gearbox within the U.S. Army's Condition-Based Maintenance Program. Ten nose gearbox data sets underwent two specific analyses. A mean condition indicator level analysis was performed where condition indicator performance was based on a 'batting average' measured before and after part replacement. Two specific condition indicators, Diagnostic Algorithm 1 and Sideband Index, were found to perform well for the data sets studied. A condition indicator versus gear wear analysis was also performed, where gear wear photographs and descriptions from Army tear-down analyses were categorized based on ANSI/AGMA 1010-E95 standards. Seven nose gearbox data sets were analyzed and correlated with condition indicators Diagnostic Algorithm 1 and Sideband Index. Both were found to be most responsive to gear wear cases of micropitting and spalling. Input pinion nose gear box condition indicators were found to be more responsive to part replacement during overhaul than their corresponding output gear nose gear box condition indicators.

  5. Neutron capture cross-section measurements for 238U between 0.4 and 1.4 MeV

    NASA Astrophysics Data System (ADS)

    Krishichayan, Fnu; Finch, S. W.; Howell, C. R.; Tonchev, A. P.; Tornow, W.

    2017-09-01

    Neutron-induced radiative-capture cross-section data of 238U are crucial for fundamental nuclear physics as well as for Stewardship Science, for advanced-fuel-cycle calculations, and for nuclear astrophysics. Based on different techniques, there are a large number of 238U(n, γ) 239U cross-section data available in the literature. However, there is a lack of systematic and consistent measurements in the 0.1 to 3.0 MeV energy range. The goal of the neutron-capture project at TUNL is to provide accurate 238U(n, γ) 239U cross-section data in this energy range. The 238U samples, sandwiched between gold foils of the same size, were irradiated for 8-14 hours with monoenergetic neutrons. To avoid any contribution from thermal neutrons, the 238U and 197Au targets were placed inside a thin-walled pill-box made of 238U. Finally, the whole pill-box was wrapped in a gold foil as well. After irradiation, the samples were gamma-counted at TUNL's low-background counting facility using high-efficiency HPGe detectors. The 197Au monitor foils were used to calculate the neutron flux. The experimental technique and 238U(n, γ) 239U cross-section results at 6 energies will be discussed during the meeting.

  6. Fractal analysis as a potential tool for surface morphology of thin films

    NASA Astrophysics Data System (ADS)

    Soumya, S.; Swapna, M. S.; Raj, Vimal; Mahadevan Pillai, V. P.; Sankararaman, S.

    2017-12-01

    Fractal geometry, developed by Mandelbrot, has emerged as a potential tool for analyzing complex systems in diverse fields of science, social science, and technology. Self-similar objects having the same details at different scales are referred to as fractals and are analyzed using the mathematics of non-Euclidean geometry. The present work attempts to use the fractal dimension for surface characterization by atomic force microscopy (AFM). Taking AFM images of zinc sulphide (ZnS) thin films prepared by the pulsed laser deposition (PLD) technique at different annealing temperatures, the effects of annealing temperature and surface roughness on the fractal dimension are studied; both show a strong correlation with the fractal dimension. From the regression equation set, the surface roughness at a given annealing temperature can be calculated from the fractal dimension. The AFM images are processed using Photoshop, and the fractal dimension is calculated by the box-counting method. The fractal dimension decreases from 1.986 to 1.633 while the surface roughness increases from 1.110 to 3.427 as the annealing temperature changes from 30 °C to 600 °C. The images are also analyzed by the power spectrum method to find the fractal dimension. The study reveals that the box-counting method gives better results than the power spectrum method.
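
    A minimal box-counting estimator for a 2-D binary image is sketched below in Python; the box sizes and the simple least-squares fit are assumptions for illustration. The paper works on grayscale AFM height maps, for which differential box-counting or power-spectrum variants are used, so this sketch only shows the core count-versus-box-size regression.

      import numpy as np

      def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32, 64)):
          """Estimate the box-counting dimension of a 2-D binary image.
          Counts occupied s x s boxes for several box sizes s and fits
          log N(s) = -D * log s + c by least squares.
          """
          img = np.asarray(binary, dtype=bool)
          counts = []
          for s in sizes:
              h = (img.shape[0] // s) * s
              w = (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing at least one pixel
              counts.append(max(occupied, 1))
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope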

  7. Geometry and scaling laws of excursion and iso-sets of enstrophy and dissipation in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Elsas, José Hugo; Szalay, Alexander S.; Meneveau, Charles

    2018-04-01

    Motivated by interest in the geometry of high intensity events of turbulent flows, we examine the spatial correlation functions of sets where turbulent events are particularly intense. These sets are defined using indicator functions on excursion and iso-value sets. Their geometric scaling properties are analysed by examining possible power-law decay of their radial correlation function. We apply the analysis to enstrophy, dissipation and velocity gradient invariants Q and R and their joint spatial distributions, using data from a direct numerical simulation of isotropic turbulence at Reλ ≈ 430. While no fractal scaling is found in the inertial range using box-counting in the finite Reynolds number flow considered here, power-law scaling in the inertial range is found in the radial correlation functions. Thus, a geometric characterisation in terms of these sets' correlation dimension is possible. Strong dependence on the enstrophy and dissipation threshold is found, consistent with multifractal behaviour. Nevertheless, the lack of scaling of the box-counting analysis precludes direct quantitative comparisons with earlier work based on multifractal formalism. Surprising trends, such as a lower correlation dimension for strong dissipation events compared to strong enstrophy events, are observed and interpreted in terms of spatial coherence of vortices in the flow.

  8. Automatic Detection of Pitching and Throwing Events in Baseball With Inertial Measurement Sensors.

    PubMed

    Murray, Nick B; Black, Georgia M; Whiteley, Rod J; Gahan, Peter; Cole, Michael H; Utting, Andy; Gabbett, Tim J

    2017-04-01

    Throwing loads are known to be closely related to injury risk. However, for logistic reasons, typically only pitchers have their throws counted, and then only during innings. Accordingly, all other throws made are not counted, so estimates of throws made by players may be inaccurately recorded and underreported. A potential solution to this is the use of wearable microtechnology to automatically detect, quantify, and report pitch counts in baseball. This study investigated the accuracy of detection of baseball pitching and throwing in both practice and competition using a commercially available wearable microtechnology unit. Seventeen elite youth baseball players (mean ± SD age 16.5 ± 0.8 y, height 184.1 ± 5.5 cm, mass 78.3 ± 7.7 kg) participated in this study. Participants performed pitching, fielding, and throwing during practice and competition while wearing a microtechnology unit. Sensitivity and specificity of a pitching and throwing algorithm were determined by comparing automatic measures (ie, microtechnology unit) with direct measures (ie, manually recorded pitching counts). The pitching and throwing algorithm was sensitive during both practice (100%) and competition (100%). Specificity was poorer during both practice (79.8%) and competition (74.4%). These findings demonstrate that the microtechnology unit is sensitive to detect pitching and throwing events, but further development of the pitching algorithm is required to accurately and consistently quantify throwing loads using microtechnology.

  9. Validity of Using Tri-Axial Accelerometers to Measure Human Movement – Part II: Step Counts at a Wide Range of Gait Velocities

    PubMed Central

    Fortune, Emma; Lugade, Vipul; Morrow, Melissa; Kaufman, Kenton

    2014-01-01

    A subject-specific step counting method with a high accuracy level at all walking speeds is needed to assess the functional level of impaired patients. The study aim was to validate step counts and cadence calculations from acceleration data by comparison to video data during dynamic activity. Custom-built activity monitors, each containing one tri-axial accelerometer, were placed on the ankles, thigh, and waist of 11 healthy adults. ICC values were greater than 0.98 for video inter-rater reliability of all step counts. The activity monitoring system (AMS) algorithm demonstrated a median (interquartile range; IQR) agreement of 92% (8%) with visual observations during walking/jogging trials at gait velocities ranging from 0.1 m/s to 4.8 m/s, while FitBits (ankle and waist), and a Nike Fuelband (wrist) demonstrated agreements of 92% (36%), 93% (22%), and 33% (35%), respectively. The algorithm results demonstrated high median (IQR) step detection sensitivity (95% (2%)), positive predictive value (PPV) (99% (1%)), and agreement (97% (3%)) during a laboratory-based simulated free-living protocol. The algorithm also showed high median (IQR) sensitivity, PPV, and agreement identifying walking steps (91% (5%), 98% (4%), and 96% (5%)), jogging steps (97% (6%), 100% (1%), and 95% (6%)), and less than 3% mean error in cadence calculations. PMID:24656871

  10. Validity of using tri-axial accelerometers to measure human movement - Part II: Step counts at a wide range of gait velocities.

    PubMed

    Fortune, Emma; Lugade, Vipul; Morrow, Melissa; Kaufman, Kenton

    2014-06-01

    A subject-specific step counting method with a high accuracy level at all walking speeds is needed to assess the functional level of impaired patients. The study aim was to validate step counts and cadence calculations from acceleration data by comparison to video data during dynamic activity. Custom-built activity monitors, each containing one tri-axial accelerometer, were placed on the ankles, thigh, and waist of 11 healthy adults. ICC values were greater than 0.98 for video inter-rater reliability of all step counts. The activity monitoring system (AMS) algorithm demonstrated a median (interquartile range; IQR) agreement of 92% (8%) with visual observations during walking/jogging trials at gait velocities ranging from 0.1 to 4.8m/s, while FitBits (ankle and waist), and a Nike Fuelband (wrist) demonstrated agreements of 92% (36%), 93% (22%), and 33% (35%), respectively. The algorithm results demonstrated high median (IQR) step detection sensitivity (95% (2%)), positive predictive value (PPV) (99% (1%)), and agreement (97% (3%)) during a laboratory-based simulated free-living protocol. The algorithm also showed high median (IQR) sensitivity, PPV, and agreement identifying walking steps (91% (5%), 98% (4%), and 96% (5%)), jogging steps (97% (6%), 100% (1%), and 95% (6%)), and less than 3% mean error in cadence calculations. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Impact of double counting and transfer bias on estimated rates and outcomes of acute myocardial infarction.

    PubMed

    Westfall, J M; McGloin, J

    2001-05-01

    Ischemic heart disease is the leading cause of death in the United States. Recent studies report inconsistent findings on the changes in the incidence of hospitalizations for ischemic heart disease. These reports have relied primarily on hospital discharge data. Preliminary data suggest that a significant percentage of patients suffering acute myocardial infarction (MI) in rural communities are transferred to urban centers for care. Patients transferred to a second hospital may be counted twice for one episode of ischemic heart disease. To describe the impact of double counting and transfer bias on the estimation of incidence rates and outcomes of ischemic heart disease, specifically acute MI, in the United States. Analysis of state hospital discharge data from Kansas, Colorado (State Inpatient Database [SID]), Nebraska, Arizona, New Jersey, Michigan, Pennsylvania, and Illinois (SID) for the years 1995 to 1997. A matching algorithm was developed for hospital discharges to determine patients counted twice for one episode of ischemic heart disease, and this matching algorithm was validated. Patients reported to have suffered ischemic heart disease (ICD9 codes 410-414, 786.5). Number of patients counted twice for one episode of acute MI. It is estimated that double-count rates ranged from 10% to 15% for all states and increased over the 3 years. Moderate-sized rural counties had the highest estimated double-count rates, at 15% to 20%, with a few counties having estimated rates as high as 35% to 50%. Older patients and females were less likely to be double counted (P <0.05). Double counting patients has resulted in a significant overestimation of the incidence rate for hospitalization for acute MI. Correction of this double counting reveals a significantly lower incidence rate and a higher in-hospital mortality rate for acute MI. Transferred patients differ significantly from nontransferred patients, introducing significant bias into MI outcome studies. Double counting and transfer bias should be considered when conducting and interpreting research on ischemic heart disease, particularly in rural regions.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.

    This report describes how the intelligent load control (ILC) algorithm can be implemented to achieve peak demand reduction while minimizing impacts on occupant comfort. The algorithm was designed to minimize the additional sensor and configuration requirements, to enable a scalable and cost-effective implementation for both large and small/medium-sized commercial buildings. The ILC algorithm uses an analytic hierarchy process (AHP) to dynamically prioritize the available curtailable loads based on both quantitative rules (deviation of zone conditions from set point) and qualitative rules (type of zone). Although the ILC algorithm described in this report was highly tailored to work with rooftop units, it can be generalized for application to other building loads such as variable-air-volume (VAV) boxes and lighting systems.
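
    As an illustration of the AHP step only (a minimal sketch; the actual comparison criteria and the curtailment logic are as described in the report, and the numbers below are hypothetical), the priority weights can be taken as the normalized principal eigenvector of a reciprocal pairwise-comparison matrix:

      import numpy as np

      def ahp_priorities(pairwise):
          """Priority weights from a reciprocal pairwise-comparison matrix (principal eigenvector)."""
          a = np.asarray(pairwise, dtype=float)
          eigvals, eigvecs = np.linalg.eig(a)
          principal = np.argmax(eigvals.real)           # Perron eigenvalue of a positive matrix
          w = np.abs(eigvecs[:, principal].real)
          return w / w.sum()

      # Toy example: three curtailable loads compared by comfort impact (hypothetical numbers)
      comparisons = np.array([[1.0,   3.0, 5.0],
                              [1/3.0, 1.0, 2.0],
                              [1/5.0, 1/2.0, 1.0]])
      weights = ahp_priorities(comparisons)   # ranking used to decide curtailment order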

  13. Efficient volume computation for three-dimensional hexahedral cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dukowicz, J.K.

    1988-02-01

    Currently, algorithms for computing the volume of hexahedral cells with "ruled" surfaces require a minimum of 122 FLOPs (floating point operations) per cell. A new algorithm is described which reduces the operation count to 57 FLOPs per cell. © 1988 Academic Press, Inc.

  14. Research of intelligent bus coin box

    NASA Astrophysics Data System (ADS)

    Xin, Shihao

    2017-03-01

    In the context of energy saving, emission reduction and low-carbon travel, buses are the choice of most people. We designed this sorting machine in response to the present situation, in which bus companies receive large amounts of mixed small change and employ considerable manpower to sort it, lowering efficiency. Its function is to separate coins from notes, sort and store the coins, and show the value of the received coins on a display, so that the whole inventory and classification process is mechanized, reducing the cost of counting and improving the efficiency of recycling small change. The machine uses simple mechanical principles for classification and is designed to be efficient, accurate and practical, genuinely meeting the actual demand of city bus companies, commerce, banking and other industries for handling small notes and coins. The size and specification of the machine are designed according to the size of the bus coin box, so it is suitable for almost all buses; it can be installed directly in the coin box, sorting and counting in real time and thus addressing the difficulty of counting change.

  15. House microbiotas as sources of lactic acid bacteria and yeasts in traditional Italian sourdoughs.

    PubMed

    Minervini, Fabio; Lattanzi, Anna; De Angelis, Maria; Celano, Giuseppe; Gobbetti, Marco

    2015-12-01

    This study aimed at understanding the extent of contamination by lactic acid bacteria (LAB) and yeasts from the house microbiotas during sourdough back-slopping. Besides the sourdoughs, the walls, air, storage boxes, dough mixers and flour of four bakeries were analyzed. Based on plate counts, LAB and yeasts dominated the house microbiota. Based on high-throughput sequencing of the 16S rRNA genes, flour harbored the highest number of Firmicutes, but only a few of them adapted to the storage box, dough mixer and sourdough. Lactobacillus sanfranciscensis showed the highest abundance in dough mixer and sourdoughs. Lactobacillus plantarum persisted only in the storage box, dough mixer and sourdough of two bakeries. Weissella cibaria also showed higher adaptability in sourdough than in bakery equipment, suggesting that flour is the main origin of this species. Based on 18S rRNA data, Saccharomyces cerevisiae was the dominant yeast in house and sourdough microbiotas, except for one bakery, which was dominated by Kazachstania exigua. The results of this study suggest that the dominant species of sourdough LAB and yeasts also dominated the house microbiota. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy.

    PubMed

    Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine

    2015-10-27

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.(1) This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.

  17. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy

    PubMed Central

    Daskalakis, Constantine

    2015-01-01

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.(1) This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique. PMID:26554744

  18. Interfacing External Quantum Devices to a Universal Quantum Computer

    PubMed Central

    Lagana, Antonio A.; Lohe, Max A.; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. PMID:22216276
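
    As a classical illustration of the simplest oracle-based case, the Deutsch algorithm, the Python sketch below simulates one query to an opaque unitary playing the role of the external black-box oracle device; it is a numerical simulation for intuition, not the authors' universal-quantum-computer programs.

      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate

      def oracle_unitary(f):
          """Black-box U_f |x, y> = |x, y XOR f(x)> for f: {0,1} -> {0,1}."""
          u = np.zeros((4, 4))
          for x in (0, 1):
              for y in (0, 1):
                  u[2 * x + (y ^ f(x)), 2 * x + y] = 1
          return u

      def deutsch(f):
          """Decide whether f is constant or balanced using a single oracle query."""
          state = np.kron(np.array([1, 0]), np.array([0, 1]))    # |0>|1>
          state = np.kron(H, H) @ state
          state = oracle_unitary(f) @ state
          state = np.kron(H, np.eye(2)) @ state
          p0 = np.sum(np.abs(state[:2]) ** 2)     # probability that the first qubit reads 0
          return "constant" if p0 > 0.5 else "balanced"

      assert deutsch(lambda x: 0) == "constant"
      assert deutsch(lambda x: x) == "balanced"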

  19. NARMAX model identification of a palm oil biodiesel engine using multi-objective optimization differential evolution

    NASA Astrophysics Data System (ADS)

    Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin

    2017-09-01

    This paper presents black-box modelling of a palm oil biodiesel (POB) engine using the multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the optimization: minimizing the number of terms of the model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied to validate the candidate models obtained from the MOODE algorithm and to select an optimal model.

  20. Interfacing external quantum devices to a universal quantum computer.

    PubMed

    Lagana, Antonio A; Lohe, Max A; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. © 2011 Lagana et al.

  1. An Adaptive Clustering Approach Based on Minimum Travel Route Planning for Wireless Sensor Networks with a Mobile Sink

    PubMed Central

    Tang, Jiqiang; Yang, Wu; Zhu, Lingyun; Wang, Dong; Feng, Xin

    2017-01-01

    In recent years, Wireless Sensor Networks with a Mobile Sink (WSN-MS) have been an active research topic due to the widespread use of mobile devices. However, how to get the balance between data delivery latency and energy consumption becomes a key issue of WSN-MS. In this paper, we study the clustering approach by jointly considering the Route planning for mobile sink and Clustering Problem (RCP) for static sensor nodes. We solve the RCP problem by using the minimum travel route clustering approach, which applies the minimum travel route of the mobile sink to guide the clustering process. We formulate the RCP problem as an Integer Non-Linear Programming (INLP) problem to shorten the travel route of the mobile sink under three constraints: the communication hops constraint, the travel route constraint and the loop avoidance constraint. We then propose an Imprecise Induction Algorithm (IIA) based on the property that the solution with a small hop count is more feasible than that with a large hop count. The IIA algorithm includes three processes: initializing travel route planning with a Traveling Salesman Problem (TSP) algorithm, transforming the cluster head to a cluster member and transforming the cluster member to a cluster head. Extensive experimental results show that the IIA algorithm could automatically adjust cluster heads according to the maximum hops parameter and plan a shorter travel route for the mobile sink. Compared with the Shortest Path Tree-based Data-Gathering Algorithm (SPT-DGA), the IIA algorithm has the characteristics of shorter route length, smaller cluster head count and faster convergence rate. PMID:28445434

  2. [A research in speech endpoint detection based on boxes-coupling generalization dimension].

    PubMed

    Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle

    2008-06-01

    In this paper, a new method for calculating the generalized dimension, based on a boxes-coupling principle, is proposed to overcome edge effects and to improve speech endpoint detection based on the original generalized-dimension calculation. The new method was applied to speech endpoint detection as follows. First, the length of the overlapping border was determined, and by calculating the generalized dimension while covering the speech signal with overlapped boxes, three-dimensional feature vectors comprising the box dimension, the information dimension and the correlation dimension were obtained. Second, in light of the relation between feature distance and similarity, feature extraction was conducted using a common distance measure. Finally, a bi-threshold method was used to classify the speech signals. The experimental results indicated that, compared with the original generalized dimension (OGD) and the spectral entropy (SE) algorithm, the proposed method is more robust and effective for detecting speech signals containing different kinds of noise at different signal-to-noise ratios (SNR), especially at low SNR.

  3. On the orbits that generate the X-shape in the Milky Way bulge

    NASA Astrophysics Data System (ADS)

    Abbott, Caleb G.; Valluri, Monica; Shen, Juntai; Debattista, Victor P.

    2017-09-01

    The Milky Way (MW) bulge shows a boxy/peanut or X-shaped bulge (hereafter BP/X) when viewed in infrared or microwave bands. We examine orbits in an N-body model of a barred disc galaxy that is scaled to match the kinematics of the MW bulge. We generate maps of projected stellar surface density, unsharp masked images, 3D excess-mass distributions (showing mass outside ellipsoids), line-of-sight number count distributions, and 2D line-of-sight kinematics for the simulation as well as co-added orbit families, in order to identify the orbits primarily responsible for the BP/X shape. We estimate that between 19 and 23 per cent of the mass of the bar in this model is associated with the BP/X shape and that the majority of bar orbits contribute to this shape that is clearly seen in projected surface density maps and 3D excess mass for non-resonant box orbits, 'banana' orbits, 'fish/pretzel' orbits and 'brezel' orbits. Although only the latter two families (comprising 7.5 per cent of the total mass) show a distinct X-shape in unsharp masked images, we find that nearly all bar orbit families contribute some mass to the 3D BP/X-shape. All co-added orbit families show a bifurcation in stellar number count distribution with distance that resembles the bifurcation observed in red clump stars in the MW. However, only the box orbit family shows an increasing separation of peaks with increasing galactic latitude |b|, similar to that observed. Our analysis suggests that no single orbit family fully explains all the observed features associated with the MW's BP/X-shaped bulge, but collectively the non-resonant boxes and various resonant boxlet orbits contribute at different distances from the centre to produce this feature. We propose that since box orbits (which are the dominant population in bars) have three incommensurable orbital fundamental frequencies, their 3D shapes are highly flexible and, like Lissajous figures, this family of orbits is most easily able to adapt to evolution in the shape of the underlying potential.

  4. Auto-Relevancy Baseline: A Hybrid System Without Human Feedback

    DTIC Science & Technology

    2010-11-01

    classical Bayes algorithm upon the pseudo-hybridization of SemanticA and Latent Semantic IndexingBC systems should smooth out historically high yet...black box emulated a machine learning topic expert. Similar to some Web methods, the initial topics within the legal document were expanded upon

  5. Acoustic change detection algorithm using an FM radio

    NASA Astrophysics Data System (ADS)

    Goldman, Geoffrey H.; Wolfe, Owen

    2012-06-01

    The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing position or a large box being moved, using acoustic sources of opportunity. The algorithm is based on cross-correlating the acoustic signals measured by two microphones. The performance of the algorithm was demonstrated using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
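
    The core idea, cross-correlating the signals from two microphones and flagging changes in the correlation profile between acquisitions, can be sketched as follows. This is a generic illustration rather than the Army algorithm; the threshold, function names, and synthetic signals are assumptions.

      import numpy as np

      def correlation_profile(mic1, mic2):
          """Normalized cross-correlation of two microphone signals; its peaks encode
          the dominant acoustic propagation paths in the monitored space."""
          m1 = (mic1 - mic1.mean()) / (mic1.std() + 1e-12)
          m2 = (mic2 - mic2.mean()) / (mic2.std() + 1e-12)
          return np.correlate(m1, m2, mode="full") / len(m1)

      def change_detected(profile_ref, profile_new, threshold=0.3):
          """Flag a physical change (e.g. a door opened, a box moved) when the
          correlation profiles measured at two times differ by more than a threshold."""
          diff = np.linalg.norm(profile_new - profile_ref) / np.linalg.norm(profile_ref)
          return diff > threshold

      # Usage sketch with synthetic stand-ins for FM-radio-excited recordings
      t = np.arange(0, 1.0, 1.0 / 8000)
      mic_a, mic_b = np.random.randn(len(t)), np.random.randn(len(t))
      baseline = correlation_profile(mic_a, mic_b)
      later = correlation_profile(mic_a, np.roll(mic_b, 40))   # simulated change in the room
      print(change_detected(baseline, later))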

  6. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
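
    A stripped-down illustration of the first-stage idea (predict each sample from a neighbor, tabulate residual counts, and measure the code length an ideal entropy coder would achieve) is sketched below. It is a toy stand-in, not the NASA compressor: the second-stage calibration-aware weighting and the adaptive arithmetic coder are omitted, and the synthetic band is an assumption.

      import numpy as np
      from collections import Counter

      def previous_sample_predictor(samples):
          """First-stage prediction: each sample is predicted by its predecessor
          (a minimal stand-in for the spatial/spectral linear predictor)."""
          preds = np.empty_like(samples)
          preds[0] = 0
          preds[1:] = samples[:-1]
          return samples - preds                      # prediction residuals

      def ideal_code_length_bits(residuals):
          """Tabulate residual counts and compute the ideal (entropy) code length,
          a lower bound on what an adaptive arithmetic coder could achieve."""
          counts = Counter(residuals.tolist())
          total = sum(counts.values())
          probs = np.array([c / total for c in counts.values()])
          return -np.sum(probs * np.log2(probs)) * total

      # Usage sketch on a synthetic, smoothly varying 12-bit band
      band = (2048 + 200 * np.sin(np.linspace(0, 20, 10000))).astype(np.int32)
      res = previous_sample_predictor(band)
      print(f"{ideal_code_length_bits(res) / band.size:.2f} bits/sample")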

  7. Assessment of disintegrant efficacy with fractal dimensions from real-time MRI.

    PubMed

    Quodbach, Julian; Moussavi, Amir; Tammer, Roland; Frahm, Jens; Kleinebudde, Peter

    2014-11-20

    An efficient disintegrant is capable of breaking up a tablet into the smallest possible particles in the shortest time. Until now, comparative data on the efficacy of different disintegrants have been based on dissolution studies or the disintegration time. Extending these approaches, this study introduces a method that defines the evolution of fractal dimensions of tablets as a surrogate parameter for the available surface area. Fractal dimensions are a measure of the tortuosity of a line, in this case the upper surface of a disintegrating tablet. High-resolution real-time MRI was used to record videos of disintegrating tablets. The acquired video images were processed to depict the upper surface of the tablets, and a box-counting algorithm was used to estimate the fractal dimensions. The influence of six different disintegrants, of different relative tablet densities, and of increasing disintegrant concentration was investigated to evaluate the performance of the novel method. Changes in relative density hardly affect the progression of fractal dimensions, whereas an increase in disintegrant concentration causes higher fractal dimensions during disintegration, which are also reached more quickly. Different disintegrants display only minor differences in the maximal fractal dimension, yet the kinetics with which the maximum is reached allow a differentiation and classification of disintegrants. Copyright © 2014 Elsevier B.V. All rights reserved.
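
    The box-counting estimate used here follows the standard recipe: cover the binarized tablet surface with grids of shrinking box size, count occupied boxes, and fit the slope of log N(s) against log(1/s). A minimal sketch, assuming a binary NumPy image of the upper tablet surface (the synthetic surface below is only for illustration):

      import numpy as np

      def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
          """Estimate the box-counting fractal dimension of a binary image by
          counting, for each box side s, the boxes containing foreground pixels."""
          counts = []
          for s in sizes:
              h = (binary_img.shape[0] // s) * s               # trim to multiples of s
              w = (binary_img.shape[1] // s) * s
              blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(blocks.any(axis=(1, 3)).sum())     # occupied boxes at this scale
          # The dimension is the slope of log N(s) versus log(1/s)
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      # Usage sketch: a rough synthetic "surface" line in a 256 x 256 frame
      img = np.zeros((256, 256), dtype=bool)
      cols = np.arange(256)
      rows = (128 + 40 * np.sin(cols / 9.0) + 8 * np.random.randn(256)).astype(int).clip(0, 255)
      img[rows, cols] = True
      print(box_counting_dimension(img))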

  8. A spatiotemporal characterization method for the dynamic cytoskeleton.

    PubMed

    Alhussein, Ghada; Shanti, Aya; Farhat, Ilyas A H; Timraz, Sara B H; Alwahab, Noaf S A; Pearson, Yanthe E; Martin, Matthew N; Christoforou, Nicolas; Teo, Jeremy C M

    2016-05-01

    The significant gap between quantitative and qualitative understanding of cytoskeletal function is a pressing problem; microscopy and labeling techniques have improved qualitative investigations of localized cytoskeleton behavior, whereas quantitative analyses of whole cell cytoskeleton networks remain challenging. Here we present a method that accurately quantifies cytoskeleton dynamics. Our approach digitally subdivides cytoskeleton images using interrogation windows, within which box-counting is used to infer a fractal dimension (Df) to characterize spatial arrangement, and gray value intensity (GVI) to determine actin density. A partitioning algorithm further obtains cytoskeleton characteristics from the perinuclear, cytosolic, and periphery cellular regions. We validated our measurement approach on Cytochalasin-treated cells using transgenically modified dermal fibroblast cells expressing fluorescent actin cytoskeletons. This method differentiates between normal and chemically disrupted actin networks, and quantifies rates of cytoskeletal degradation. Furthermore, GVI distributions were found to be inversely proportional to Df, having several biophysical implications for cytoskeleton formation/degradation. We additionally demonstrated detection sensitivity of differences in Df and GVI for cells seeded on substrates with varying degrees of stiffness, and coated with different attachment proteins. This general approach can be further implemented to gain insights on dynamic growth, disruption, and structure of the cytoskeleton (and other complex biological morphology) due to biological, chemical, or physical stimuli. © 2016 Wiley Periodicals, Inc.

  9. Investigation of multiple mortality events in eastern box turtles (Terrapene carolina carolina).

    PubMed

    Adamovicz, Laura; Allender, Matthew C; Archer, Grace; Rzadkowska, Marta; Boers, Kayla; Phillips, Chris; Driskell, Elizabeth; Kinsel, Michael J; Chu, Caroline

    2018-01-01

    Wildlife mortality investigations are important for conservation, food safety, and public health; but they are infrequently reported for cryptic chelonian species. Eastern box turtles (Terrapene carolina carolina) are declining due to anthropogenic factors and disease, and while mortality investigations have been reported for captive and translocated individuals, few descriptions exist for free-living populations. We report the results of four natural mortality event investigations conducted during routine health surveillance of three Illinois box turtle populations in 2011, 2013, 2014, and 2015. In April 2011, over 50 box turtles were found dead and a polymicrobial necrotizing bacterial infection was diagnosed in five survivors using histopathology and aerobic/anaerobic culture. This represents the first reported occurrence of necrotizing bacterial infection in box turtles. In August 2013, paired histopathology and qPCR ranavirus detection in nine turtles was significantly associated with occupation of moist microhabitats, identification of oral plaques and nasal discharge on physical exam, and increases in the heterophil count and heterophil to lymphocyte ratio (p < 0.05). In July 2014 and 2015, ranavirus outbreaks reoccurred within a 0.2km radius of highly-disturbed habitat containing ephemeral ponds used by amphibians for breeding. qPCR ranavirus detection in five individuals each year was significantly associated with use of moist microhabitats (p < 0.05). Detection of single and co-pathogens (Terrapene herpesvirus 1, adenovirus, and Mycoplasma sp.) was common before, during, and after mortality events, but improved sample size would be necessary to determine the impacts of these pathogens on the occurrence and outcome of mortality events. This study provides novel information about the causes and predictors of natural box turtle mortality events. Continued investigation of health, disease, and death in free-living box turtles will improve baseline knowledge of morbidity and mortality, identify threats to survival, and promote the formation of effective conservation strategies.

  10. Investigation of multiple mortality events in eastern box turtles (Terrapene carolina carolina)

    PubMed Central

    Allender, Matthew C.; Archer, Grace; Rzadkowska, Marta; Boers, Kayla; Phillips, Chris; Driskell, Elizabeth; Kinsel, Michael J.; Chu, Caroline

    2018-01-01

    Wildlife mortality investigations are important for conservation, food safety, and public health; but they are infrequently reported for cryptic chelonian species. Eastern box turtles (Terrapene carolina carolina) are declining due to anthropogenic factors and disease, and while mortality investigations have been reported for captive and translocated individuals, few descriptions exist for free-living populations. We report the results of four natural mortality event investigations conducted during routine health surveillance of three Illinois box turtle populations in 2011, 2013, 2014, and 2015. In April 2011, over 50 box turtles were found dead and a polymicrobial necrotizing bacterial infection was diagnosed in five survivors using histopathology and aerobic/anaerobic culture. This represents the first reported occurrence of necrotizing bacterial infection in box turtles. In August 2013, paired histopathology and qPCR ranavirus detection in nine turtles was significantly associated with occupation of moist microhabitats, identification of oral plaques and nasal discharge on physical exam, and increases in the heterophil count and heterophil to lymphocyte ratio (p < 0.05). In July 2014 and 2015, ranavirus outbreaks reoccurred within a 0.2km radius of highly-disturbed habitat containing ephemeral ponds used by amphibians for breeding. qPCR ranavirus detection in five individuals each year was significantly associated with use of moist microhabitats (p < 0.05). Detection of single and co-pathogens (Terrapene herpesvirus 1, adenovirus, and Mycoplasma sp.) was common before, during, and after mortality events, but improved sample size would be necessary to determine the impacts of these pathogens on the occurrence and outcome of mortality events. This study provides novel information about the causes and predictors of natural box turtle mortality events. Continued investigation of health, disease, and death in free-living box turtles will improve baseline knowledge of morbidity and mortality, identify threats to survival, and promote the formation of effective conservation strategies. PMID:29621347

  11. Patient-Centered Mobile Health Data Management Solution for the German Health Care System (The DataBox Project).

    PubMed

    Brinker, Titus Josef; Rudolph, Stefanie; Richter, Daniela; von Kalle, Christof

    2018-05-11

    This article describes the DataBox project which offers a perspective of a new health data management solution in Germany. DataBox was initially conceptualized as a repository of individual lung cancer patient data (structured and unstructured). The patient is the owner of the data and is able to share his or her data with different stakeholders. Data is transferred, displayed, and stored online, but not archived. In the long run, the project aims at replacing the conventional method of paper- and storage-device-based handling of data for all patients in Germany, leading to better organization and availability of data which reduces duplicate diagnostic procedures, treatment errors, and enables the training as well as usage of artificial intelligence algorithms on large datasets. ©Titus Josef Brinker, Stefanie Rudolph, Daniela Richter, Christof von Kalle. Originally published in JMIR Cancer (http://cancer.jmir.org), 11.05.2018.

  12. Personalized Physical Activity Coaching: A Machine Learning Approach

    PubMed Central

    Dijkhuis, Talko B.; van Ittersum, Miriam W.; Velthuijsen, Hugo

    2018-01-01

    Living a sedentary lifestyle is one of the major causes of numerous health problems. To encourage employees to lead a less sedentary life, the Hanze University started a health promotion program. One of the interventions in the program was the use of an activity tracker to record participants' daily step count. The daily step count served as input for a fortnightly coaching session. In this paper, we investigate the possibility of automating part of the coaching procedure on physical activity by providing personalized feedback throughout the day on a participant's progress in achieving a personal step goal. The gathered step count data was used to train eight different machine learning algorithms to make hourly estimations of the probability of achieving a personalized daily step threshold. In 80% of the individual cases, the Random Forest algorithm was the best performing algorithm (mean accuracy = 0.93, range = 0.88–0.99, and mean F1-score = 0.90, range = 0.87–0.94). To demonstrate the practical usefulness of these models, we developed a proof-of-concept Web application that provides personalized feedback about whether a participant is expected to reach his or her daily threshold. We argue that the use of machine learning could become an invaluable asset in the process of automated personalized coaching. The individualized algorithms allow for predicting physical activity during the day and provide the possibility to intervene in time. PMID:29463052
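
    The modelling step can be illustrated with a small scikit-learn sketch: from the hour of day and the steps accumulated so far, predict whether a personal daily threshold will be reached. The features, the 8000-step goal, and the synthetic data are illustrative assumptions, not the study's dataset or code.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score, f1_score

      rng = np.random.default_rng(0)

      # Synthetic data: one row per (day, hour) with the hour of day and the steps
      # accumulated so far; label = did the day end above an 8000-step threshold?
      n_days, goal = 400, 8000
      hours = rng.integers(8, 22, size=n_days)
      rate = rng.normal(600, 200, size=n_days).clip(100, None)       # steps per hour
      steps_so_far = (rate * (hours - 7)).astype(int)
      total_steps = (rate * 14 + rng.normal(0, 800, size=n_days)).astype(int)
      X = np.column_stack([hours, steps_so_far])
      y = (total_steps >= goal).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      pred = clf.predict(X_te)
      print("accuracy", accuracy_score(y_te, pred), "F1", f1_score(y_te, pred))

      # Hourly probability of reaching the goal for a participant at 14:00 with 3500 steps
      print(clf.predict_proba([[14, 3500]])[0, 1])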

  13. Performance Analysis of Continuous Black-Box Optimization Algorithms via Footprints in Instance Space.

    PubMed

    Muñoz, Mario A; Smith-Miles, Kate A

    2017-01-01

    This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.

  14. CSciBox: An Intelligent Assistant for Dating Ice and Sediment Cores

    NASA Astrophysics Data System (ADS)

    Finlinson, K.; Bradley, E.; White, J. W. C.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; Jones, T. R.; Lindsay, C. M.; Israelsen, B.

    2015-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives. It incorporates a number of data-processing and visualization facilities, ranging from simple interpolation to reservoir-age correction and 14C calibration via the Calib algorithm, as well as a number of firn and ice-flow models. It employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form, and offers the user access to those data and computational elements via a modern graphical user interface (GUI). In the case of truly large data or computations, CSciBox is parallelizable across modern multi-core processors, or clusters, or even the cloud. The code is open source and freely available on github, as are one-click installers for various versions of Windows and Mac OSX. The system's architecture allows users to incorporate their own software in the form of computational components that can be built smoothly into CSciBox workflows, taking advantage of CSciBox's GUI, data importing facilities, and plotting capabilities. To date, BACON and StratiCounter have been integrated into CSciBox as embedded components. The user can manipulate and compose all of these tools and facilities as she sees fit. Alternatively, she can employ CSciBox's automated reasoning engine, which uses artificial intelligence techniques to explore the gamut of age models and cross-dating scenarios automatically. The automated reasoning engine captures the knowledge of expert geoscientists, and can output a description of its reasoning.

  15. Real-time bacterial microcolony counting using on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Jung, Jae Hee; Lee, Jung Eun

    2016-02-01

    Observing microbial colonies is the standard method for determining the microbe titer and investigating the behaviors of microbes. Here, we report an automated, real-time bacterial microcolony-counting system implemented on a wide field-of-view (FOV), on-chip microscopy platform, termed ePetri. Using sub-pixel sweeping microscopy (SPSM) with a super-resolution algorithm, this system offers the ability to dynamically track individual bacterial microcolonies over a wide FOV of 5.7 mm × 4.3 mm without requiring a moving stage or lens. As a demonstration, we obtained high-resolution time-series images of S. epidermidis at 20-min intervals. We implemented an image-processing algorithm to analyze the spatiotemporal distribution of microcolonies, the development of which could be observed from a single bacterial cell. Test bacterial colonies with a minimum diameter of 20 μm could be enumerated within 6 h. We showed that our approach not only provides results that are comparable to conventional colony-counting assays but also can be used to monitor the dynamics of colony formation and growth. This microcolony-counting system using on-chip microscopy represents a new platform that substantially reduces the detection time for bacterial colony counting. It uses chip-scale image acquisition and is a simple and compact solution for the automation of colony-counting assays and microbe behavior analysis with applications in antibacterial drug discovery.
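
    The enumeration step, segmenting a reconstructed frame and counting connected objects above a minimum size, can be sketched with SciPy's labelling routines. This is a generic stand-in for the paper's image-processing pipeline; the global threshold and the size cut-off are assumptions.

      import numpy as np
      from scipy import ndimage

      def count_microcolonies(image, min_area_px=30):
          """Count bright, blob-like microcolonies in a reconstructed frame:
          threshold, label connected components, keep those above a size cut-off."""
          mask = image > image.mean() + 2 * image.std()        # simple global threshold
          labeled, n = ndimage.label(mask)
          areas = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
          return int(np.sum(areas >= min_area_px))

      # Usage sketch on a synthetic frame with three Gaussian "colonies"
      frame = np.random.rand(512, 512) * 0.1
      yy, xx = np.mgrid[0:512, 0:512]
      for cy, cx in [(100, 120), (300, 340), (420, 80)]:
          frame += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 15.0 ** 2))
      print(count_microcolonies(frame))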

  16. Real-time bacterial microcolony counting using on-chip microscopy

    PubMed Central

    Jung, Jae Hee; Lee, Jung Eun

    2016-01-01

    Observing microbial colonies is the standard method for determining the microbe titer and investigating the behaviors of microbes. Here, we report an automated, real-time bacterial microcolony-counting system implemented on a wide field-of-view (FOV), on-chip microscopy platform, termed ePetri. Using sub-pixel sweeping microscopy (SPSM) with a super-resolution algorithm, this system offers the ability to dynamically track individual bacterial microcolonies over a wide FOV of 5.7 mm × 4.3 mm without requiring a moving stage or lens. As a demonstration, we obtained high-resolution time-series images of S. epidermidis at 20-min intervals. We implemented an image-processing algorithm to analyze the spatiotemporal distribution of microcolonies, the development of which could be observed from a single bacterial cell. Test bacterial colonies with a minimum diameter of 20 μm could be enumerated within 6 h. We showed that our approach not only provides results that are comparable to conventional colony-counting assays but also can be used to monitor the dynamics of colony formation and growth. This microcolony-counting system using on-chip microscopy represents a new platform that substantially reduces the detection time for bacterial colony counting. It uses chip-scale image acquisition and is a simple and compact solution for the automation of colony-counting assays and microbe behavior analysis with applications in antibacterial drug discovery. PMID:26902822

  17. Discriminative and informative features for biomolecular text mining with ensemble feature selection.

    PubMed

    Van Landeghem, Sofie; Abeel, Thomas; Saeys, Yvan; Van de Peer, Yves

    2010-09-15

    In the field of biomolecular text mining, black box behavior of machine learning systems currently limits understanding of the true nature of the predictions. However, feature selection (FS) is capable of identifying the most relevant features in any supervised learning setting, providing insight into the specific properties of the classification algorithm. This allows us to build more accurate classifiers while at the same time bridging the gap between the black box behavior and the end-user who has to interpret the results. We show that our FS methodology successfully discards a large fraction of machine-generated features, improving classification performance of state-of-the-art text mining algorithms. Furthermore, we illustrate how FS can be applied to gain understanding in the predictions of a framework for biomolecular event extraction from text. We include numerous examples of highly discriminative features that model either biological reality or common linguistic constructs. Finally, we discuss a number of insights from our FS analyses that will provide the opportunity to considerably improve upon current text mining tools. The FS algorithms and classifiers are available in Java-ML (http://java-ml.sf.net). The datasets are publicly available from the BioNLP'09 Shared Task web site (http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/SharedTask/).
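
    The ensemble feature-selection idea, ranking features on many resampled versions of the training data and keeping the consistently top-ranked ones, can be sketched generically with scikit-learn. This is not the Java-ML implementation used in the paper; the ranking criterion and data are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.utils import resample

      X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)

      # Ensemble feature selection: aggregate |coefficient| ranks over bootstrap resamples
      n_rounds, rank_sum = 25, np.zeros(X.shape[1])
      for i in range(n_rounds):
          Xb, yb = resample(X, y, random_state=i)
          model = LogisticRegression(max_iter=1000).fit(Xb, yb)
          ranks = np.argsort(np.argsort(-np.abs(model.coef_[0])))   # rank 0 = most important
          rank_sum += ranks

      selected = np.argsort(rank_sum)[:10]        # keep the 10 most consistently ranked features
      print("selected feature indices:", selected)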

  18. Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures

    NASA Astrophysics Data System (ADS)

    Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain

    2018-02-01

    Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.

  19. Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition

    NASA Astrophysics Data System (ADS)

    Khayat, Omid; Afarideh, Hossein

    2013-04-01

    Track counting algorithms, as one of the fundamental techniques of nuclear science, have received increasing emphasis in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the task of imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. Features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. Features and the classifier are finally validated via 100 alpha track images and 40 training samples. It is shown that the principal textual and shape-based features concomitantly yield a high rate of track detection compared with single-feature-based methods.

  20. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built in order to guarantee global convergence conditions and suitable for the parallel and multi-start implementation presented here. The third one is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on several case studies of central Italy basins is provided.
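
    The simplest of the three approaches, Nelder-Mead minimization of a distance between model output and observations, looks roughly like this when the hydrological model is treated as a black box. The toy model, forcing, and observations below are placeholders, not MOBIDIC or real discharge data.

      import numpy as np
      from scipy.optimize import minimize

      observed = np.array([1.2, 2.9, 4.1, 4.8, 6.2])     # stand-in discharge observations
      forcing = np.arange(1.0, 6.0)                      # stand-in forcing series

      def black_box_model(params, forcing):
          """Placeholder for a distributed model run returning a simulated series."""
          a, b = params
          return a * forcing + b

      def cost(params):
          """Root-mean-square distance between simulation and observations."""
          return np.sqrt(np.mean((black_box_model(params, forcing) - observed) ** 2))

      result = minimize(cost, x0=[0.5, 0.0], method="Nelder-Mead")
      print(result.x, result.fun)        # calibrated parameters and final RMSE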

  1. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems it happens that the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.

  2. Optical granulometric analysis of sedimentary deposits by color segmentation-based software: OPTGRAN-CS

    NASA Astrophysics Data System (ADS)

    Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.

    2015-12-01

    The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (from blocks down to sand grain size) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, owing to their potential advantages with respect to classical ones: speed and the final detailed content of information (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method of counting intersections between clasts and linear transects in the images. We test the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by the same Rosiwal method applied by experts. The new algorithm has the same accuracy as a classical manual counting process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for the analysis of clast deposits after recording field outcrop images can be increased significantly.
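
    The Rosiwal step, walking linear transects across the segmented image and recording the intercept length of each clast crossed, can be sketched as follows, assuming a labelled image in which 0 is matrix and positive integers are clast identifiers; the transect spacing and synthetic clasts are illustrative assumptions.

      import numpy as np

      def rosiwal_intercepts(labeled_img, row_step=20):
          """Collect clast intercept lengths (in pixels) along horizontal transects
          of a labelled image (0 = matrix, >0 = clast identifier)."""
          intercepts = []
          for r in range(0, labeled_img.shape[0], row_step):
              row = labeled_img[r]
              change = np.flatnonzero(np.diff(row)) + 1          # run boundaries along the transect
              starts = np.concatenate(([0], change))
              ends = np.concatenate((change, [len(row)]))
              for s, e in zip(starts, ends):
                  if row[s] > 0:                                 # skip matrix segments
                      intercepts.append(e - s)
          return np.array(intercepts)

      # Usage sketch: two synthetic "clasts" in a 200 x 200 labelled image
      img = np.zeros((200, 200), dtype=int)
      img[40:80, 30:90] = 1
      img[120:170, 100:180] = 2
      lengths = rosiwal_intercepts(img)
      print(len(lengths), lengths.min(), lengths.max())          # inputs to a grain-size distribution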

  3. Optimal Scaling of Digital Transcriptomes

    PubMed Central

    Glusman, Gustavo; Caballero, Juan; Robinson, Max; Kutlu, Burak; Hood, Leroy

    2013-01-01

    Deep sequencing of transcriptomes has become an indispensable tool for biology, enabling expression levels for thousands of genes to be compared across multiple samples. Since transcript counts scale with sequencing depth, counts from different samples must be normalized to a common scale prior to comparison. We analyzed fifteen existing and novel algorithms for normalizing transcript counts, and evaluated the effectiveness of the resulting normalizations. For this purpose we defined two novel and mutually independent metrics: (1) the number of “uniform” genes (genes whose normalized expression levels have a sufficiently low coefficient of variation), and (2) low Spearman correlation between normalized expression profiles of gene pairs. We also define four novel algorithms, one of which explicitly maximizes the number of uniform genes, and compared the performance of all fifteen algorithms. The two most commonly used methods (scaling to a fixed total value, or equalizing the expression of certain ‘housekeeping’ genes) yielded particularly poor results, surpassed even by normalization based on randomly selected gene sets. Conversely, seven of the algorithms approached what appears to be optimal normalization. Three of these algorithms rely on the identification of “ubiquitous” genes: genes expressed in all the samples studied, but never at very high or very low levels. We demonstrate that these include a “core” of genes expressed in many tissues in a mutually consistent pattern, which is suitable for use as an internal normalization guide. The new methods yield robustly normalized expression values, which is a prerequisite for the identification of differentially expressed and tissue-specific genes as potential biomarkers. PMID:24223126
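
    Two of the ingredients discussed, scaling counts to a common total and scoring a normalization by how many genes end up with a low coefficient of variation, can be sketched in a few lines. This is a generic illustration rather than the authors' code, and the CV cut-off and synthetic counts are assumptions.

      import numpy as np

      def scale_to_fixed_total(counts, target=1e6):
          """Total-count normalization: scale each sample (column) to `target` reads."""
          return counts / counts.sum(axis=0, keepdims=True) * target

      def n_uniform_genes(norm_counts, cv_cutoff=0.25):
          """Count 'uniform' genes: genes whose normalized expression has a
          coefficient of variation below the cut-off across samples."""
          mean = norm_counts.mean(axis=1)
          std = norm_counts.std(axis=1)
          cv = np.divide(std, mean, out=np.full_like(std, np.inf), where=mean > 0)
          return int(np.sum(cv < cv_cutoff))

      # Usage sketch: 2000 genes x 6 samples of synthetic counts
      rng = np.random.default_rng(1)
      raw = rng.poisson(lam=rng.gamma(2.0, 50.0, size=(2000, 1)), size=(2000, 6))
      normalized = scale_to_fixed_total(raw)
      print(n_uniform_genes(normalized))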

  4. The addition of a mobile ultra-clean exponential laminar airflow screen to conventional operating room ventilation reduces bacterial contamination to operating box levels.

    PubMed

    Friberg, S; Ardnor, B; Lundholm, R; Friberg, B

    2003-10-01

    A mobile screen producing ultra-clean exponential laminar airflow (LAF) was investigated as an addition to conventional turbulent/mixing operating room (OR) ventilation (16 air changes/h). The evaluation was performed in a small OR (50 m³) during 60 standardized operations for groin hernia including mesh implantation. The additional ventilation was used in 50 of the operations. The LAF passed from the foot-end of the OR table over the instrument and surgical area. Strict hygiene OR procedures including tightly woven and non-woven OR clothing were used. Sedimentation rates were recorded at the level of the patients' chests (N=60) (i.e. the air had passed the surgical team) and in the periphery of the OR. In addition, bacterial air contamination was studied above the patients' chests in all 10 operations without the additional LAF and in 12 with the LAF. The screen reduced the mean counts of sedimenting bacteria (cfu/m²/h) on the patients' chests from 775 without the screen to 355 (P=0.0003). The screen also reduced the mean air counts of bacteria (cfu/m³) above the patients' chests from 27 to 9 (P=0.0001). No significant differences in mean sedimentation rates (cfu/m²/h) existed in the periphery of the OR, where 628 without and 574 with the screen were recorded. During the follow-up period of six months no surgical site infections were detected. In conclusion, when the mobile LAF screen was added to conventional OR ventilation, the counts of aerobic airborne and sedimenting bacteria-carrying particles downstream of the surgical team were reduced to the levels achieved with complete ultra-clean LAF OR ventilation (operating box).

  5. Estimates of Lightning NOx Production Based on OMI NO2 Observations Over the Gulf of Mexico

    NASA Technical Reports Server (NTRS)

    Pickering, Kenneth E.; Bucsela, Eric; Allen, Dale; Ring, Allison; Holzworth, Robert; Krotkov, Nickolay

    2016-01-01

    We evaluate nitrogen oxides (NOx = NO + NO2) production from lightning over the Gulf of Mexico region using data from the Ozone Monitoring Instrument (OMI) aboard NASA's Aura satellite along with detection efficiency-adjusted lightning data from the World Wide Lightning Location Network (WWLLN). A special algorithm was developed to retrieve the lightning NOx (LNOx) signal from OMI. The algorithm in its general form takes the total slant column NO2 from OMI, removes the stratospheric contribution and tropospheric background, and includes an air mass factor appropriate for the profile of lightning NOx to convert the slant column LNO2 to a vertical column of LNOx. WWLLN flashes are totaled over a period of 3 h prior to OMI overpass, which is the time an air parcel is expected to remain in a 1° x 1° grid box. The analysis is conducted for grid cells containing flash counts greater than a threshold value of 3000 flashes, which yields an expected LNOx signal greater than the background. Pixels with cloud radiance fraction greater than a criterion value (0.9), indicative of highly reflective clouds, are used. Results for the summer seasons during 2007-2011 yield a mean LNOx production of approximately 80 ± 45 mol per flash over the region for the two analysis methods, after accounting for biases and uncertainties in the estimation method. These results are consistent with literature estimates and are more robust than many prior estimates due to the large number of storms considered, but are sensitive to several substantial sources of uncertainty.

  6. Texture segmentation of non-cooperative spacecrafts images based on wavelet and fractal dimension

    NASA Astrophysics Data System (ADS)

    Wu, Kanzhi; Yue, Xiaokui

    2011-06-01

    With the increase in on-orbit manipulations and space conflicts, missions such as tracking and capturing target spacecraft arise. Unlike cooperative spacecraft, fixing beacons or any other markers on the targets is impossible. Because the shape and geometric features of a non-cooperative spacecraft are unknown, in order to localize the target and estimate its attitude we need to segment the target image and recognize the target against the background. The data volume and errors in subsequent procedures such as feature extraction and matching can also be reduced. Multi-resolution analysis in wavelet theory mirrors human recognition of images from low resolution to high resolution. In addition, the spacecraft is the only man-made object in the image compared to the natural background, so differences will certainly be observed between the fractal dimensions of the target and the background. Combining the wavelet transform and the fractal dimension, in this paper we propose a new segmentation algorithm for images that contain complicated backgrounds such as deep space and planet surfaces. First, a Daubechies wavelet basis is applied to decompose the image along both the x axis and the y axis, thus obtaining four sub-images. Then, the fractal dimensions of the four sub-images are calculated using different methods; after analyzing the results, we choose differential box counting on the low-resolution sub-image as the criterion to segment the texture that shows the greatest divergence between sub-images. This paper also presents experimental results obtained with the algorithm above. It is demonstrated that an accurate texture segmentation result can be obtained using the proposed technique.
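
    The two building blocks, a one-level 2-D Daubechies decomposition and a differential box-counting (DBC) estimate on a grayscale sub-image, can be sketched with PyWavelets and NumPy. This is a generic illustration of the combination, not the authors' full segmentation pipeline; the wavelet, scales, and test image are assumptions.

      import numpy as np
      import pywt

      def differential_box_counting(gray, sizes=(2, 4, 8, 16, 32)):
          """Differential box counting: in each s x s block, count the stacked boxes of
          height h needed to span the local gray-level range; fit log N(s) vs log(1/s)."""
          gray = gray.astype(float)
          g_range = gray.max() - gray.min() + 1e-9
          counts = []
          for s in sizes:
              h = g_range * s / max(gray.shape)                 # box height at this scale
              hh = (gray.shape[0] // s) * s
              ww = (gray.shape[1] // s) * s
              blocks = gray[:hh, :ww].reshape(hh // s, s, ww // s, s)
              top = np.ceil(blocks.max(axis=(1, 3)) / h)
              bottom = np.ceil(blocks.min(axis=(1, 3)) / h)
              counts.append(np.sum(top - bottom + 1))
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      # One-level Daubechies decomposition, then DBC on each sub-image
      image = np.random.rand(256, 256) * 255
      cA, (cH, cV, cD) = pywt.dwt2(image, "db4")
      for name, sub in [("approx", cA), ("horizontal", cH), ("vertical", cV), ("diagonal", cD)]:
          print(name, differential_box_counting(sub))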

  7. Age- and sex-related variations in the brain white matter fractal dimension throughout adulthood: an MRI study.

    PubMed

    Farahibozorg, S; Hashemi-Golpayegani, S M; Ashburner, J

    2015-03-01

    To observe age- and sex-related differences in the complexity of the global and hemispheric white matter (WM) throughout adulthood by means of fractal dimension (FD). A box-counting algorithm was used to extract FD from the WM magnetic resonance images of 209 healthy adults from three structural layers, including general (gFD), skeleton (sFD), and boundaries (bFD). Model selection algorithms and statistical analyses, respectively, were used to examine the patterns and significance of the changes. gFD and sFD showed inverse U-shape patterns with aging, with a slighter slope of increase from young to mid-age and a steeper decrease to the old. bFD was less affected by age. Sex differences were evident, specifically in gFD and sFD, with men showing higher FDs. Age × sex interaction was significant mainly in the hemispheric analysis, with men undergoing sharper age-related changes. After adjusting for the volume effect, age-related results remained approximately the same, but sex differences changed in most of the features, with women indicating higher values, specifically in the left hemisphere and boundaries. Right hemisphere was still more complex in men. This study is the first that investigates the WM FD spanning adulthood, treating age both as a continuous and categorical variable. We found positive correlations between FD and volume, and our results show similarities with those investigating small-world properties of the brain networks, as well as those of functional complexity and WM integrity. These suggest that FD could yield a highly compact description of the structural changes and also might inform us about functional and cognitive variations.

  8. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 meters GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (Blue, Green, Red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel re-examination, and a cross-band filter method are applied in sequence to determine the cloud statistics. For post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistics are first determined via pre-processing analysis, and the correctness of the cloud statistics for the different spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
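
    Otsu's method, the thresholding step that performed best here, picks the gray level that maximizes the between-class variance of the histogram. A self-contained sketch applied to a cloud-fraction estimate, using synthetic data rather than Formosat-2 imagery:

      import numpy as np

      def otsu_threshold(gray, bins=256):
          """Return the threshold that maximizes between-class variance (Otsu's method)."""
          hist, edges = np.histogram(gray, bins=bins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                                     # class-0 probability
          w1 = 1.0 - w0
          cum_mean = np.cumsum(p * centers)
          mu0 = cum_mean / np.where(w0 > 0, w0, 1)
          mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
          between = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
          return centers[np.argmax(between)]

      # Usage sketch: bright "cloudy" pixels over a darker surface background
      scene = np.concatenate([np.random.normal(60, 10, 80000),    # surface
                              np.random.normal(200, 15, 20000)])  # cloud
      t = otsu_threshold(scene)
      print("threshold:", t, "cloud fraction:", np.mean(scene > t))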

  9. Hessian-LoG filtering for enhancement and detection of photoreceptor cells in adaptive optics retinal images.

    PubMed

    Lazareva, Anfisa; Liatsis, Panos; Rauscher, Franziska G

    2016-01-01

    Automated analysis of retinal images plays a vital role in the examination, diagnosis, and prognosis of healthy and pathological retinas. Retinal disorders and the associated visual loss can be interpreted via quantitative correlations, based on measurements of photoreceptor loss. Therefore, it is important to develop reliable tools for identification of photoreceptor cells. In this paper, an automated algorithm is proposed, based on the use of the Hessian-Laplacian of Gaussian filter, which allows enhancement and detection of photoreceptor cells. The performance of the proposed technique is evaluated on both synthetic and high-resolution retinal images, in terms of packing density. The results on the synthetic data were compared against ground truth as well as cone counts obtained by the Li and Roorda algorithm. For the synthetic datasets, our method showed an average detection accuracy of 98.8%, compared to 93.9% for the Li and Roorda approach. The packing density estimates calculated on the retinal datasets were validated against manual counts and the results obtained by a proprietary software from Imagine Eyes and the Li and Roorda algorithm. Among the tested methods, the proposed approach showed the closest agreement with manual counting.
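
    The enhancement-and-detection idea, Laplacian-of-Gaussian filtering tuned to the photoreceptor size followed by local-maximum picking, can be sketched with SciPy. The filter scale and the synthetic mosaic are assumptions, and the Hessian-based weighting of the full method is omitted here.

      import numpy as np
      from scipy import ndimage

      def detect_blobs(image, sigma=3.0, min_response=0.02):
          """Enhance bright blobs with a (negated) Laplacian-of-Gaussian filter and
          keep local maxima above a response threshold."""
          log = -ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
          local_max = ndimage.maximum_filter(log, size=int(3 * sigma)) == log
          return np.argwhere(local_max & (log > min_response))   # (row, col) of detected cells

      # Usage sketch: a synthetic mosaic of Gaussian "photoreceptors"
      img = np.zeros((200, 200))
      yy, xx = np.mgrid[0:200, 0:200]
      for cy in range(20, 200, 25):
          for cx in range(20, 200, 25):
              img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))
      centers = detect_blobs(img)
      print("detected cells:", len(centers), "density:", len(centers) / (200 * 200))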

  10. The BRL-CAD Package: An Overview

    DTIC Science & Technology

    2013-04-01

    many different display devices to be supported. The types of primitives supported include: arbitrary boxes of up to eight vertices, ellipsoids...file size. Many algorithms simply run until all of the data is gone, and some don't even care about scan lines at all. 5.2. Format Conversion Several

  11. Monitoring Oilfield Operations and GHG Emissions Sources Using Object-based Image Analysis of High Resolution Spatial Imagery

    NASA Astrophysics Data System (ADS)

    Englander, J. G.; Brodrick, P. G.; Brandt, A. R.

    2015-12-01

    Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, or inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities using high-resolution spatial imagery. This method can help estimate venting and fugitive emissions sources from regions where reporting of oilfield equipment is incomplete or non-existent. This work will utilize high resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. This initial study examines an area of ~2,000 km² with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment including tanks, pumpjacks, and holding ponds.

  12. Energy Expenditure and Intensity of Active Video Games in Children and Adolescents.

    PubMed

    Canabrava, Karina L R; Faria, Fernanda R; Lima, Jorge R P de; Guedes, Dartagnan P; Amorim, Paulo R S

    2018-03-01

    This study aimed to compare the energy expenditure and intensity of active video games to that of treadmill walking in children and adolescents. Seventy-two boys and girls (aged 8-13 years) were recruited from local public schools. Energy expenditure and heart rate were measured during rest, during 3-km/hr, 4-km/hr, and 5-km/hr walks, and during active games (Adventure, Boxing I, Boxing II, and Dance). During walking and active games, we also assessed physical activity using an accelerometer. The energy expenditure of the active games Adventure, Boxing I, Boxing II, and Dance was similar to that of treadmill walking at 5 km/hr in boys and girls. Heart rate was significantly higher for the game Adventure compared with walking at 3 km/hr, 4 km/hr, and 5 km/hr and the game Dance in both genders. The heart rate of girls during the games Adventure and Dance was significantly higher compared with boys. There was a statistically significant difference (p < .05, with an effect size ranging from 0.40 to 3.54) in the counts·min⁻¹, measured through accelerometry, between activities. XBOX 360 Kinect games provide energy expenditure and physical activity of moderate intensity for both genders. The use of active video games can be an interesting alternative to increase physical activity levels.

  13. SU-C-BRA-04: Automated Segmentation of Head-And-Neck CT Images for Radiotherapy Treatment Planning Via Multi-Atlas Machine Learning (MAML)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm utilizes normalized mutual information as the similarity metric and affine registration combined with multiresolution B-Spline registration, and then fuses the atlases using the label fusion strategy via Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel. It describes the relative position of each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. Then, we adopt the random forest (RF) classifier in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method had an improvement from 0.79 to 0.83 for brainstem and 0.11 to 0.52 for optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  14. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.

  15. High Resolution Monthly Oceanic Rainfall Based on Microwave Brightness Temperature Histograms

    NASA Astrophysics Data System (ADS)

    Shin, D.; Chiu, L. S.

    2005-12-01

    A statistical emission-based passive microwave retrieval algorithm has been developed by Wilheit, Chang and Chiu (1991) to estimate space/time oceanic rainfall. The algorithm has been applied to Special Sensor Microwave Imager (SSM/I) data taken on board the Defense Meteorological Satellite Program (DMSP) satellites to provide monthly oceanic rainfall over 2.5°x2.5° and 5°x5° latitude-longitude boxes by the Global Precipitation Climatology Project-Polar Satellite Precipitation Data Center (GPCP-PSPDC, URL: http://gpcp-pspdc.gmu.edu/) as part of NASA's contribution to the GPCP. The algorithm has been modified and applied to the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) data to produce a TRMM Level 3 standard product (3A11) over 5°x5° latitude/longitude boxes. In this study, the algorithm code is modified to retrieve rain rates at 2.5°x2.5° and 1°x1° resolutions for TMI. Two months of TMI data have been tested and the results compared with the monthly mean rain rates derived from the TRMM Level 2 TMI rain profile algorithm (2A12) and the original 5°x5° data from 3A11. The rainfall pattern is very similar to the monthly average of 2A12, although the intensity is slightly higher. Details in the rain pattern, such as rain shadow due to island blocking, which were not discernible from the low resolution products, are now easily discernible. The spatial average of the higher resolution rain rates is in general slightly higher than that of the lower resolution rain rates, although a Student-t test shows no significant difference. This high resolution product will be useful for the calibration of IR rain estimates for the production of the GPCP merged rain product.

  16. Biclustering as a method for RNA local multiple sequence alignment.

    PubMed

    Wang, Shu; Gutell, Robin R; Miranker, Daniel P

    2007-12-15

    Biclustering is a clustering method that simultaneously clusters both the domain and range of a relation. A challenge in multiple sequence alignment (MSA) is that the alignment of sequences is often intended to reveal groups of conserved functional subsequences. Simultaneously, the grouping of the sequences can impact the alignment; precisely the kind of dual situation biclustering is intended to address. We define a representation of the MSA problem enabling the application of biclustering algorithms. We develop a computer program for local MSA, BlockMSA, that combines biclustering with divide-and-conquer. BlockMSA simultaneously finds groups of similar sequences and locally aligns subsequences within them. Further alignment is accomplished by dividing both the set of sequences and their contents. The net result is both a multiple sequence alignment and a hierarchical clustering of the sequences. BlockMSA was tested on the subsets of the BRAliBase 2.1 benchmark suite that display high variability and on an extension to that suite to larger problem sizes. Also, alignments of two large datasets of current biological interest, T box sequences and Group IC1 introns, were evaluated. The results were compared with alignments computed by the ClustalW, MAFFT, MUSCLE and PROBCONS alignment programs using Sum of Pairs (SPS) and Consensus Count. Results for the benchmark suite are sensitive to problem size. On problems of 15 or greater sequences, BlockMSA is consistently the best. On none of the problems in the test suite are there appreciable differences in scores among BlockMSA, MAFFT and PROBCONS. On the T box sequences, BlockMSA does the most faithful job of reproducing known annotations. MAFFT and PROBCONS do not. On the intron sequences, BlockMSA, MAFFT and MUSCLE are comparable at identifying conserved regions. BlockMSA is implemented in Java. Source code and supplementary datasets are available at http://aug.csres.utexas.edu/msa/

  17. Bringing Algorithms to Life: Cooperative Computing Activities Using Students as Processors.

    ERIC Educational Resources Information Center

    Bachelis, Gregory F.; And Others

    1994-01-01

    Presents cooperative computing activities in which each student plays the role of a switch or processor and acts out algorithms. Includes binary counting, finding the smallest card in a deck, sorting by selection and merging, adding and multiplying large numbers, and sieving for primes. (16 references) (Author/MKR)

  18. The use of portable 2D echocardiography and 'frame-based' bubble counting as a tool to evaluate diving decompression stress.

    PubMed

    Germonpré, Peter; Papadopoulou, Virginie; Hemelryck, Walter; Obeid, Georges; Lafère, Pierre; Eckersley, Robert J; Tang, Meng-Xing; Balestra, Costantino

    2014-03-01

    'Decompression stress' is commonly evaluated by scoring circulating bubble numbers post dive using Doppler or cardiac echography. This information may be used to develop safer decompression algorithms, assuming that the lower the numbers of venous gas emboli (VGE) observed post dive, the lower the statistical risk of decompression sickness (DCS). Current echocardiographic evaluation of VGE, using the Eftedal and Brubakk method, has some disadvantages as it is less well suited for large-scale evaluation of recreational diving profiles. We propose and validate a new 'frame-based' VGE-counting method which offers a continuous scale of measurement. Nine 'raters' of varying familiarity with echocardiography were asked to grade 20 echocardiograph recordings using both the Eftedal and Brubakk grading and the new 'frame-based' counting method. They were also asked to count the number of bubbles in 50 still-frame images, some of which were randomly repeated. A Wilcoxon Spearman ρ calculation was used to assess test-retest reliability of each rater for the repeated still frames. For the video images, weighted kappa statistics, with linear and quadratic weightings, were calculated to measure agreement between raters for the Eftedal and Brubakk method. Bland-Altman plots and intra-class correlation coefficients were used to measure agreement between raters for the frame-based counting method. Frame-based counting showed a better inter-rater agreement than the Eftedal and Brubakk grading, even with relatively inexperienced assessors, and has good intra- and inter-rater reliability. Frame-based bubble counting could be used to evaluate post-dive decompression stress, and offers possibilities for computer-automated algorithms to allow near-real-time counting.

  19. MDTS: automatic complex materials design using Monte Carlo tree search.

    PubMed

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-01-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the game of computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously in various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
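
    The MDTS record above describes Monte Carlo tree search applied to black-box combinatorial materials design. The following is a minimal, self-contained sketch of that general idea for a toy binary-alloy assignment problem; the black_box objective and all parameters are invented placeholders, and this is not the MDTS library or its API.

```python
# Hedged sketch of Monte Carlo tree search for a black-box combinatorial
# design problem: assign one of two species (e.g. Si=0, Ge=1) to each of
# N sites to maximise a black-box objective. `black_box` is a stand-in for
# an expensive property calculation.
import math
import random

N_SITES = 10

def black_box(config):
    # Placeholder objective: reward alternating patterns (purely illustrative).
    return sum(1 for a, b in zip(config, config[1:]) if a != b)

class Node:
    def __init__(self, prefix):
        self.prefix = prefix          # decisions made so far (tuple of 0/1)
        self.children = {}            # next decision -> Node
        self.visits = 0
        self.value_sum = 0.0

    def ucb_child(self, c=1.4):
        # Upper Confidence Bound selection over the possible next choices.
        def score(ch):
            if ch.visits == 0:
                return float('inf')
            return ch.value_sum / ch.visits + c * math.sqrt(
                math.log(self.visits) / ch.visits)
        return max(self.children.values(), key=score)

def rollout(prefix):
    # Complete the configuration randomly and evaluate the black box.
    rest = [random.randint(0, 1) for _ in range(N_SITES - len(prefix))]
    cfg = list(prefix) + rest
    return black_box(cfg), cfg

def mcts(iterations=2000):
    root = Node(())
    best, best_cfg = -float('inf'), None
    for _ in range(iterations):
        node, path = root, [root]
        # Selection / expansion down to an unexpanded node or a full config.
        while len(node.prefix) < N_SITES:
            if len(node.children) < 2:
                choice = random.choice([c for c in (0, 1) if c not in node.children])
                child = Node(node.prefix + (choice,))
                node.children[choice] = child
                path.append(child)
                node = child
                break
            node = node.ucb_child()
            path.append(node)
        value, cfg = rollout(node.prefix)
        if value > best:
            best, best_cfg = value, cfg
        # Backpropagation of the rollout value along the path.
        for n in path:
            n.visits += 1
            n.value_sum += value
    return best, best_cfg

print(mcts())
```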

  20. MDTS: automatic complex materials design using Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Dieb, Thaer M.; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji

    2017-12-01

    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the game of computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously in various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.

  1. Proceedings of USC (University of Southern California) Workshop on VLSI (Very Large Scale Integration) & Modern Signal Processing, held at Los Angeles, California on 1-3 November 1982

    DTIC Science & Technology

    1983-11-15

    Includes "Problem-Oriented Specification of Concurrent Algorithms" by Armin B. Cremers (Computer Science Department, University of Dortmund, P.O. Box 50 05 00, D-4600 Dortmund 50, Fed. Rep. Germany) and Thomas N. Hibbard (JPL, Pasadena, CA), preliminary version, September 1982, and "An Overview of Signal Representations in...".

  2. Heavy quark radiation in NLO+PS POWHEG generators

    NASA Astrophysics Data System (ADS)

    Buonocore, Luca; Nason, Paolo; Tramontano, Francesco

    2018-02-01

    In this paper we deal with radiation from heavy quarks in the context of next-to-leading order calculations matched to parton shower generators. A new algorithm for radiation from massive quarks is presented that has considerable advantages over the one previously employed. We implement the algorithm in the framework of the POWHEG-BOX, and compare it with the previous one in the case of the hvq generator for bottom production in hadronic collisions, and in the case of the bb4l generator for top production and decay.

  3. Heuristic Algorithms for Solving Two Dimensional Loading Problems.

    DTIC Science & Technology

    1981-03-01

    Heuristic algorithms for solving... Consider the following problem: allocate a set of 'N' boxes, each having a specified length, width and height, to a pallet of length 'L' and width 'W'... the boxes and then select the best solution. Since these heuristics are essentially a trial-and-error procedure, their formulas become very...

  4. Qutrit witness from the Grothendieck constant of order four

    NASA Astrophysics Data System (ADS)

    Diviánszky, Péter; Bene, Erika; Vértesi, Tamás

    2017-07-01

    In this paper, we prove that KG(3 )

  5. High-dimensional cluster analysis with the Masked EM Algorithm

    PubMed Central

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  6. A mower detector to judge soil sorting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramlitt, E.T.; Johnson, N.R.

    1995-12-31

    Thermo Nuclear Services (TNS) has developed a mower detector as an inexpensive and fast means for deciding the potential value of soil sorting for cleanup. It is a shielded detector box on wheels pushed over the ground (as a person mows grass) at 30 ft/min, with gamma-ray counts recorded every 0.25 sec. It mirrors detection by the TNS transportable sorter system, which conveys soil at 30 ft/min and toggles a gate to send soil on separate paths based on counts. The mower detector shows whether contamination is variable and suitable for sorting, and, by unique calibration sources, it indicates detection sensitivity. The mower detector has been used to characterize some soil at Department of Energy sites in New Jersey and South Carolina.

  7. Methodology for creating dedicated machine and algorithm on sunflower counting

    NASA Astrophysics Data System (ADS)

    Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand

    2007-09-01

    In order to sell grain lots in European countries, seed industries need a government certification. This certification requires purity testing, seed counting in order to quantify specified seed species and other impurities in lots, and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently these different analyses are still performed manually by skilled operators. Previous works have already shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and several identification algorithms have been presented. Until now, most of the work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine intended to identify and sort seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and a PC hosting the GUI (graphical user interface) of the system. Its operation relies on stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach, in order to simulate the whole vision chain (image acquisition, feature extraction, identification) under the Matlab environment. In order to port this processing to dedicated hardware, all these algorithms were developed without the use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
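
    The record above mentions an identification algorithm based on distances between groups of seed feature vectors. A minimal nearest-centroid sketch of that idea follows; the species names and feature values are invented for illustration and are not from the described machine.

```python
# Hedged sketch of a "distance between groups" identifier: each seed species
# is represented by the centroid of its training feature vectors, and an
# unknown seed is assigned to the nearest centroid. Feature values here are
# made up; the real machine uses ~110 morphology/colour/texture features.
import numpy as np

training = {
    "sunflower": np.array([[9.8, 4.1, 0.32], [10.2, 4.3, 0.35], [9.5, 4.0, 0.30]]),
    "weed_seed": np.array([[3.1, 2.0, 0.60], [2.9, 1.8, 0.58], [3.3, 2.2, 0.62]]),
}

# Group centroids (mean feature vector per species).
centroids = {name: feats.mean(axis=0) for name, feats in training.items()}

def identify(seed_features):
    """Assign a feature vector to the species with the closest centroid."""
    return min(centroids, key=lambda name: np.linalg.norm(seed_features - centroids[name]))

print(identify(np.array([9.9, 4.2, 0.33])))   # -> "sunflower"
print(identify(np.array([3.0, 1.9, 0.59])))   # -> "weed_seed"
```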

  8. Surface-Height Determination of Crevassed Glaciers-Mathematical Principles of an Autoadaptive Density-Dimension Algorithm and Validation Using ICESat-2 Simulator (SIMPL) Data

    NASA Technical Reports Server (NTRS)

    Herzfeld, Ute C.; Trantow, Thomas M.; Harding, David; Dabney, Philip W.

    2017-01-01

    Glacial acceleration is a main source of uncertainty in sea-level-change assessment. Measurement of ice-surface heights with a spatial and temporal resolution that not only allows elevation-change calculation, but also captures ice-surface morphology and its changes, is required to aid in investigations of the geophysical processes associated with glacial acceleration. The Advanced Topographic Laser Altimeter System aboard NASA's future ICESat-2 Mission (launch 2017) will implement multibeam micropulse photon-counting lidar altimetry aimed at measuring ice-surface heights at 0.7-m along-track spacing. The instrument is designed to resolve spatial and temporal variability of rapidly changing glaciers and ice sheets and the Arctic sea ice. The new technology requires the development of a new mathematical algorithm for the retrieval of height information. We introduce the density-dimension algorithm (DDA), which utilizes radial basis functions to calculate a weighted density as a form of data aggregation in the photon cloud and considers density an additional dimension as an aid in auto-adaptive threshold determination. The auto-adaptive capability of the algorithm is necessary to separate returns from noise and signal photons under changing environmental conditions. The algorithm is evaluated using data collected with an ICESat-2 simulator instrument, the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL), over the heavily crevassed Giesecke Braer in northwestern Greenland in summer 2015. Results demonstrate that ICESat-2 may be expected to provide ice-surface height measurements over crevassed glaciers and other complex ice surfaces. The DDA is generally applicable for the analysis of airborne and spaceborne micropulse photon-counting lidar data over complex and simple surfaces.
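
    The DDA record above describes a radial-basis-function weighted photon density used with an auto-adaptive threshold. The sketch below illustrates that general idea on a synthetic photon cloud; the kernel widths and the quantile-based threshold are assumptions for illustration, not the published DDA parameters.

```python
# Hedged sketch of the density-as-extra-dimension idea: each photon in an
# (along-track distance, height) cloud gets a radial-basis-function weighted
# density, and photons above an adaptive threshold are kept as signal.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic photon cloud: a sloping surface return plus uniform noise.
x_signal = np.sort(rng.uniform(0, 100, 400))
surface = np.column_stack([x_signal, 0.05 * x_signal + rng.normal(0, 0.3, 400)])
noise = np.column_stack([rng.uniform(0, 100, 300), rng.uniform(-20, 25, 300)])
photons = np.vstack([surface, noise])

def rbf_density(points, sigma_x=2.0, sigma_z=1.0):
    """Anisotropic Gaussian RBF density for every photon."""
    dx = (points[:, None, 0] - points[None, :, 0]) / sigma_x
    dz = (points[:, None, 1] - points[None, :, 1]) / sigma_z
    weights = np.exp(-0.5 * (dx**2 + dz**2))
    return weights.sum(axis=1)

density = rbf_density(photons)
# Auto-adaptive threshold (here simply a quantile of the density distribution).
signal_mask = density > np.quantile(density, 0.6)
print(f"kept {signal_mask.sum()} of {len(photons)} photons as signal")
```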

  9. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.

  10. Locator-Checker-Scaler Object Tracking Using Spatially Ordered and Weighted Patch Descriptor.

    PubMed

    Kim, Han-Ul; Kim, Chang-Su

    2017-08-01

    In this paper, we propose a simple yet effective object descriptor and a novel tracking algorithm to track a target object accurately. For the object description, we divide the bounding box of a target object into multiple patches and describe them with color and gradient histograms. Then, we determine the foreground weight of each patch to alleviate the impacts of background information in the bounding box. To this end, we perform random walk with restart (RWR) simulation. We then concatenate the weighted patch descriptors to yield the spatially ordered and weighted patch (SOWP) descriptor. For the object tracking, we incorporate the proposed SOWP descriptor into a novel tracking algorithm, which has three components: locator, checker, and scaler (LCS). The locator and the scaler estimate the center location and the size of a target, respectively. The checker determines whether it is safe to adjust the target scale in a current frame. These three components cooperate with one another to achieve robust tracking. Experimental results demonstrate that the proposed LCS tracker achieves excellent performance on recent benchmarks.
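
    The SOWP record above weights patches by random walk with restart (RWR). The sketch below shows the basic RWR iteration on a toy patch-similarity graph; the similarity matrix and restart vector are invented, whereas the real tracker builds them from image patches.

```python
# Hedged sketch of random walk with restart (RWR) for patch weighting: patches
# form a graph, the walk restarts at patches assumed to be foreground, and the
# stationary probabilities act as foreground weights.
import numpy as np

def rwr_weights(similarity, restart, alpha=0.2, n_iter=100):
    """Iterate p <- (1 - alpha) * W p + alpha * r with a column-stochastic W."""
    W = similarity / similarity.sum(axis=0, keepdims=True)
    p = restart.copy()
    for _ in range(n_iter):
        p = (1 - alpha) * W @ p + alpha * restart
    return p

# Toy example: 4 patches, patches 1 and 2 are mutually similar (foreground-like).
similarity = np.array([
    [1.0, 0.2, 0.1, 0.3],
    [0.2, 1.0, 0.9, 0.1],
    [0.1, 0.9, 1.0, 0.1],
    [0.3, 0.1, 0.1, 1.0],
])
restart = np.array([0.0, 0.5, 0.5, 0.0])   # assume central patches are foreground
print(rwr_weights(similarity, restart))
```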

  11. A novel handheld device for use in remote patient monitoring of heart failure patients--design and preliminary validation on healthy subjects.

    PubMed

    Pollonini, Luca; Rajan, Nithin O; Xu, Shuai; Madala, Sridhar; Dacso, Clifford C

    2012-04-01

    Remote patient monitoring (RPM) holds great promise for reducing the burden of congestive heart failure (CHF). Improved sensor technology and effective predictive algorithms can anticipate sudden decompensation events. Enhanced telemonitoring systems would promote patient independence and facilitate communication between patients and their physicians. We report the development of a novel hand-held device, called Blue Box, capable of collecting and wirelessly transmitting key cardiac parameters derived from three integrated biosensors: a 2-lead electrocardiogram (ECG), photoplethysmography and bioelectrical impedance (bioimpedance). Blue Box measurements include time intervals between consecutive ECG R-waves (RR interval), the time duration of the ECG complex formed by the Q, R and S waves (QRS duration), bioimpedance, heart rate and systolic time intervals (STIs). In this study, we recruited 24 healthy subjects to collect several parameters measured by Blue Box and assess their value in correlating with cardiac output measured with Echo-Doppler. Linear correlation between the heart rate measured with Blue Box and cardiac output from Echo-Doppler had a group average correlation coefficient of 0.80. We found that STIs did not improve the model significantly. However, STIs did correlate inversely with increasing workloads.

  12. The eastern box turtle at the Patuxent Wildlife Research Center 1940s to the present: another view

    USGS Publications Warehouse

    Henry, P.F.P.

    2003-01-01

    Several long-term mark recapture studies have been conducted on box turtles (Terrapene c. carolina) providing valuable information on life span, basic demography, home range, and apparent effects of environmental changes on box turtle survival. One of the longest studied populations was first marked in 1942 on the Patuxent Wildlife Research Center in Maryland, and has been surveyed every 10 years until 1995. The age structure and gender ratio of these turtles in the field may support differential habitat use and survival estimates. A few of the turtles first marked during the 1945 study are still observed throughout the Center. Data collected from turtles marked in the more upland habitats during 1985-2002 indicate a younger age class distribution than that observed in the more protected biota of the bottomland floodplain study area. Extrapolating ages of turtles described in data collected throughout the long-term study, it was estimated that turtles, both males and females, can show reproduction-intent behaviors at ages greater than 54 years old. It is suggested that count data collection be continued on a more frequent cycle, extending over a larger part of the Center.

  13. The technological obsolescence of the Brazilian electronic ballot box.

    PubMed

    Camargo, Carlos Rogério; Faust, Richard; Merino, Eugênio; Stefani, Clarissa

    2012-01-01

    The electronic ballot box has played a significant role in the consolidation of the Brazilian political process. It has enabled the extinction of paper ballots as the medium for the elector's vote and for vote-counting processes. It is also widely known that election automation has contributed decisively to the legitimization of Brazilian democracy, removing doubts about the winning candidates. In 1995, when the project was conceived, it represented a compromise solution balancing technical efficiency against cost. However, this architecture currently limits ergonomic enhancements to the device's operation, transportation, maintenance and storage. Devices of reduced dimensions based on a novel computational architecture, namely tablet computers, are now available on the market and emphasize usability, autonomy, portability, security and low power consumption. Therefore, the proposal under discussion is the replacement of the current electronic ballot boxes with tablet-based devices to improve the ergonomic aspects of the Brazilian voting process. These devices offer a plethora of integrated features (e.g., capacitive touchscreen, speakers, microphone) that enable highly usable and simple user interfaces, in addition to enhancing the voting process's security mechanisms. Finally, their operating systems' features allow for the development of highly secure applications, suitable to the requirements of a voting process.

  14. Automatic image analysis and spot classification for detection of fruit fly infestation in hyperspectral images of mangoes

    USDA-ARS?s Scientific Manuscript database

    An algorithm has been developed to identify spots generated in hyperspectral images of mangoes infested with fruit fly larvae. The algorithm incorporates background removal, application of a Gaussian blur, thresholding, and particle count analysis to identify locations of infestations. Each of the f...
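
    The record above outlines a pipeline of background removal, Gaussian blur, thresholding and particle counting. A minimal single-band sketch of such a pipeline is given below; the synthetic image, threshold and filter sizes are illustrative assumptions rather than the published settings.

```python
# Hedged sketch of the described spot-detection pipeline on a synthetic
# single-band image: subtract a background estimate, blur, threshold, and
# count connected particles.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, (200, 200))           # background + noise
for cy, cx in [(50, 60), (120, 150), (170, 40)]:    # three bright "spots"
    yy, xx = np.ogrid[:200, :200]
    image += 0.5 * np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 3.0**2))

background = ndimage.median_filter(image, size=31)  # coarse background estimate
foreground = image - background
blurred = ndimage.gaussian_filter(foreground, sigma=2.0)
binary = blurred > 0.1                              # illustrative threshold

labels, n_spots = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, range(1, n_spots + 1))
n_particles = int((sizes >= 5).sum())               # discard tiny specks
print(f"detected {n_particles} candidate infestation spots")
```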

  15. SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps

    NASA Astrophysics Data System (ADS)

    Xu, Xiwei; Zhang, Changhai

    2013-12-01

    Some sequential pattern mining algorithms generate too many candidate sequences and thereby increase the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to solve the problem of mining sequential patterns in large databases. Our method differs from previous work on mining sequential patterns: the main difference is that the database of sequential patterns is represented by bitmaps, and a simplified bitmap structure is presented first. The algorithm generates candidate sequences by SE (Sequence Extension) and IE (Item Extension), and then obtains all frequent sequences by comparing the original bitmap with the extended-item bitmap. This method simplifies the problem of mining sequential patterns and avoids the high processing cost of support counting. Both theory and experiments indicate that the performance of SPMBR is strong for large transaction databases, that the memory required for storing temporary data during mining is much smaller, and that all sequential patterns can be mined feasibly.
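
    The SPMBR record above represents the sequence database as bitmaps so that support counting reduces to bitwise operations. The sketch below illustrates only that representation idea (bitwise AND plus a popcount); it does not implement SPMBR's SE/IE extension logic.

```python
# Hedged sketch of bitmap-based support counting: each item maps to a bitmap
# over transactions (bit i set = item occurs in transaction i). The support of
# a co-occurrence is the popcount of the AND of the bitmaps.
transactions = [
    {"a", "b", "c"},
    {"a", "c"},
    {"b", "c", "d"},
    {"a", "b", "c", "d"},
]

def build_bitmaps(transactions):
    bitmaps = {}
    for i, t in enumerate(transactions):
        for item in t:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << i)
    return bitmaps

def support(bitmaps, items):
    """Number of transactions containing all the given items."""
    combined = bitmaps.get(items[0], 0)
    for item in items[1:]:
        combined &= bitmaps.get(item, 0)
    return bin(combined).count("1")

bitmaps = build_bitmaps(transactions)
print(support(bitmaps, ["a", "c"]))   # -> 3
print(support(bitmaps, ["b", "d"]))   # -> 2
```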

  16. Enumerating Substituted Benzene Isomers of Tree-Like Chemical Graphs.

    PubMed

    Li, Jinghui; Nagamochi, Hiroshi; Akutsu, Tatsuya

    2018-01-01

    Enumeration of chemical structures is useful for drug design, which is one of the main targets of computational biology and bioinformatics. A chemical graph with no other cycles than benzene rings is called tree-like, and becomes a tree possibly with multiple edges if we contract each benzene ring into a single virtual atom of valence 6. All tree-like chemical graphs with a given tree representation T are called the substituted benzene isomers of T. When we replace each virtual atom in T with a benzene ring to obtain a substituted benzene isomer, distinct isomers of T are caused by the difference in arrangements of atom groups around a benzene ring. In this paper, we propose an efficient algorithm that enumerates all substituted benzene isomers of a given tree representation T. Our algorithm first counts the number of all the isomers of the tree representation T by a dynamic programming method. To enumerate all the isomers, for each i, our algorithm then generates the i-th isomer by backtracking the counting phase of the dynamic programming. We also implemented our algorithm for computational experiments.

  17. A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1996-02-01

    The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.

  18. Beta/alpha continuous air monitor

    DOEpatents

    Becker, Gregory K.; Martz, Dowell E.

    1989-01-01

    A single deep layer silicon detector in combination with a microcomputer, recording both alpha and beta activity and the energy of each pulse, distinguishing energy peaks using a novel curve fitting technique to reduce the natural alpha counts in the energy region where plutonium and other transuranic alpha emitters are present, and using a novel algorithm to strip out radon daughter contribution to actual beta counts.

  19. Connectivity algorithm with depth first search (DFS) on simple graphs

    NASA Astrophysics Data System (ADS)

    Riansanti, O.; Ihsan, M.; Suhaimi, D.

    2018-01-01

    This paper discusses an algorithm to detect the connectivity of a simple graph using depth-first search (DFS). The DFS implementation in this paper differs from other research in how it counts visited vertices. The algorithm starts at a source vertex and visits its adjacent vertices, and theirs in turn, until no further vertices can be reached; s is then the number of vertices that remain unvisited. A simple graph is connected if s equals 0 and disconnected if s is greater than 0. The complexity of the algorithm is O(n^2).
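
    A minimal sketch of the connectivity test described above follows: run a DFS from a source vertex, count the unvisited vertices s, and report the graph as connected exactly when s = 0.

```python
# Hedged sketch of the described connectivity test: depth-first search from a
# source vertex, then count the unvisited vertices (s). The graph is connected
# if and only if s == 0.
def connectivity_by_dfs(adjacency):
    """adjacency: dict mapping vertex -> iterable of neighbouring vertices."""
    vertices = list(adjacency)
    visited = set()
    stack = [vertices[0]]            # arbitrary source vertex
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        stack.extend(adjacency[v])
    s = len(vertices) - len(visited) # number of vertices DFS never reached
    return s == 0, s

connected_graph    = {1: [2], 2: [1, 3], 3: [2]}
disconnected_graph = {1: [2], 2: [1], 3: [4], 4: [3]}
print(connectivity_by_dfs(connected_graph))     # (True, 0)
print(connectivity_by_dfs(disconnected_graph))  # (False, 2)
```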

  20. Simulation of the Predictive Control Algorithm for Container Crane Operation using Matlab Fuzzy Logic Tool Box

    NASA Technical Reports Server (NTRS)

    Richardson, Albert O.

    1997-01-01

    This research has investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Tool Box, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane are representative of many important systems, including robotic arm movements. The container crane that was investigated had a trolley motor and a hoist motor. The total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. Crane height was 17.8 meters. Trolley mass was 7500 kilograms. Load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables) and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It is found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.

  1. Quantum algorithms on Walsh transform and Hamming distance for Boolean functions

    NASA Astrophysics Data System (ADS)

    Xie, Zhengwei; Qiu, Daowen; Cai, Guangya

    2018-06-01

    Walsh spectrum or Walsh transform is an alternative description of Boolean functions. In this paper, we explore quantum algorithms to approximate the absolute value of the Walsh transform W_f at a single point z_0 (i.e., |W_f(z_0)|) for n-variable Boolean functions with probability at least 8/π^2, using O(1/(|W_f(z_0)| ε)) queries, promised that the accuracy is ε, while the best known classical algorithm requires O(2^n) queries. The Hamming distance between Boolean functions is used to study linearity testing and other important problems. We take advantage of the Walsh transform to calculate the Hamming distance between two n-variable Boolean functions f and g using O(1) queries in some cases. Then, we exploit another quantum algorithm which converts computing the Hamming distance between two Boolean functions to quantum amplitude estimation (i.e., approximate counting). If Ham(f,g) = t ≠ 0, we can approximately compute Ham(f,g) with probability at least 2/3 by combining our algorithm and the Approx-Count(f,ε) algorithm, using an expected number of Θ(√(N/(⌊εt⌋+1)) + √(t(N−t))/(⌊εt⌋+1)) queries, promised that the accuracy is ε. Moreover, our algorithm is optimal, while the exact query complexity for the above problem is Θ(N) and the query complexity with accuracy ε is O((1/ε^2)·N/(t+1)) in the classical setting, where N = 2^n. Finally, we present three exact quantum query algorithms for two promise problems on Hamming distance using O(1) queries, while any classical deterministic algorithm solving the problem uses Ω(2^n) queries.
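
    As a classical point of reference for the record above (not the quantum algorithm), the sketch below computes the full Walsh spectrum W_f of an n-variable Boolean function with the fast Walsh-Hadamard transform in O(N log N) operations, N = 2^n.

```python
# Hedged classical reference (not the quantum algorithm): the fast
# Walsh-Hadamard transform computes the full Walsh spectrum of a Boolean
# function, W_f(z) = sum_x (-1)^(f(x) XOR (x . z)), from its truth table.
def walsh_spectrum(truth_table):
    """truth_table: list of 0/1 values of f over all 2^n inputs (x as index)."""
    spectrum = [(-1) ** bit for bit in truth_table]   # (-1)^f(x)
    n = len(spectrum)
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                a, b = spectrum[i], spectrum[i + h]
                spectrum[i], spectrum[i + h] = a + b, a - b
        h *= 2
    return spectrum

# Example: f(x1, x2) = x1 AND x2 (truth table indexed by the integer x).
print(walsh_spectrum([0, 0, 0, 1]))   # -> [2, 2, 2, -2]
```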

  2. High Resolution Gamma Ray Spectroscopy at MHz Counting Rates With LaBr3 Scintillators for Fusion Plasma Applications

    NASA Astrophysics Data System (ADS)

    Nocente, M.; Tardocchi, M.; Olariu, A.; Olariu, S.; Pereira, R. C.; Chugunov, I. N.; Fernandes, A.; Gin, D. B.; Grosso, G.; Kiptily, V. G.; Neto, A.; Shevelev, A. E.; Silva, M.; Sousa, J.; Gorini, G.

    2013-04-01

    High resolution γ-ray spectroscopy measurements at MHz counting rates were carried out at nuclear accelerators, combining a LaBr3(Ce) detector with dedicated hardware and software solutions based on digitization and off-line analysis. Spectra were measured at counting rates up to 4 MHz, with little or no degradation of the energy resolution, adopting a pile-up rejection algorithm. The reported results represent a step forward towards the final goal of high resolution γ-ray spectroscopy measurements on a burning plasma device.

  3. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method can perform well and has potential for clinical applications.
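
    The record above binarizes abundance images with an adaptive Otsu's method before connected-component labeling. The sketch below shows the textbook global Otsu threshold on a synthetic bimodal image; the adaptive variant and the hyperspectral unmixing steps are omitted.

```python
# Hedged sketch of Otsu's method as used in the binarisation step: choose the
# grey-level threshold that maximises the between-class variance of the
# histogram. This is the textbook global version, not the paper's adaptive one.
import numpy as np

def otsu_threshold(image, n_bins=256):
    hist, bin_edges = np.histogram(image, bins=n_bins)
    hist = hist.astype(float) / hist.sum()
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2

    best_t, best_var = bin_centers[0], -1.0
    for t in range(1, n_bins):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * bin_centers[:t]).sum() / w0
        mu1 = (hist[t:] * bin_centers[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, bin_centers[t]
    return best_t

# Toy bimodal "abundance image": dark background plus bright cells.
rng = np.random.default_rng(2)
image = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.8, 0.05, 1000)])
t = otsu_threshold(image)
print(f"Otsu threshold = {t:.2f}, cell pixels = {(image > t).sum()}")
```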

  4. PageRank as a method to rank biomedical literature by importance.

    PubMed

    Yates, Elliot J; Dixon, Louise C

    2015-01-01

    Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages by the search engine Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P < 0.01) and we thus validate the former as a surrogate of literature importance. Furthermore, the algorithm can be run in trivial time on cheap, commodity cluster hardware, lowering the barrier to entry for resource-limited open access organisations. PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
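
    The record above computes PageRank over a citation network. A minimal power-iteration sketch follows, on a made-up four-article network and with the conventional damping factor of 0.85; it is not the authors' cloud implementation.

```python
# Hedged sketch of PageRank by power iteration on a toy citation network.
# An edge (a, b) means article a cites article b, so rank flows from a to b.
import numpy as np

def pagerank(edges, n_nodes, damping=0.85, n_iter=100):
    M = np.zeros((n_nodes, n_nodes))
    out_degree = np.zeros(n_nodes)
    for a, b in edges:
        M[b, a] += 1.0
        out_degree[a] += 1
    for a in range(n_nodes):
        if out_degree[a] > 0:
            M[:, a] /= out_degree[a]
        else:
            M[:, a] = 1.0 / n_nodes     # dangling node: spread rank uniformly
    rank = np.full(n_nodes, 1.0 / n_nodes)
    for _ in range(n_iter):
        rank = damping * M @ rank + (1 - damping) / n_nodes
    return rank

edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]   # article 2 is heavily cited
print(pagerank(edges, n_nodes=4))
```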

  5. Estimating the vegetation canopy height using micro-pulse photon-counting LiDAR data.

    PubMed

    Nie, Sheng; Wang, Cheng; Xi, Xiaohuan; Luo, Shezhou; Li, Guoyuan; Tian, Jinyan; Wang, Hongtao

    2018-05-14

    The upcoming space-borne LiDAR satellite Ice, Cloud and land Elevation Satellite-2 (ICESat-2) is scheduled to launch in 2018. Different from the waveform LiDAR system onboard ICESat, ICESat-2 will use a micro-pulse photon-counting LiDAR system. Thus new data processing algorithms are required to retrieve vegetation canopy height from photon-counting LiDAR data. The objective of this paper is to develop and validate an automated approach for better estimating vegetation canopy height. The proposed method consists of three key steps: 1) filtering out the noise photons by an effective noise removal algorithm based on localized statistical analysis; 2) separating ground returns from canopy returns using an iterative photon classification algorithm, and then determining the ground surface; 3) generating the canopy-top surface and calculating vegetation canopy height based on the canopy-top and ground surfaces. This automatic vegetation height estimation approach was tested on simulated ICESat-2 data produced from Sigma Space LiDAR data and Multiple Altimeter Beam Experimental LiDAR (MABEL) data, and the retrieved vegetation canopy heights were validated against canopy height models (CHMs) derived from airborne discrete-return LiDAR data. Results indicated that the estimated vegetation canopy heights have a relatively strong correlation with the reference vegetation heights derived from airborne discrete-return LiDAR data (R² and RMSE values ranging from 0.639 to 0.810 and from 4.08 m to 4.56 m, respectively). This indicates that our proposed approach is appropriate for retrieving vegetation canopy height from micro-pulse photon-counting LiDAR data.

  6. A novel encryption scheme for high-contrast image data in the Fresnelet domain

    PubMed Central

    Bibi, Nargis; Farwa, Shabieh; Jahngir, Adnan; Usman, Muhammad

    2018-01-01

    In this paper, a unique and more distinctive encryption algorithm is proposed. It is based on the complexity of a highly nonlinear S-box in the Fresnelet domain. The nonlinear pattern is transformed further to enhance the confusion in the dummy data using the Fresnelet technique. The security level of the encrypted image is boosted using the algebra of the Galois field in the Fresnelet domain. At the first level, the Fresnelet transform is used to propagate the given information with the desired wavelength at a specified distance. It decomposes the given secret data into four complex sub-bands. These complex sub-bands are separated into two components of real sub-band data and imaginary sub-band data. At the second level, the net sub-band data produced at the first level is transformed into a non-linear diffused pattern using the unique S-box defined on the Galois field GF(2^8). In the diffusion process, the permuted image is substituted via dynamic algebraic S-box substitution. We prove through various analysis techniques that the proposed scheme enhances the cipher security level extensively. PMID:29608609

  7. Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.

    PubMed

    Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios

    2017-03-01

    Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.

  8. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeros due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062

  9. EM Adaptive LASSO-A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes.

    PubMed

    Mallick, Himel; Tiwari, Hemant K

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeros due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.

  10. An image-space parallel convolution filtering algorithm based on shadow map

    NASA Astrophysics Data System (ADS)

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

    Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as shadow boundaries. These areas are then described as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous work.
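
    The record above smooths a binary light-visibility map with a box filter to obtain soft shadow boundaries. The sketch below shows only that filtering step on a toy array; shadow-map generation and the GPU-parallel implementation are omitted.

```python
# Hedged sketch of the smoothing step only: convolving a binary light-visibility
# map with a box filter yields fractional visibility values that soften shadow
# boundaries.
import numpy as np
from scipy import ndimage

# 0 = fully shadowed, 1 = fully lit (toy 8x8 visibility map with a hard edge).
visibility = np.zeros((8, 8))
visibility[:, 4:] = 1.0

# 3x3 box filter = uniform averaging over the neighbourhood.
soft = ndimage.uniform_filter(visibility, size=3, mode="nearest")
print(np.round(soft, 2))
```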

  11. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    PubMed

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

    For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using deviance information criterion (DIC).

  12. Select metal and metalloid surveillance of free-ranging Eastern box turtles from Illinois and Tennessee (Terrapene carolina carolina).

    PubMed

    Allender, Matthew C; Dreslik, Michael J; Patel, Bishap; Luber, Elizabeth L; Byrd, John; Phillips, Christopher A; Scott, John W

    2015-08-01

    The Eastern box turtle (Terrapene carolina carolina) is a primarily terrestrial chelonian distributed across the eastern US. It has been proposed as a biomonitor due to its longevity, small home range, and reliance on the environment to meet its metabolic needs. Plasma samples from 273 free-ranging box turtles from populations in Tennessee and Illinois in 2011 and 2012 were evaluated for presence of heavy metals and to characterize hematologic variables. Lead (Pb), arsenic (As), zinc (Zn), chromium (Cr), selenium (Se), and copper (Cu) were detected, while cadmium (Cd) and silver (Ag) were not. There were no differences in any metal detected among age class or sex. However, Cr and Pb were higher in turtles from Tennessee, while As, Zn, Se, and Cu were higher in turtles from Illinois. Seasonal differences in metal concentrations were observed for Cr, Zn, and As. Health of turtles was assessed using hematologic variables. Packed cell volume was positively correlated with Cu, Se, and Pb in Tennessee. Total solids, a measure of plasma proteins, in Tennessee turtles were positively correlated with Cu and Zn. White blood cell count, a measure of inflammation, in Tennessee turtles was negatively correlated with Cu and As, and positively correlated with Pb. Metals are a threat to human health and the health of an ecosystem, and the Eastern Box Turtle can serve as a monitor of these contaminants. Differences established in this study can serve as baseline for future studies of these or related populations.

  13. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    DOE PAGES

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; ...

    2017-01-13

    The aim of this study is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  14. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian

    2017-01-13

    The aim of this paper is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the Minimum Detectable Counts (MDC) for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Our results show that existing algorithms can be improved and some newer algorithms can be better than the currently used ones.

  15. Comparison of algorithms for computing the two-dimensional discrete Hartley transform

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Burton, John C.; Miller, Keith W.

    1989-01-01

    Three methods have been described for computing the two-dimensional discrete Hartley transform. Two of these employ a separable transform; the third, the vector-radix algorithm, does not require separability. In-place computation of the vector-radix method is described. Operation counts and execution times indicate that the vector-radix method is the fastest.
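
    The record above contrasts separable and vector-radix computations of the two-dimensional discrete Hartley transform (DHT). The sketch below compares a direct 2D DHT with a separable row-column ("cas-cas") computation followed by the standard correction step, which is one way to see why separability needs extra handling; it is not the code from the record.

```python
# Hedged sketch: the row-column application of 1D DHTs gives a "cas-cas"
# transform T, from which the true 2D DHT follows via
# H(u,v) = 0.5 * (T(u,v) + T(-u,v) + T(u,-v) - T(-u,-v)), indices mod M, N.
import numpy as np

def dht_1d(x):
    n = len(x)
    k = np.arange(n)
    angles = 2 * np.pi * np.outer(k, k) / n
    cas = np.cos(angles) + np.sin(angles)
    return cas @ x

def dht_2d_direct(f):
    M, N = f.shape
    H = np.empty((M, N))
    xs = np.arange(M)[:, None]
    ys = np.arange(N)[None, :]
    for u in range(M):
        for v in range(N):
            arg = 2 * np.pi * (u * xs / M + v * ys / N)
            H[u, v] = np.sum(f * (np.cos(arg) + np.sin(arg)))
    return H

def dht_2d_separable(f):
    # Row-column 1D DHTs, then the correction that recombines mirrored indices.
    T = np.apply_along_axis(dht_1d, 0, np.apply_along_axis(dht_1d, 1, f))
    M, N = f.shape
    u = np.arange(M)[:, None]
    v = np.arange(N)[None, :]
    return 0.5 * (T[u, v] + T[(-u) % M, v] + T[u, (-v) % N] - T[(-u) % M, (-v) % N])

f = np.random.default_rng(3).normal(size=(8, 8))
print(np.allclose(dht_2d_direct(f), dht_2d_separable(f)))   # True
```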

  16. Hop limited epidemic-like information spreading in mobile social networks with selfish nodes

    NASA Astrophysics Data System (ADS)

    Wu, Yahui; Deng, Su; Huang, Hongbin

    2013-07-01

    Similar to epidemics, information can be transmitted directly among users in mobile social networks. Different from epidemics, we can control the spreading process by adjusting the corresponding parameters (e.g., hop count) directly. This paper proposes a theoretical model to evaluate the performance of an epidemic-like spreading algorithm, in which the maximal hop count of the information is limited. In addition, our model can be used to evaluate the impact of users’ selfish behavior. Simulations show the accuracy of our theoretical model. Numerical results show that the information hop count can have an important impact. In addition, the impact of selfish behavior is related to the information hop count.

  17. Monte Carlo sampling in diffusive dynamical systems

    NASA Astrophysics Data System (ADS)

    Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.

    2018-05-01

    We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.

  18. The controlled growth method - A tool for structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Sobieszczanski-Sobieski, J.

    1981-01-01

    An adaptive design variable linking scheme in a NLP based optimization algorithm is proposed and evaluated for feasibility of application. The present scheme, based on an intuitive effectiveness measure for each variable, differs from existing methodology in that a single dominant variable controls the growth of all others in a prescribed optimization cycle. The proposed method is implemented for truss assemblies and a wing box structure for stress, displacement and frequency constraints. Substantial reduction in computational time, even more so for structures under multiple load conditions, coupled with a minimal accompanying loss in accuracy, vindicates the algorithm.

  19. Illuminating the Black Box of Genome Sequence Assembly: A Free Online Tool to Introduce Students to Bioinformatics

    ERIC Educational Resources Information Center

    Taylor, D. Leland; Campbell, A. Malcolm; Heyer, Laurie J.

    2013-01-01

    Next-generation sequencing technologies have greatly reduced the cost of sequencing genomes. With the current sequencing technology, a genome is broken into fragments and sequenced, producing millions of "reads." A computer algorithm pieces these reads together in the genome assembly process. PHAST is a set of online modules…

  20. Beta/alpha continuous air monitor

    DOEpatents

    Becker, G.K.; Martz, D.E.

    1988-06-27

    A single deep layer silicon detector in combination with a microcomputer, recording both alpha and beta activity and the energy of each pulse, distinguishing energy peaks using a novel curve fitting technique to reduce the natural alpha counts in the energy region where plutonium and other transuranic alpha emitters are present, and using a novel algorithm to strip out radon daughter contribution to actual beta counts. 7 figs.

  1. "How do you know those particles are from cigarettes?": An algorithm to help differentiate second-hand tobacco smoke from background sources of household fine particulate matter.

    PubMed

    Dobson, Ruaraidh; Semple, Sean

    2018-06-18

    Second-hand smoke (SHS) at home is a target for public health interventions, such as air-quality feedback interventions using low-cost particle monitors. However, these monitors also detect fine particles generated from non-SHS sources. The Dylos DC1700 reports particle counts in the coarse and fine size ranges. As tobacco smoke produces far more fine particles than coarse ones, and tobacco is generally the greatest source of particulate pollution in a smoking home, the ratio of coarse to fine particles may provide a useful method to identify the presence of SHS in homes. An algorithm was developed to differentiate smoking from smoke-free homes. Particle concentration data from 116 smoking homes and 25 non-smoking homes were used to test this algorithm. The algorithm correctly classified the smoking status of 135 of the 141 homes (96%), comparing favourably with a test based on mean mass concentration. Applying this algorithm to Dylos particle count measurements may help identify the presence of SHS in homes or other indoor environments. Future research should adapt it to detect individual smoking periods within a 24-hour or longer measurement period.
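
    The record above uses the ratio of coarse to fine particle counts to flag second-hand smoke. The toy sketch below illustrates only that ratio idea; the cut-off values are invented and are not the published algorithm's thresholds.

```python
# Hedged toy sketch of the ratio idea only: tobacco smoke produces many more
# fine particles than coarse ones, so a low coarse-to-fine ratio alongside an
# elevated fine-particle count suggests second-hand smoke. The cut-offs below
# are invented for illustration.
def classify_home(fine_counts, coarse_counts,
                  fine_cutoff=15000, ratio_cutoff=0.02):
    """fine_counts / coarse_counts: particle counts per sample from a Dylos-style monitor."""
    mean_fine = sum(fine_counts) / len(fine_counts)
    mean_coarse = sum(coarse_counts) / len(coarse_counts)
    ratio = mean_coarse / mean_fine if mean_fine else float("inf")
    if mean_fine > fine_cutoff and ratio < ratio_cutoff:
        return "likely smoking home"
    return "likely smoke-free home"

# Hypothetical 24-hour summaries (counts per sample).
print(classify_home(fine_counts=[45000, 52000, 38000], coarse_counts=[500, 650, 420]))
print(classify_home(fine_counts=[3000, 2500, 4000], coarse_counts=[300, 260, 310]))
```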

  2. ChIP-PaM: an algorithm to identify protein-DNA interaction using ChIP-Seq data.

    PubMed

    Wu, Song; Wang, Jianmin; Zhao, Wei; Pounds, Stanley; Cheng, Cheng

    2010-06-03

    ChIP-Seq is a powerful tool for identifying the interaction between genomic regulators and their bound DNAs, especially for locating transcription factor binding sites. However, the high cost and the high false-discovery rate of transcription factor binding sites identified from ChIP-Seq data significantly limit its application. Here we report a new algorithm, ChIP-PaM, for identifying transcription factor target regions in ChIP-Seq datasets. This algorithm makes full use of a protein-DNA binding pattern by capitalizing on three lines of evidence: 1) the tag count modelling at the peak position, 2) pattern matching of a specific tag count distribution, and 3) motif searching along the genome. A novel data-based two-step eFDR procedure is proposed to integrate the three lines of evidence to determine significantly enriched regions. Our algorithm requires no technical controls and efficiently discriminates falsely enriched regions from regions enriched by true transcription factor (TF) binding on the basis of ChIP-Seq data only. An analysis of real genomic data is presented to demonstrate our method. In a comparison with other existing methods, we found that our algorithm provides more accurate binding site discovery while maintaining comparable statistical power.

  3. Lens-free microscopy of cerebrospinal fluid for the laboratory diagnosis of meningitis

    NASA Astrophysics Data System (ADS)

    Delacroix, Robin; Morel, Sophie Nhu An; Hervé, Lionel; Bordy, Thomas; Blandin, Pierre; Dinten, Jean-Marc; Drancourt, Michel; Allier, Cédric

    2018-02-01

    The cytology of the cerebrospinal fluid is traditionally performed by an operator (physician, biologist) by means of a conventional light microscope. The operator visually counts the leukocytes (white blood cells) present in a sample of cerebrospinal fluid (10 μl). It is a tedious job and the result is operator-dependent. Here, in order to circumvent the limitations of manual counting, we approach the numeration of erythrocytes and leukocytes for the cytological diagnosis of meningitis by means of lens-free microscopy. In a first step, a prospective count of leukocytes was performed by five different operators using conventional optical microscopy. The visual counting yielded an overall 16.7% misclassification of 72 cerebrospinal fluid specimens in meningitis/non-meningitis categories using a 10 leukocyte/μL cut-off. In a second step, the lens-free microscopy algorithm was adapted step-by-step for counting cerebrospinal fluid cells and discriminating leukocytes from erythrocytes. The optimization of the automatic lens-free counting was based on the prospective analysis of 215 cerebrospinal fluid specimens. The optimized algorithm yielded a 100% sensitivity and an 86% specificity compared with confirmed diagnoses. In a third step, a blind lens-free microscopic analysis of 116 cerebrospinal fluid specimens, including six cases of microbiologically confirmed infectious meningitis, yielded a 100% sensitivity and a 79% specificity. Adapted lens-free microscopy is thus emerging as an operator-independent technique for the rapid numeration of leukocytes and erythrocytes in cerebrospinal fluid. In particular, this technique is well suited to the rapid diagnosis of meningitis at point-of-care laboratories.

  4. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  5. Counting malaria parasites with a two-stage EM based algorithm using crowdsourced data.

    PubMed

    Cabrera-Bean, Margarita; Pages-Zamora, Alba; Diaz-Vilor, Carles; Postigo-Camps, Maria; Cuadrado-Sanchez, Daniel; Luengo-Oroz, Miguel Angel

    2017-07-01

    Worldwide malaria eradication is currently one of the WHO's main global goals. In this work, we focus on the use of human-machine interaction strategies for low-cost, fast, and reliable malaria diagnosis based on a crowdsourced approach. The technical problem addressed consists of detecting spots in images even under very harsh conditions, where positive objects are very similar to some artifacts. The clicks or tags delivered by several annotators labeling an image are modeled as a robust finite mixture, and techniques based on the Expectation-Maximization (EM) algorithm are proposed for accurately counting malaria parasites on Giemsa-stained thick blood smears examined by microscopy. This approach outperforms other traditional methods, as shown through experiments with real data.
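    The abstract does not spell out the authors' two-stage formulation, but the general idea of fitting a mixture model to annotator clicks with EM can be sketched as follows; the number of spots, click coordinates, and noise level below are hypothetical, and the model is a plain isotropic Gaussian mixture rather than the robust mixture described in the paper.

```python
# Plain EM for a K-component isotropic Gaussian mixture over simulated
# annotator clicks; cluster centres play the role of candidate parasites.
import numpy as np

rng = np.random.default_rng(0)
true_centres = np.array([[10.0, 12.0], [40.0, 35.0]])            # hypothetical spots
clicks = np.vstack([c + rng.normal(0, 1.5, size=(30, 2)) for c in true_centres])

K = 2
mu = clicks[rng.choice(len(clicks), K, replace=False)]            # initial centres
var = np.full(K, 4.0)                                             # isotropic variances
weights = np.full(K, 1.0 / K)

for _ in range(50):
    # E-step: responsibility of each component for each click
    d2 = ((clicks[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
    log_p = np.log(weights) - d2 / (2 * var) - np.log(2 * np.pi * var)
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights, centres, and variances
    nk = resp.sum(axis=0)
    weights = nk / len(clicks)
    mu = (resp.T @ clicks) / nk[:, None]
    var = (resp * d2).sum(axis=0) / (2 * nk)

print("estimated spot centres:\n", mu)
```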

  6. On global optimization using an estimate of Lipschitz constant and simplicial partition

    NASA Astrophysics Data System (ADS)

    Gimbutas, Albertas; Žilinskas, Antanas

    2016-10-01

    A new algorithm is proposed for finding the global minimum of a multi-variate black-box Lipschitz function with an unknown Lipschitz constant. The feasible region is initially partitioned into simplices; at each subsequent iteration, the most suitable simplices are selected and bisected via the middle point of the longest edge. The suitability of a simplex for bisection is evaluated by minimizing a surrogate function which mimics the lower bound of the considered objective function over that simplex. The surrogate function is defined using an estimate of the Lipschitz constant and the objective function values at the vertices of a simplex. The novelty of the algorithm lies in the sophisticated method of estimating the Lipschitz constant and in an appropriate method for minimizing the surrogate function. The proposed algorithm was tested using 600 random test problems of different complexity, showing competitive results against two popular advanced algorithms based on similar assumptions.
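    A one-dimensional caricature of the underlying principle, bisecting the region whose Lipschitz-based lower bound is most promising, is sketched below; it uses intervals instead of simplices and an assumed Lipschitz constant, so it only illustrates the general idea rather than the proposed algorithm.

```python
# Bisect the interval with the lowest Lipschitz lower bound; L is assumed known.
import numpy as np

def f(x):                        # hypothetical black-box objective
    return np.sin(3 * x) + 0.5 * x

L = 3.5                          # assumed upper bound on the Lipschitz constant
intervals = [(-2.0, 2.0)]        # feasible region

def lower_bound(a, b):
    # Minimum value attainable on [a, b] if |f(x) - f(y)| <= L |x - y|
    return 0.5 * (f(a) + f(b)) - 0.5 * L * (b - a)

for _ in range(200):
    a, b = min(intervals, key=lambda ab: lower_bound(*ab))   # most promising interval
    m = 0.5 * (a + b)                                        # bisect at the midpoint
    intervals.remove((a, b))
    intervals += [(a, m), (m, b)]

best_value, best_x = min((f(a), a) for a, _ in intervals)
print("approximate global minimum:", best_value, "at x =", best_x)
```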

  7. Detecting P and S-wave of Mt. Rinjani seismic based on a locally stationary autoregressive (LSAR) model

    NASA Astrophysics Data System (ADS)

    Nurhaida, Subanar, Abdurakhman, Abadi, Agus Maman

    2017-08-01

    Seismic data are usually modelled using autoregressive processes. The aim of this paper is to find the arrival times of the seismic waves of Mt. Rinjani in Indonesia. Kitagawa's algorithm is used to detect the seismic P and S-waves. The Householder transformation used in the algorithm makes it effective in finding the number of change points and the parameters of the autoregressive models. The results show that the use of the Box-Cox transformation at the variable selection stage makes the algorithm work well in detecting the change points. Furthermore, when the basic span of the subinterval is set to 200 seconds and the maximum AR order is 20, there are 8 change points, which occur at 1601, 2001, 7401, 7601, 7801, 8001, 8201 and 9601. Finally, the P- and S-wave arrival times are detected at times 1671 and 2045, respectively, using a precise detection algorithm.

  8. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  9. Technical note: Application of the Box-Cox data transformation to animal science experiments.

    PubMed

    Peltier, M R; Wilcox, C J; Sharp, D C

    1998-03-01

    In the use of ANOVA for hypothesis testing in animal science experiments, the assumption of homogeneity of errors often is violated because of scale effects and the nature of the measurements. We demonstrate a method for transforming data so that the assumptions of ANOVA are met (or violated to a lesser degree) and apply it in the analysis of data from a physiology experiment. Our study examined whether melatonin implantation would affect progesterone secretion in cycling pony mares. Overall treatment variances were greater in the melatonin-treated group, and several common transformation procedures failed. Application of the Box-Cox transformation algorithm reduced the heterogeneity of error and permitted the assumption of equal variance to be met.
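    For readers unfamiliar with the procedure, a minimal sketch of applying a Box-Cox transformation before a variance-homogeneity check is shown below, using SciPy's maximum-likelihood estimate of the power parameter; the data values and grouping are hypothetical and are not the hormone measurements analyzed in the paper.

```python
# Box-Cox transformation with SciPy; inputs must be strictly positive.
import numpy as np
from scipy import stats

progesterone = np.array([1.2, 0.8, 2.5, 3.1, 0.6, 4.8, 7.9, 2.2, 5.5, 1.1])

transformed, lam = stats.boxcox(progesterone)      # lambda chosen by maximum likelihood
print("estimated lambda:", lam)

# A variance-homogeneity test (here Levene's) can then be run on the transformed groups
group_a, group_b = transformed[:5], transformed[5:]
print(stats.levene(group_a, group_b))
```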

  10. On-line algorithms for forecasting hourly loads of an electric utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vemuri, S.; Huang, W.L.; Nelson, D.J.

    A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive-moving average model. The method presented has several advantages in comparison with the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.

  11. Thermodynamic properties of solvated peptides from selective integrated tempering sampling with a new weighting factor estimation algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Lin; Xie, Liangxu; Yang, Mingjun

    2017-04-01

    Conformational sampling under a rugged energy landscape is always a challenge in computer simulations. The recently developed integrated tempering sampling method, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratio between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimisation and production algorithm to other complicated systems such as biochemical reactions in solution.

  12. ABCluster: the artificial bee colony algorithm for cluster global optimization.

    PubMed

    Zhang, Jun; Dolg, Michael

    2015-10-07

    Global optimization of cluster geometries is of fundamental importance in chemistry and an interesting problem in applied mathematics. In this work, we introduce a relatively new swarm intelligence algorithm, the artificial bee colony (ABC) algorithm proposed in 2005, to this field. It is inspired by the foraging behavior of a bee colony, and only three parameters are needed to control it. We applied it to several potential functions of quite different nature, i.e., the Coulomb-Born-Mayer, Lennard-Jones, Morse, Z and Gupta potentials. The benchmarks reveal that for long-ranged potentials the ABC algorithm is very efficient in locating the global minimum, while for short-ranged ones it is sometimes trapped in a local minimum funnel on a potential energy surface of large clusters. We have released an efficient, user-friendly, and free program "ABCluster" to realize the ABC algorithm. It is a black-box program for non-experts as well as experts and might become a useful tool for chemists to study clusters.
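    A bare-bones sketch of the employed/onlooker/scout cycle that defines the ABC algorithm is shown below for a generic black-box objective; it is not the ABCluster program, and the test function, colony size, abandonment limit, and iteration count are illustrative.

```python
# Minimal artificial bee colony (ABC) loop for a toy 2-D objective.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                         # hypothetical function to minimize
    return float(np.sum(x ** 2))

dim, n_sources, limit, iters = 2, 10, 20, 200
lo, hi = -5.0, 5.0
sources = rng.uniform(lo, hi, size=(n_sources, dim))
values = np.array([objective(s) for s in sources])
trials = np.zeros(n_sources, dtype=int)

def try_neighbor(i):
    """Perturb source i relative to a random partner; keep the move if it improves."""
    k = rng.integers(n_sources)
    phi = rng.uniform(-1, 1, dim)
    candidate = np.clip(sources[i] + phi * (sources[i] - sources[k]), lo, hi)
    f = objective(candidate)
    if f < values[i]:
        sources[i], values[i], trials[i] = candidate, f, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_sources):            # employed bee phase
        try_neighbor(i)
    probs = 1.0 / (1.0 + values)          # onlookers prefer better sources
    probs /= probs.sum()
    for _ in range(n_sources):            # onlooker bee phase
        try_neighbor(rng.choice(n_sources, p=probs))
    worst = int(np.argmax(trials))        # scout phase: abandon an exhausted source
    if trials[worst] > limit:
        sources[worst] = rng.uniform(lo, hi, dim)
        values[worst] = objective(sources[worst])
        trials[worst] = 0

print("best solution:", sources[np.argmin(values)], "value:", values.min())
```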

  13. Validation of the sperm class analyser CASA system for sperm counting in a busy diagnostic semen analysis laboratory.

    PubMed

    Dearing, Chey G; Kilburn, Sally; Lindsay, Kevin S

    2014-03-01

    Sperm counts have been linked to several fertility outcomes, making them an essential parameter of semen analysis. It has become increasingly recognised that Computer-Assisted Semen Analysis (CASA) provides improved precision over manual methods but that systems are seldom validated robustly for use. The objective of this study was to gather the evidence to validate or reject the Sperm Class Analyser (SCA) as a tool for routine sperm counting in a busy laboratory setting. The criteria examined were comparison with the Improved Neubauer and Leja 20-μm chambers, within- and between-field precision, sperm concentration linearity from a stock diluted in semen and media, accuracy against internal and external quality material, assessment of uneven flow effects, and a receiver operating characteristic (ROC) analysis to predict fertility in comparison with the Neubauer method. This work demonstrates that SCA CASA technology is not a standalone 'black box', but rather a tool for well-trained staff that allows rapid, high-number sperm counting provided errors are identified and corrected. The system will produce accurate, linear, precise results, with less analytical variance than manual methods, that correlate well against the Improved Neubauer chamber. The system provides superior predictive potential for diagnosing fertility problems.

  14. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," a strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it to various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms that of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.

  15. A comprehensive comparison of simple step counting techniques using wrist- and ankle-mounted accelerometer and gyroscope signals.

    PubMed

    Rhudy, Matthew B; Mahoney, Joseph M

    2018-04-01

    The goal of this work is to compare the differences between various step counting algorithms using both accelerometer and gyroscope measurements from wrist and ankle-mounted sensors. Participants completed four different conditions on a treadmill while wearing an accelerometer and gyroscope on the wrist and the ankle. Three different step counting techniques were applied to the data from each sensor type and mounting location. It was determined that using gyroscope measurements allowed for better performance than the typically used accelerometers, and that ankle-mounted sensors provided better performance than those mounted on the wrist.
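    One of the simplest step-counting techniques, peak counting on a smoothed signal magnitude, can be sketched as follows; the synthetic signal, sampling rate, and thresholds are assumptions and would need tuning for real wrist- or ankle-mounted accelerometer or gyroscope data.

```python
# Count steps as peaks in a smoothed angular-rate magnitude (synthetic data).
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs = 100.0                                       # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
gyro_mag = 1.5 * np.abs(np.sin(2 * np.pi * 0.9 * t)) + 0.1 * rng.standard_normal(t.size)

window = int(0.15 * fs)                          # short moving-average smoother
smooth = np.convolve(gyro_mag, np.ones(window) / window, mode="same")
peaks, _ = find_peaks(smooth, height=0.8, distance=int(0.4 * fs))

print("estimated steps in 30 s:", peaks.size)
```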

  16. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  17. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  18. Comparative study on fractal analysis of interferometry images with application to tear film surface quality assessment.

    PubMed

    Szyperski, Piotr D

    2018-06-01

    The purpose of this research was to evaluate the applicability of fractal dimension (FD) estimators to assess lateral shearing interferometric (LSI) measurements of tear film surface quality. Retrospective recordings of tear film measured with LSI were used: 69 from healthy subjects and 41 from patients diagnosed with dry eye syndrome. Five surface quality descriptors were considered: four based on FD and a previously reported descriptor operating in the spatial frequency domain (M2), describing the temporal kinetics of post-blink tear film. A set of 12 regression parameters was extracted and analyzed for classification purposes. The classifiers are assessed in terms of receiver operating characteristics and the areas under their curves (AUC). The computational loads are also estimated. The maximum AUC of 82.4% was achieved for M2, closely followed by the binary box-counting (BBC) FD estimator with AUC=78.6%. For all descriptors, statistically significant differences between the subject groups were found (p<0.05). The BBC FD estimator was characterized by the highest empirical computational efficiency, about 30% faster than that of M2, while the estimator based on differential box-counting exhibited the lowest efficiency (4.5 times slower than the best one). In conclusion, FD estimators can be utilized for quantitative assessment of tear film kinetics. They provide a viable alternative to their previously used spectral counterparts, and at the same time allow higher computational efficiency.
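    As an illustration of the binary box-counting estimator referred to above, a minimal sketch is given below: occupied boxes are counted at several box sizes and the dimension is taken as the slope of log(count) against log(1/size). The input image here is a synthetic binary placeholder rather than an interferometric recording, and the chosen box sizes are arbitrary.

```python
# Binary box-counting fractal dimension for a 2-D binary image.
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = binary_img.shape
        trimmed = binary_img[: h - h % s, : w - w % s]        # tile exactly into s x s boxes
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())          # boxes containing any pixel
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
img = rng.random((256, 256)) > 0.7                            # synthetic binary texture
print("estimated box-counting dimension:", box_counting_dimension(img))
```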

  19. Automatic vehicle counting system for traffic monitoring

    NASA Astrophysics Data System (ADS)

    Crouzil, Alain; Khoudour, Louahdi; Valiere, Paul; Truong Cong, Dung Nghy

    2016-09-01

    The article is dedicated to the presentation of a vision-based system for road vehicle counting and classification. The system is able to achieve counting with very good accuracy even in difficult scenarios linked to occlusions and/or the presence of shadows. The principle of the system is to use cameras already installed in road networks without any additional calibration procedure. We propose a robust segmentation algorithm that detects foreground pixels corresponding to moving vehicles. First, the approach models each pixel of the background with an adaptive Gaussian distribution. This model is coupled with a motion detection procedure, which allows moving vehicles to be correctly located in space and time. The nature of the trials carried out, including peak periods and various vehicle types, leads to an increase in occlusions between cars and between cars and trucks. A specific method for severe occlusion detection, based on the notion of solidity, has been developed and tested. Furthermore, the method developed in this work is capable of managing shadows at high resolution. The related algorithm has been tested and compared to a classical method. Experimental results based on four large datasets show that our method can count and classify vehicles in real time with a high level of performance (>98%) under different environmental situations, thus performing better than conventional inductive loop detectors.

  20. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klumpp, John

    We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution.
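    The abstract describes the approach only in general terms; a much-simplified sketch of the learning/detection split, using a Poisson background model and a fixed false-alarm probability instead of the full Bayesian, nonparametric machinery, might look like the following (all rates, counts, and thresholds are illustrative).

```python
# "Learning mode": estimate the background rate; "detection mode": alarm when a
# new count exceeds a Poisson threshold chosen for a given false-alarm probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
background_counts = rng.poisson(lam=12.0, size=5000)    # simulated learning-mode data

lam_hat = background_counts.mean()                      # estimated background rate
alpha = 1e-3                                            # tolerated false-alarm probability
threshold = stats.poisson.ppf(1 - alpha, lam_hat)

new_measurement = 27
print("alarm" if new_measurement > threshold else "background",
      "(threshold =", threshold, ")")
```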

  1. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    PubMed Central

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-01-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95–98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management. PMID:28338047

  2. Automated detection and enumeration of marine wildlife using unmanned aircraft systems (UAS) and thermal imagery

    NASA Astrophysics Data System (ADS)

    Seymour, A. C.; Dale, J.; Hammill, M.; Halpin, P. N.; Johnston, D. W.

    2017-03-01

    Estimating animal populations is critical for wildlife management. Aerial surveys are used for generating population estimates, but can be hampered by cost, logistical complexity, and human risk. Additionally, human counts of organisms in aerial imagery can be tedious and subjective. Automated approaches show promise, but can be constrained by long setup times and difficulty discriminating animals in aggregations. We combine unmanned aircraft systems (UAS), thermal imagery and computer vision to improve traditional wildlife survey methods. During spring 2015, we flew fixed-wing UAS equipped with thermal sensors, imaging two grey seal (Halichoerus grypus) breeding colonies in eastern Canada. Human analysts counted and classified individual seals in imagery manually. Concurrently, an automated classification and detection algorithm discriminated seals based upon temperature, size, and shape of thermal signatures. Automated counts were within 95-98% of human estimates; at Saddle Island, the model estimated 894 seals compared to analyst counts of 913, and at Hay Island estimated 2188 seals compared to analysts’ 2311. The algorithm improves upon shortcomings of computer vision by effectively recognizing seals in aggregations while keeping model setup time minimal. Our study illustrates how UAS, thermal imagery, and automated detection can be combined to efficiently collect population data critical to wildlife management.

  3. Accelerated Dimension-Independent Adaptive Metropolis

    DOE PAGES

    Chen, Yuxin; Keyes, David E.; Law, Kody J.; ...

    2016-10-27

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
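    For orientation, a toy version of the underlying adaptive Metropolis idea, with the proposal covariance adapted to the chain history, is sketched below on a two-dimensional Gaussian target; it omits the dimension-independent scaling, GPU acceleration, and concurrent chains that are the subject of the paper, and all settings are illustrative.

```python
# Toy adaptive Metropolis on a 2-D correlated Gaussian target.
import numpy as np

rng = np.random.default_rng(5)
target_cov = np.array([[1.0, 0.8], [0.8, 2.0]])
target_prec = np.linalg.inv(target_cov)

def log_target(x):
    return -0.5 * x @ target_prec @ x

d, n_samples, eps = 2, 5000, 1e-6
scale = 2.38 ** 2 / d                      # classical AM proposal scaling
x = np.zeros(d)
samples = np.empty((n_samples, d))
for i in range(n_samples):
    if i < 500:                            # fixed proposal during warm-up
        prop_cov = 0.1 * np.eye(d)
    else:                                  # adapt to the covariance of the chain so far
        prop_cov = scale * np.cov(samples[:i].T) + eps * np.eye(d)
    proposal = rng.multivariate_normal(x, prop_cov)
    if np.log(rng.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples[i] = x

print("empirical covariance of the chain:\n", np.cov(samples[2000:].T))
```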

  4. Accelerated Dimension-Independent Adaptive Metropolis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuxin; Keyes, David E.; Law, Kody J.

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.

  5. Complementary frame reconstruction: a low-biased dynamic PET technique for low count density data in projection space

    NASA Astrophysics Data System (ADS)

    Hong, Inki; Cho, Sanghee; Michel, Christian J.; Casey, Michael E.; Schaefferkoetter, Joshua D.

    2014-09-01

    A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low count dynamic PET acquisitions. The new method, termed 'Complementary Frame Reconstruction' (CFR), involves the indirect formation of a count-limited emission image in a short frame through subtraction of two frames with longer acquisition time, where the short time frame data are excluded from the second long frame data before the reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al. as a method to reduce the bias of the maximum likelihood expectation maximization (MLEM) reconstruction of count-limited data. CFR uses long scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between two long-frame images naturally allows negative voxel values and significantly reduces bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images in representing the true activity distribution. Applicability to determining the arterial input function in human and small animal studies is also explored. In situations with limited count rate, e.g. pediatric applications, gated abdominal or cardiac studies, etc., or when using limited doses of short-lived isotopes such as 15O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues.

  6. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
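    The basic randomized low-rank scheme that such implementations build on (random projection, orthonormal range basis, small deterministic SVD) can be written in a few lines of NumPy; the matrix, target rank, oversampling, and power-iteration count below are illustrative, and this is not the MATLAB code described in the article.

```python
# Randomized truncated SVD via a random range finder with power iterations.
import numpy as np

rng = np.random.default_rng(6)

def randomized_svd(A, rank, oversample=10, n_iter=2):
    m, n = A.shape
    omega = rng.standard_normal((n, rank + oversample))
    Y = A @ omega
    for _ in range(n_iter):                  # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for the range of A
    B = Q.T @ A                              # small (rank+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 500))   # rank-50 test matrix
U, s, Vt = randomized_svd(A, rank=10)
print("leading singular values:", np.round(s, 2))
```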

  7. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of the objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "-", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve the solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive when compared with those of the state-of-the-art LSGO algorithms.

  8. The Prediction of the Gas Utilization Ratio Based on TS Fuzzy Neural Network and Particle Swarm Optimization

    PubMed Central

    Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong

    2018-01-01

    Gas utilization ratio (GUR) is an important indicator that is used to evaluate the energy consumption of blast furnaces (BFs). Currently, the existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and the particle swarm optimization (PSO) algorithm to predict the GUR. PSO is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. This paper also applies the box-plot method to eliminate abnormal values from the raw data during preprocessing. This method can deal with data that do not obey a normal distribution, as caused by complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN approach achieves higher prediction accuracy than the TS-FNN and SVM models, and the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control. PMID:29461469
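    The box-plot rule mentioned here is essentially the interquartile-range criterion for flagging abnormal values; a minimal sketch on synthetic data follows, where the whisker factor of 1.5 is the conventional choice and not necessarily the one used by the authors.

```python
# Remove values outside the box-plot whiskers (IQR rule) from a synthetic series.
import numpy as np

rng = np.random.default_rng(7)
gur = np.concatenate([rng.normal(47.0, 1.5, 500), [30.0, 70.0, 65.0]])   # with outliers

q1, q3 = np.percentile(gur, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr          # standard whisker bounds
cleaned = gur[(gur >= lower) & (gur <= upper)]

print(f"removed {gur.size - cleaned.size} abnormal values out of {gur.size}")
```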

  9. The Prediction of the Gas Utilization Ratio based on TS Fuzzy Neural Network and Particle Swarm Optimization.

    PubMed

    Zhang, Sen; Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong

    2018-02-20

    Gas utilization ratio (GUR) is an important indicator that is used to evaluate the energy consumption of blast furnaces (BFs). Currently, the existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and the particle swarm optimization (PSO) algorithm to predict the GUR. PSO is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. This paper also applies the box-plot method to eliminate abnormal values from the raw data during preprocessing. This method can deal with data that do not obey a normal distribution, as caused by complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN approach achieves higher prediction accuracy than the TS-FNN and SVM models, and the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control.

  10. Longevity and viability of Taenia solium eggs in the digestive system of the beetle Ammophorus rubripes.

    PubMed

    Gomez-Puerta, Luis Antonio; Lopez-Urbina, Maria Teresa; Garcia, Hector Hugo; Gonzalez, Armando Emiliano

    2014-03-01

    The present study evaluated the capacity of Ammophorus rubripes beetles to carry Taenia solium eggs, in terms of the duration and viability of eggs in their digestive system. One hundred beetles were distributed into five polyethylene boxes, and then they were infected with T. solium eggs. Gravid proglottids of T. solium were crushed and then mixed with cattle feces. One gram of this mixture was placed in each box for 24 hours, after which each group of beetles was transferred into a new clean box. Then, five beetles were dissected every three days. Time was strongly associated with viability (r=0.89; P<0.001), and the calculated time to zero viability is 36 days. The eggs in the intestinal system of each beetle were counted and tested for viability. Taenia solium eggs were present in the beetle's digestive system for up to 39 days (13th sampling day out of 20), gradually reducing in numbers and viability, which was 0 on day 36 post-infection. Egg viability was around 40% up to day 24 post-infection, with a median number of eggs of 11 per beetle at this time. Dung beetles may potentially contribute towards dispersing T. solium eggs in endemic areas.

  11. The morphology and classification of α ganglion cells in the rat retinae: a fractal analysis study.

    PubMed

    Jelinek, Herbert F; Ristanović, Dušan; Milošević, Nebojša T

    2011-09-30

    Rat retinal ganglion cells have been proposed to consist of a varying number of subtypes. Dendritic morphology is an essential aspect of classification and a necessary step toward understanding structure-function relationships of retinal ganglion cells. This study aimed at using a heuristic classification procedure in combination with box-counting analysis to classify the alpha ganglion cells in the rat retinae based on the dendritic branching pattern and to investigate morphological changes with retinal eccentricity. The cells could be divided into two groups according to their dendritic branching pattern complexity: cells with a simple dendritic pattern (box dimension lower than 1.390) and cells with a complex dendritic pattern (box dimension higher than 1.390). Both were further divided into two subtypes according to their stratification within the inner plexiform layer. In the present study we have shown that the alpha rat RGCs can be classified further by their dendritic branching complexity, and thus extend the findings of previous reports that fractal analysis can be successfully used in neuronal classification, in particular that the fractal dimension represents a robust and sensitive tool for the classification of retinal ganglion cells. A hypothesis of the possible functional significance of our classification scheme is also discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. LONGEVITY AND VIABILITY OF Taenia solium EGGS IN THE DIGESTIVE SYSTEM OF THE BEETLE Ammophorus rubripes

    PubMed Central

    Gomez-Puerta, Luis Antonio; Lopez-Urbina, Maria Teresa; Garcia, Hector Hugo; Gonzalez, Armando Emiliano

    2015-01-01

    The present study evaluated the capacity of Ammophorus rubripes beetles to carry Taenia solium eggs, in terms of the duration and viability of eggs in their digestive system. One hundred beetles were distributed into five polyethylene boxes, and then they were infected with T. solium eggs. Gravid proglottids of T. solium were crushed and then mixed with cattle feces. One gram of this mixture was placed in each box for 24 hours, after which each group of beetles was transferred into a new clean box. Then, five beetles were dissected every three days. Time was strongly associated with viability (r=0.89; P<0.001), and the calculated time to zero viability is 36 days. The eggs in the intestinal system of each beetle were counted and tested for viability. Taenia solium eggs were present in the beetle's digestive system for up to 39 days (13th sampling day out of 20), gradually reducing in numbers and viability, which was 0 on day 36 post-infection. Egg viability was around 40% up to day 24 post-infection, with a median number of eggs of 11 per beetle at this time. Dung beetles may potentially contribute towards dispersing T. solium eggs in endemic areas. PMID:24728368

  13. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    PubMed

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm^2 and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm^2). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.

  14. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE PAGES

    Lockhart, M.; Henzlova, D.; Croft, S.; ...

    2017-09-20

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.

  15. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, M.; Henzlova, D.; Croft, S.

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.

  16. Development of a compact and cost effective multi-input digital signal processing system

    NASA Astrophysics Data System (ADS)

    Darvish-Molla, Sahar; Chin, Kenrick; Prestwich, William V.; Byun, Soo Hyun

    2018-01-01

    A prototype digital signal processing system (DSP) was developed using a microcontroller interfaced with a 12-bit sampling ADC, which offers a comparatively inexpensive solution for processing multiple detectors with high throughput. After digitization of the incoming pulses, a simple algorithm was employed for pulse height analysis in order to maximize the output counting rate. Moreover, an algorithm aiming at real-time pulse pile-up deconvolution was implemented. The system was tested using a NaI(Tl) detector in comparison with a traditional analogue system and a commercial digital system for a variety of count rates. The performance of the prototype system was consistently superior to the analogue and commercial digital systems up to an input count rate of 61 kcps, while it was slightly inferior to the commercial digital system, but still superior to the analogue system, at higher input rates. Considering overall cost, size, and flexibility, this custom-made multi-input digital signal processing system (MMI-DSP) was the best choice for the purpose of 2D microdosimetric data collection, or for any measurement in which simultaneous multi-channel data collection is required.

  17. Silicon Photomultiplier Characterization for sPHENIX Calorimeters

    NASA Astrophysics Data System (ADS)

    Tanner, Meghan; Skoby, Michael; Aidala, Christine; Sphenix Collaboration

    2016-09-01

    Silicon photomultipliers (SiPMs) are preferable to photomultiplier tubes due to their small size, insensitivity to magnetic fields, low operating voltage, and capability of detecting single photons. The sPHENIX collaboration at RHIC will use SiPMs in their proposed electromagnetic and hadronic calorimeters. The University of Michigan is assembling and implementing a test stand to characterize the dark count rate, temperature dependence, gain, and photon detection efficiency of SiPMs. To more accurately determine the dark count rate, we have constructed a light tight box to isolate the SiPM, which surrounds an electronics enclosure that protects the SiPM circuitry, and installed software to record the output signals. With this system, we will begin to collect data and optimize the system to test arrays of SiPMs instead of single devices as the proposed calorimeters will require testing approximately 115,000 SiPMs.

  18. Biophoton emission from fingernails and fingerprints of living human subjects.

    PubMed

    Kim, Tae Jin; Nam, Kyung Woon; Shin, Hak-Soo; Lee, Sung Muk; Yang, Jong Soo; Soh, Kwang-Sup

    2002-01-01

    Biophotons emitted from the center of the fingernails and fingerprints of living humans were measured for twenty healthy subjects. We devised a dark box with a photomultiplier tube (H6180-01, Hamamatsu, Japan), whose spectral range is 300 nm to approximately 650 nm, and a mount with a light-receiving hole of diameter 8 mm, such that biophotons from the small circular area of the nail or print of each finger are detected. Significantly more biophotons are emitted from the fingernail than from the fingerprint for each finger of every subject. For the thumb, the average biophoton emission rate is 23.0 ± 4.5 counts per second from the nail and 17.2 ± 2.0 counts per second from the print. There is a slight tendency for the little finger to emit less than the other fingers. However, some fingers emit far more strongly than others, and which finger emits most strongly depends on the individual subject.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.

    The Futility package contains the following: 1) Definition of the size of integers and real numbers; 2) A generic Unit test harness; 3) Definitions for some basic extensions to the Fortran language: arbitrary length strings, a parameter list construct, exception handlers, command line processor, timers; 4) Geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) File wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) Parallel wrapper functions: MPI, and Open MP abstraction layers, partitioning algorithms; 7) Math utilities: BLAS, Matrix and Vector definitions, Linear Solver methods and wrappers for other TPLs (PETSC, MKL, etc), preconditioner classes;more » 8) Misc: random number generator, water saturation properties, sorting algorithms.« less

  20. Region Based CNN for Foreign Object Debris Detection on Airfield Pavement

    PubMed Central

    Cao, Xiaoguang; Wang, Peng; Meng, Cai; Gong, Guoping; Liu, Miaoming; Qi, Jun

    2018-01-01

    In this paper, a novel algorithm based on a convolutional neural network (CNN) is proposed to detect foreign object debris (FOD) using optical imaging sensors. It contains two modules: the improved region proposal network (RPN) and a spatial transformer network (STN) based CNN classifier. In the improved RPN, some extra selection rules are designed and deployed to generate fewer, higher-quality candidates. Moreover, the efficiency of the CNN detector is significantly improved by introducing the STN layer. Compared to faster R-CNN and the single shot multibox detector (SSD), the proposed algorithm achieves better results for FOD detection on airfield pavement in the experiment. PMID:29494524

  1. Region Based CNN for Foreign Object Debris Detection on Airfield Pavement.

    PubMed

    Cao, Xiaoguang; Wang, Peng; Meng, Cai; Bai, Xiangzhi; Gong, Guoping; Liu, Miaoming; Qi, Jun

    2018-03-01

    In this paper, a novel algorithm based on a convolutional neural network (CNN) is proposed to detect foreign object debris (FOD) using optical imaging sensors. It contains two modules: the improved region proposal network (RPN) and a spatial transformer network (STN) based CNN classifier. In the improved RPN, some extra selection rules are designed and deployed to generate fewer, higher-quality candidates. Moreover, the efficiency of the CNN detector is significantly improved by introducing the STN layer. Compared to faster R-CNN and the single shot multibox detector (SSD), the proposed algorithm achieves better results for FOD detection on airfield pavement in the experiment.

  2. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

    Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA and Savitzky & Golay (1964) detrending algorithms, and the Box Least Squares (BLS) phase-folding algorithm (Kovács et al. 2002), to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying the two filtering methods together the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor of 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computational performance of the overall detrending+BLS algorithm by a factor of ~10 with respect to Kovács et al. (2004).
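    To make the detrending step concrete, a minimal sketch of Savitzky-Golay smoothing used to remove a slow trend from a synthetic light curve is shown below; the window length and polynomial order are illustrative and are not the settings applied to the TFRM-PSES data.

```python
# Savitzky-Golay detrending of a synthetic light curve.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 2000)
flux = 1.0 + 0.02 * np.sin(0.3 * t) + 0.005 * rng.standard_normal(t.size)  # trend + noise
flux[800:820] -= 0.01                                    # a shallow transit-like dip

trend = savgol_filter(flux, window_length=301, polyorder=3)   # window length must be odd
detrended = flux - trend + 1.0

print("RMS before:", flux.std(), " after detrending:", detrended.std())
```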

  3. A gaze independent hybrid-BCI based on visual spatial attention

    NASA Astrophysics Data System (ADS)

    Egan, John M.; Loughnane, Gerard M.; Fletcher, Helen; Meade, Emma; Lalor, Edmund C.

    2017-08-01

    Objective. Brain-computer interfaces (BCI) use measures of brain activity to convey a user's intent without the need for muscle movement. Hybrid designs, which use multiple measures of brain activity, have been shown to increase the accuracy of BCIs, including those based on EEG signals reflecting covert attention. Our study examined whether incorporating a measure of the P3 response improved the performance of a previously reported attention-based BCI design that incorporates measures of steady-state visual evoked potentials (SSVEP) and alpha band modulations. Approach. Subjects viewed stimuli consisting of two bi-laterally located flashing white boxes on a black background. Streams of letters were presented sequentially within the boxes, in random order. Subjects were cued to attend to one of the boxes without moving their eyes, and they were tasked with counting the number of target letters that appeared within. P3 components evoked by target appearance, SSVEPs evoked by the flashing boxes, and power in the alpha band are modulated by covert attention, and these modulations can be used to classify trials as left-attended or right-attended. Main Results. We showed that classification accuracy was improved by including a P3 feature along with the SSVEP and alpha features (the inclusion of a P3 feature led to a 9% increase in accuracy compared to the use of SSVEP and alpha features alone). We also showed that the design improves the robustness of BCI performance to individual subject differences. Significance. These results demonstrate that incorporating multiple neurophysiological indices of covert attention can improve performance in a gaze-independent BCI.

  4. Quantifying K, U, and Th contents of marine sediments using shipboard natural gamma radiation spectra measured on DV JOIDES Resolution

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, David; Dunlea, Ann G.; Auer, Gerald; Anderson, Chloe H.; Brumsack, Hans; de Loach, Aaron; Gurnis, Michael; Huh, Youngsook; Ishiwa, Takeshige; Jang, Kwangchul; Kominz, Michelle A.; März, Christian; Schnetger, Bernhard; Murray, Richard W.; Pälike, Heiko

    2017-03-01

    During International Ocean Discovery Program (IODP) expeditions, shipboard-generated data provide the first insights into the cored sequences. The natural gamma radiation (NGR) of the recovered material, for example, is routinely measured on the ocean drilling research vessel DV JOIDES Resolution. At present, only total NGR counts are readily available as shipboard data, although full NGR spectra (counts as a function of gamma-ray energy level) are produced and archived. These spectra contain unexploited information, as one can estimate the sedimentary contents of potassium (K), thorium (Th), and uranium (U) from the characteristic gamma-ray energies of isotopes in the 40K, 232Th, and 238U radioactive decay series. Dunlea et al. (2013) quantified K, Th, and U contents in sediment from the South Pacific Gyre by integrating counts over specific energy levels of the NGR spectrum. However, the algorithm used in their study is unavailable to the wider scientific community due to commercial proprietary reasons. Here, we present a new MATLAB algorithm for the quantification of NGR spectra that is transparent and accessible to future NGR users. We demonstrate the algorithm's performance by comparing its results to shore-based inductively coupled plasma-mass spectrometry (ICP-MS), inductively coupled plasma-emission spectrometry (ICP-ES), and quantitative wavelength-dispersive X-ray fluorescence (XRF) analyses. Samples for these comparisons come from eleven sites (U1341, U1343, U1366-U1369, U1414, U1428-U1430, and U1463) cored in two oceans during five expeditions. In short, our algorithm rapidly produces detailed high-quality information on sediment properties during IODP expeditions at no extra cost.
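
    Although the MATLAB code itself is described in the paper, the core window-integration idea can be sketched independently: sum the counts falling in the characteristic energy window of each decay series and convert the resulting count rates to contents with calibration factors. The energy windows and sensitivity factors below are placeholders for illustration only; they are not the values used by the algorithm above.

        # Hedged sketch of window integration of an NGR spectrum (placeholder calibration values)
        import numpy as np

        ENERGY_WINDOWS_KEV = {"K": (1370, 1570), "U": (1660, 1860), "Th": (2520, 2720)}  # assumed windows
        SENSITIVITY = {"K": 1.0e-3, "U": 5.0e-4, "Th": 7.0e-4}  # counts/s per unit content (assumed)

        def window_counts(energies_kev, counts, window):
            lo, hi = window
            mask = (energies_kev >= lo) & (energies_kev < hi)
            return counts[mask].sum()

        def estimate_contents(energies_kev, counts, live_time_s):
            rates = {el: window_counts(energies_kev, counts, win) / live_time_s
                     for el, win in ENERGY_WINDOWS_KEV.items()}
            return {el: rates[el] / SENSITIVITY[el] for el in rates}

        energies = np.arange(0.0, 3000.0, 10.0)                       # 10 keV bins
        spectrum = np.random.default_rng(1).poisson(5, energies.size)  # synthetic spectrum
        print(estimate_contents(energies, spectrum, live_time_s=300.0))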

  5. Traffic Vehicle Counting in Jam Flow Conditions Using Low-Cost and Energy-Efficient Wireless Magnetic Sensors.

    PubMed

    Bao, Xu; Li, Haijian; Xu, Dongwei; Jia, Limin; Ran, Bin; Rong, Jian

    2016-11-06

    The jam flow condition is one of the main traffic states in traffic flow theory and the most difficult state for sectional traffic information acquisition. Since traffic information acquisition is the basis for the application of an intelligent transportation system, research on traffic vehicle counting methods for jam flow conditions has been worthwhile. A low-cost and energy-efficient type of multi-function wireless traffic magnetic sensor was designed and developed. Several advantages of the traffic magnetic sensor are that it is suitable for large-scale deployment and time-sustainable detection for traffic information acquisition. Based on the traffic magnetic sensor, a basic vehicle detection algorithm (DWVDA) with low computational complexity was introduced for vehicle counting in low traffic volume conditions. To improve the detection performance in jam flow conditions with a "tailgating effect" between front vehicles and rear vehicles, an improved vehicle detection algorithm (SA-DWVDA) was proposed and applied in field traffic environments. By deploying traffic magnetic sensor nodes in field traffic scenarios, two field experiments were conducted to test and verify the DWVDA and the SA-DWVDA algorithms. The experimental results have shown that both the DWVDA and SA-DWVDA algorithms yield a satisfactory performance in low traffic volume conditions (scenario I) and both of their mean absolute percent errors are less than 1% in this scenario. However, for jam flow conditions with heavy traffic volumes (scenario II), the SA-DWVDA was proven to achieve better results, and the mean absolute percent error of the SA-DWVDA is 2.54%, compared with 7.07% for the DWVDA. These results indicate that the proposed SA-DWVDA can implement efficient and accurate vehicle detection in jam flow conditions and can be employed in field traffic environments.

  6. Investigation of FPGA-Based Real-Time Adaptive Digital Pulse Shaping for High-Count-Rate Applications

    NASA Astrophysics Data System (ADS)

    Saxena, Shefali; Hawari, Ayman I.

    2017-07-01

    Digital signal processing techniques have been widely used in radiation spectrometry to provide improved stability and performance with compact physical size over traditional analog signal processing. In this paper, field-programmable gate array (FPGA)-based adaptive digital pulse shaping techniques are investigated for real-time signal processing. A National Instruments (NI) NI 5761 14-bit, 250-MS/s adaptor module is used for digitizing a high-purity germanium (HPGe) detector's preamplifier pulses. Digital pulse processing algorithms are implemented on the NI PXIe-7975R reconfigurable FPGA (Kintex-7) using the LabVIEW FPGA module. Based on the time separation between successive input pulses, the adaptive shaping algorithm selects the optimum shaping parameters (rise time and flattop time of trapezoid-shaping filter) for each incoming signal. A digital Sallen-Key low-pass filter is implemented to enhance signal-to-noise ratio and reduce baseline drifting in trapezoid shaping. A recursive trapezoid-shaping filter algorithm is employed for pole-zero compensation of exponentially decayed (with two decay constants) preamplifier pulses of an HPGe detector. It allows extraction of pulse height information at the beginning of each pulse, thereby reducing the pulse pileup and increasing throughput. The algorithms for the RC-CR2 timing filter, baseline restoration, pile-up rejection, and pulse height determination are digitally implemented for radiation spectroscopy. Traditionally, at high-count-rate conditions, a shorter shaping time is preferred to achieve high throughput, which deteriorates energy resolution. In this paper, experimental results are presented for varying count-rate and pulse shaping conditions. Using adaptive shaping, increased throughput is achieved while preserving the energy resolution observed with longer shaping times.
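
    The trapezoid-shaping idea itself is easy to prototype offline. The sketch below is a plain software version of a recursive trapezoidal shaper for a single-exponential pulse, in the spirit of the classic Jordanov-style recursion; it is only an illustration of the filter, not the FPGA design, the two-decay-constant compensation, or the adaptive parameter selection described above, and all numerical values are assumed.

        # Minimal sketch of a recursive trapezoidal shaper (single decay constant, assumed values)
        import numpy as np

        def trapezoid_shaper(v, rise, flat, decay_samples):
            k, l = rise, rise + flat
            M = 1.0 / (np.exp(1.0 / decay_samples) - 1.0)      # pole-zero compensation factor
            v = np.asarray(v, dtype=float)
            d = np.zeros(v.size)
            for i in range(v.size):                             # d[i] = v[i] - v[i-k] - v[i-l] + v[i-k-l]
                d[i] = (v[i] - (v[i - k] if i >= k else 0.0)
                        - (v[i - l] if i >= l else 0.0)
                        + (v[i - k - l] if i >= k + l else 0.0))
            p = np.cumsum(d)
            s = np.cumsum(p + M * d)
            return s / (k * (M + 1.0))                          # flat top ~ input pulse amplitude

        t = np.arange(2000)
        pulse = np.where(t >= 100, np.exp(-(t - 100) / 300.0), 0.0)   # unit-amplitude exponential pulse
        shaped = trapezoid_shaper(pulse, rise=50, flat=20, decay_samples=300)
        print(round(shaped.max(), 3))                            # approximately 1.0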

  7. Generating Within-Plant Spatial Distributions of an Insect Herbivore Based on Aggregation Patterns and Per-Node Infestation Probabilities.

    PubMed

    Rincon, Diego F; Hoy, Casey W; Cañas, Luis A

    2015-04-01

    Most predator-prey models extrapolate functional responses from small-scale experiments assuming spatially uniform within-plant predator-prey interactions. However, some predators focus their search in certain plant regions, and herbivores tend to select leaves to balance their nutrient uptake and exposure to plant defenses. Individual-based models that account for heterogeneous within-plant predator-prey interactions can be used to scale-up functional responses, but they would require the generation of explicit prey spatial distributions within-plant architecture models. The silverleaf whitefly, Bemisia tabaci biotype B (Gennadius) (Hemiptera: Aleyrodidae), is a significant pest of tomato crops worldwide that exhibits highly aggregated populations at several spatial scales, including within the plant. As part of an analytical framework to understand predator-silverleaf whitefly interactions, the objective of this research was to develop an algorithm to generate explicit spatial counts of silverleaf whitefly nymphs within tomato plants. The algorithm requires the plant size and the number of silverleaf whitefly individuals to distribute as inputs, and includes models that describe infestation probabilities per leaf nodal position and the aggregation pattern of the silverleaf whitefly within tomato plants and leaves. The output is a simulated number of silverleaf whitefly individuals for each leaf and leaflet on one or more plants. Parameter estimation was performed using nymph counts per leaflet censused from 30 artificially infested tomato plants. Validation revealed a substantial agreement between algorithm outputs and independent data that included the distribution of counts of both eggs and nymphs. This algorithm can be used in simulation models that explore the effect of local heterogeneity on whitefly-predator dynamics. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Algorithm for Detection of Ground and Canopy Cover in Micropulse Photon-Counting Lidar Altimeter Data in Preparation for the ICESat-2 Mission

    NASA Technical Reports Server (NTRS)

    Herzfeld, Ute Christina; McDonald, Brian W.; Neumann, Thomas Allen; Wallin, Bruce F.; Neumann, Thomas A.; Markus, Thorsten; Brenner, Anita; Field, Christopher

    2014-01-01

    NASA's Ice, Cloud and Land Elevation Satellite-II (ICESat-2) mission is a decadal survey mission (2016 launch). The mission objectives are to measure land ice elevation, sea ice freeboard, and changes in these variables, as well as to collect measurements over vegetation to facilitate canopy height determination. Two innovative components will characterize the ICESat-2 lidar: 1) collection of elevation data by a multibeam system and 2) application of micropulse lidar (photon-counting) technology. A photon-counting altimeter yields clouds of discrete points, resulting from returns of individual photons, and hence new data analysis techniques are required for elevation determination and association of the returned points to reflectors of interest. The objective of this paper is to derive an algorithm that allows detection of ground under dense canopy and identification of ground and canopy levels in simulated ICESat-2 data, based on airborne observations with a Sigma Space micropulse lidar. The mathematical algorithm uses spatial statistical and discrete mathematical concepts, including radial basis functions, density measures, geometrical anisotropy, eigenvectors, and geostatistical classification parameters and hyperparameters. Validation shows that ground and canopy elevation, and hence canopy height, can be expected to be observable with high accuracy by ICESat-2 for all expected beam energies considered for instrument design (93.01%-99.57% correctly selected points for a beam with expected return of 0.93 mean signals per shot (msp), and 72.85%-98.68% for 0.48 msp). The algorithm derived here is generally applicable for elevation determination from photon-counting lidar altimeter data collected over forested areas, land ice, sea ice, and land surfaces, as well as for cloud detection.

  9. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    PubMed

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions to allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm 'xSPECT' from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8 : 1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower-count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be obtained from images reconstructed with the xSPECT algorithm without a CCF. However, the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
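
    The manual SUV conversion mentioned at the end follows the standard body-weight definition. Below is a hedged sketch assuming the usual convention that SUV equals the tissue activity concentration divided by the decay-corrected injected activity per unit body mass; the numbers in the example are invented for illustration and do not come from this study.

        # Hedged sketch of a body-weight SUV conversion with simple decay correction
        def suv_bw(conc_bq_per_ml, injected_mbq, weight_kg, delay_min, half_life_min):
            decayed_bq = injected_mbq * 1.0e6 * 0.5 ** (delay_min / half_life_min)   # activity at scan time
            return conc_bq_per_ml / (decayed_bq / (weight_kg * 1000.0))              # ~1 g/mL tissue assumed

        # Example: 600 MBq injected, 70 kg patient, imaged 120 min post-injection (Tc-99m, T1/2 ~ 360.6 min)
        print(round(suv_bw(25000.0, 600.0, 70.0, 120.0, 360.6), 2))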

  10. Traffic Vehicle Counting in Jam Flow Conditions Using Low-Cost and Energy-Efficient Wireless Magnetic Sensors

    PubMed Central

    Bao, Xu; Li, Haijian; Xu, Dongwei; Jia, Limin; Ran, Bin; Rong, Jian

    2016-01-01

    The jam flow condition is one of the main traffic states in traffic flow theory and the most difficult state for sectional traffic information acquisition. Since traffic information acquisition is the basis for the application of an intelligent transportation system, research on traffic vehicle counting methods for jam flow conditions has been worthwhile. A low-cost and energy-efficient type of multi-function wireless traffic magnetic sensor was designed and developed. Several advantages of the traffic magnetic sensor are that it is suitable for large-scale deployment and time-sustainable detection for traffic information acquisition. Based on the traffic magnetic sensor, a basic vehicle detection algorithm (DWVDA) with low computational complexity was introduced for vehicle counting in low traffic volume conditions. To improve the detection performance in jam flow conditions with a “tailgating effect” between front vehicles and rear vehicles, an improved vehicle detection algorithm (SA-DWVDA) was proposed and applied in field traffic environments. By deploying traffic magnetic sensor nodes in field traffic scenarios, two field experiments were conducted to test and verify the DWVDA and the SA-DWVDA algorithms. The experimental results have shown that both the DWVDA and SA-DWVDA algorithms yield a satisfactory performance in low traffic volume conditions (scenario I) and both of their mean absolute percent errors are less than 1% in this scenario. However, for jam flow conditions with heavy traffic volumes (scenario II), the SA-DWVDA was proven to achieve better results, and the mean absolute percent error of the SA-DWVDA is 2.54%, compared with 7.07% for the DWVDA. These results indicate that the proposed SA-DWVDA can implement efficient and accurate vehicle detection in jam flow conditions and can be employed in field traffic environments. PMID:27827974

  11. Syndromic Surveillance Using Veterinary Laboratory Data: Algorithm Combination and Customization of Alerts

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Sanchez, Javier; Revie, Crawford W.

    2013-01-01

    Background Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. Methods This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submission counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slowly increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. Results The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals, is described. Conclusion The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes. PMID:24349216
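
    One of the three detectors combined above, the EWMA control chart, is simple to sketch. The following is a minimal illustration of flagging days on which the exponentially weighted moving average of a count series exceeds a control limit; the smoothing weight, limit multiplier, baseline length and simulated data are conventional illustrative choices, not the settings used in this study.

        # Minimal EWMA control-chart sketch for a daily count series (illustrative parameters)
        import numpy as np

        def ewma_alarms(counts, lam=0.3, L=3.0, baseline=56):
            x = np.asarray(counts, dtype=float)
            mu, sigma = x[:baseline].mean(), x[:baseline].std(ddof=1)
            limit = mu + L * sigma * np.sqrt(lam / (2.0 - lam))     # steady-state control limit
            z, alarms = mu, []
            for t in range(baseline, x.size):
                z = lam * x[t] + (1.0 - lam) * z
                if z > limit:
                    alarms.append(t)
            return alarms

        rng = np.random.default_rng(2)
        series = np.r_[rng.poisson(5, 100), rng.poisson(12, 10)]    # simulated step outbreak at day 100
        print(ewma_alarms(series))                                  # days flagged after the increase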

  12. Syndromic surveillance using veterinary laboratory data: algorithm combination and customization of alerts.

    PubMed

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Sanchez, Javier; Revie, Crawford W

    2013-01-01

    Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submission counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slowly increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals, is described. The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Visco, Donald Patrick, Jr.; Faulon, Jean-Loup Michel; Roe, Diana C.

    This report is a comprehensive review of the field of molecular enumeration from early isomer counting theories to evolutionary algorithms that design molecules in silico. The core of the review is a detailed account of how molecules are counted, enumerated, and sampled. The practical applications of molecular enumeration are also reviewed for chemical information, structure elucidation, molecular design, and combinatorial library design purposes. This review is to appear as a chapter in Reviews in Computational Chemistry, volume 21, edited by Kenny B. Lipkowitz.

  14. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian processes and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.

  15. Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin

    NASA Astrophysics Data System (ADS)

    Otiefy, R. A. H.; Negm, H. M.

    2010-12-01

    The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three different control strategies are studied and compared: a linear quadratic Gaussian (LQG) controller, which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE); an optimal static output feedback (SOF) controller; and a classic feedback controller (CFC). The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and norm of Kalman filter estimator gains (NKFEG), respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.

  16. Analytic and Observational Approaches to Spacecraft Auroral Charging.

    DTIC Science & Technology

    1986-11-01

    [The record text is OCR residue from the report documentation page. Recoverable details: personal author T. G. Barker; a division of Maxwell Laboratories, P.O. Box 1620, La Jolla, CA 92038; Scientific Report No. 1, November 1986; approved for public release.]

  17. Preparation of X-ray astronomy satellite experiment Development of computer programs for the Salyut-HEXE X-ray experiment ground station

    NASA Astrophysics Data System (ADS)

    Petrik, J.

    The engineering model of the Salyut-HEXE experiment is described. The detector system, electronics box, and ground station are addressed. The microprocessor system is considered, discussing the cards and presenting block diagrams of their functions. The telemetry is examined, including the various modes and the direct and indirect transmission modes. The ground station programs are discussed, including the tasks, program development, input and output programs, status, power supply, count rates, telemetry dump, hard copy, and checksum.

  18. Majorana Zero-Energy Mode and Fractal Structure in Fibonacci-Kitaev Chain

    NASA Astrophysics Data System (ADS)

    Ghadimi, Rasoul; Sugimoto, Takanori; Tohyama, Takami

    2017-11-01

    We theoretically study a Kitaev chain with a quasiperiodic potential, where the quasiperiodicity is introduced by a Fibonacci sequence. Based on an analysis of the Majorana zero-energy mode, we find the critical p-wave superconducting pairing potential separating a topological phase and a non-topological phase. The topological phase diagram with respect to Fibonacci potentials follows a self-similar fractal structure characterized by the box-counting dimension, an example of the interplay of fractality and topology akin to Hofstadter's butterfly in quantum Hall insulators.
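
    For reference, a box-counting dimension of the kind used to characterize such a phase diagram can be estimated with a few lines of Python: cover the set with boxes of side eps, count the occupied boxes N(eps), and fit the slope of log N(eps) against log(1/eps). The sketch below demonstrates this on the middle-third Cantor set; it is a generic illustration, not the analysis code of the study above.

        # Minimal box-counting dimension sketch, checked on the Cantor set (dimension ln2/ln3 ~ 0.63)
        import itertools
        import numpy as np

        def box_counting_dimension(points, eps_list):
            pts = np.asarray(points, dtype=float)
            counts = []
            for eps in eps_list:
                boxes = {tuple(b) for b in np.floor(pts / eps).astype(int).tolist()}
                counts.append(len(boxes))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
            return slope

        level = 10
        cantor = [sum(d / 3.0 ** (i + 1) for i, d in enumerate(digits))
                  for digits in itertools.product([0, 2], repeat=level)]
        eps = [3.0 ** -k for k in range(2, 7)]
        print(round(box_counting_dimension(np.array(cantor)[:, None], eps), 2))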

  19. An Evaluation of Characteristics Contributing towards Ease of User- Computer Interface in a Computer-Aided Instruction Exercise

    DTIC Science & Technology

    1987-12-31

    [The record text is OCR residue from the report documentation page. Recoverable details: authors include ... Kent E., Hamel, Cheryl J., and Shrestha, Lisa B. (first surname truncated); final report; responsible individual Cheryl J. Hamel, 407-380-4825, Code 712; distribution addresses include Dr Alva Bittner, Jr., P.O. Box 29407, New Orleans, LA 70189, and NETPMSA (ATTN: Mr Dennis Knott), Pensacola, FL 32509-5000.]

  20. Characterizing isolated attosecond pulses with angular streaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.

    Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  1. Characterizing isolated attosecond pulses with angular streaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.

    We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  2. Characterizing isolated attosecond pulses with angular streaking

    DOE PAGES

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...

    2018-02-12

    Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  3. Characterizing isolated attosecond pulses with angular streaking

    DOE PAGES

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...

    2018-02-13

    We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  4. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≥ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≥ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≥96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
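
    The autocorrelation idea behind the technique can be sketched generically: the spatial autocorrelation of a sediment image decays more slowly with pixel offset for coarser grains, so comparing an image's correlogram against correlograms of calibration samples of known size yields a size estimate. The code below is a heavily simplified illustration of that idea, not Rubin's (2004) published algorithm, and the lag range is an arbitrary assumption.

        # Hedged sketch: horizontal correlogram of an image and nearest-calibration size lookup
        import numpy as np

        def correlogram(image, max_lag=20):
            img = np.asarray(image, dtype=float)
            img = (img - img.mean()) / img.std()
            return np.array([(img[:, :img.shape[1] - lag] * img[:, lag:]).mean()
                             for lag in range(1, max_lag + 1)])

        def estimate_size(image, calib_correlograms, calib_sizes, max_lag=20):
            c = correlogram(image, max_lag)
            errs = [np.sum((c - cc) ** 2) for cc in calib_correlograms]   # least-squares match
            return calib_sizes[int(np.argmin(errs))]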

  5. Refining comparative proteomics by spectral counting to account for shared peptides and multiple search engines

    PubMed Central

    Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J.; Li, Ming

    2013-01-01

    Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables. PMID:22552787

  6. Refining comparative proteomics by spectral counting to account for shared peptides and multiple search engines.

    PubMed

    Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J; Li, Ming; Tabb, David L

    2012-09-01

    Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables.

  7. Secure Automated Microgrid Energy System (SAMES)

    DTIC Science & Technology

    2016-12-01

    [Recoverable fragments (the rest of the record text is residue from the report's list of figures): ... with embedded algorithm to share power between each other; Wind Turbine (WT) Simulator, max 80 kW (4×20 kW), 480 V, Running Wind Generation... Figure captions mentioned include: Point Loma, box-and-whisker plot, hourly sum of consumption; Coronado, consumption vs average daily SD temp, rainfall, wind; Naval Base Point Loma, one line, solar.]

  8. Cryptanalysis of the Sodark Family of Cipher Algorithms

    DTIC Science & Technology

    2017-09-01

    [Recoverable fragments (the rest of the record text is residue from the report documentation page): a software project for building three-bit LUT circuit representations of S-boxes is available as a GitHub repository [40] and contains several improvements...; ... second- and third-generation automatic link establishment (ALE) systems for high frequency radios; radios utilizing ALE technology are in use by a...]

  9. Geometric Folding Algorithms: Bridging Theory to Practice

    DTIC Science & Technology

    2009-11-03

    ... orthogonal polyhedron can be folded from a single, universal crease pattern (box pleating). II. ORIGAMI DESIGN: (a) developed mathematical theory for what ... happens in paper between creases, in particular for the case of circular creases; (b) circular-crease origami on permanent exhibition at MoMA in New...; developing mathematical theory of Robert Lang's TreeMaker framework for efficiently folding tree-shaped origami bases.

  10. The Center for Nonlinear Phenomena and Magnetic Materials

    DTIC Science & Technology

    1992-09-30

    [The record text is OCR residue from the report documentation page and a seminar listing. Recoverable details: Howard University (ComSERC), Washington DC 20059; AFOSR-sponsored. Seminar fragments: "Visualization - Improved Marching Cubes"; January 27, 1992: Dr. Gerald Chachere, Math Dept., Howard University, "An algorithm for box..."; Dr. James Gates, Physics Department, Howard University, "Introduction to Strings Part I"; February 5, 1992: Dr. James Gates, Physics Department, Howard...]

  11. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

    Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily of count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
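
    A minimal sketch of one ingredient, drawing counts from the generalized Poisson (GP) distribution by inversion, is shown below. The parameter values are arbitrary illustrations and the sketch assumes a non-negative dispersion parameter; imposing the pre-specified correlation structure across the mixed-type components, which is the main contribution of the paper, is not shown.

        # Hedged sketch: generalized Poisson sampling by CDF inversion (lam >= 0 assumed)
        import math
        import numpy as np

        def gp_pmf(k, theta, lam):
            if k == 0:
                return math.exp(-theta)
            logp = (math.log(theta) + (k - 1) * math.log(theta + k * lam)
                    - theta - k * lam - math.lgamma(k + 1))
            return math.exp(logp)

        def gp_rvs(theta, lam, size, seed=None, kmax=200):
            rng = np.random.default_rng(seed)
            cdf = np.cumsum([gp_pmf(k, theta, lam) for k in range(kmax)])
            return np.searchsorted(cdf, rng.random(size))

        draws = gp_rvs(theta=2.0, lam=0.3, size=10000, seed=1)
        # GP mean = theta/(1-lam) ~ 2.86, variance = theta/(1-lam)^3 ~ 5.83 (over-dispersed)
        print(round(draws.mean(), 2), round(draws.var(), 2))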

  12. Objectifying when to halt a boxing match: a video analysis of fatalities.

    PubMed

    Miele, Vincent J; Bailes, Julian E

    2007-02-01

    Although numerous prestigious medical organizations have called for its abolishment, participation in the sport of boxing has reached an all-time high among both men and women, and its elimination is unlikely in the near future. Physicians should strive to increase boxing safety by improving the rules of competition, which have evolved minimally over the past two centuries. Currently, subjective criteria are used to determine whether or not a contest should be halted. Developing a standardized, objective method of determining when a contest should be halted would be a significant paradigm shift and could increase the safety of the sport's participants. This study analyzed the number and types of punches landed in a typical professional match, in bouts considered to be competitive and in those that ended in fatalities, to determine whether or not this would be a practical method of differentiating between these groups. Three groups of professional boxing matches were defined at the beginning of the study: 1) a "fatal" group, consisting of bouts that resulted in the death of a participant; 2) a "classic" group that represented competitive matches; and 3) a "control" group of 4000 professional boxing matches representing the average bout. A computer program known as Punchstat (Compubox, Inc., Manorville, NY) was used in the objective analysis of these matches via videotape playback. Several statistically significant differences were discovered between matches that resulted in fatalities and the control group. These include the number of punches landed per round, the number of power punches landed per round, and the number of power punches thrown per round by losing boxers. However, when the fatal bouts were compared with the most competitive bouts, these differences were no longer evident. Based on the data analyzed between the control and fatal-bout groups, a computerized method of counting landed blows at ringside could provide sufficient data to stop matches that might result in fatalities. However, such a process would become less effective as matches become more competitive, and implementing such a change would significantly decrease the competitive nature of the sport. Therefore, other methods of quantifying acceleration-deceleration brain injuries are necessary to improve the safety of boxing.

  13. An efficient and secure partial image encryption for wireless multimedia sensor networks using discrete wavelet transform, chaotic maps and substitution box

    NASA Astrophysics Data System (ADS)

    Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.

    2017-03-01

    Wireless sensor networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted over the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, WSNs process large volumes of data, which significantly increases computational complexity and hence reduces battery time. Given battery life constraints, image compression combined with secure transmission over a wide-ranging sensor network is an emerging and challenging task in wireless multimedia sensor networks. Due to the open nature of the Internet, data transmission must be secured through a process known as encryption, and there has long been strong demand for schemes that are both energy efficient and highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and the image is then shuffled column-wise and row-wise via the Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. For higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from the Intertwining Logistic map. To enhance security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
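
    As a small illustration of the shuffling stage, a piece-wise linear chaotic map can be iterated from a key-dependent initial condition and the ranking of the resulting samples used as a permutation of rows or columns. The sketch below shows only that one ingredient with made-up parameter values; the DWT compression, hashing, XOR whitening and S-box substitution described above are not reproduced.

        # Hedged sketch: deriving a permutation from a piece-wise linear chaotic map (PWLCM)
        import numpy as np

        def pwlcm(x, p):
            x = min(x, 1.0 - x)              # the map is symmetric about 0.5
            return x / p if x < p else (x - p) / (0.5 - p)

        def chaotic_permutation(n, x0=0.37, p=0.29, burn_in=200):
            x, seq = x0, []
            for i in range(burn_in + n):
                x = pwlcm(x, p)
                if i >= burn_in:
                    seq.append(x)
            return np.argsort(seq)           # rank order of chaotic samples -> permutation

        perm = chaotic_permutation(8)        # e.g. a key-dependent shuffle of columns 0..7
        print(perm)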

  14. Photon Counting Using Edge-Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.

    2010-01-01

    New applications such as high-data-rate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and not to overcount the number of incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, the ability to implement such a detection algorithm becomes difficult within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented to both characterize gigahertz bandwidth single-photon detectors, as well as process photon count signals at rates into gigaphotons per second in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows for the ability to take inputs from a quad-photon counting detector, to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors. Each analog input is fed to a high-speed 1-bit comparator, which digitizes the input referenced to an adjustable threshold value. This results in four independent serial sample streams of binary 1s and 0s, which are ORed together at rates up to 10 GHz. This single serial stream is then deserialized by a factor of 16 to create 16 signal lines at a rate of 622.5 MHz or lower for input to a high-speed digital processor assembly. The new design and corresponding hardware can be employed with a quad-photon counting detector capable of handling photon rates on the order of multi-gigaphotons per second, whereas prior art was only capable of handling a single input at 1/4 the flux rate. Additionally, the hardware edge-detection algorithm has provided the ability to process 3-10 times higher photon flux rates than previously possible by removing the limitation that photon-counting detector output pulses on multiple channels being ORed must not overlap. Now, only the leading edges of the pulses are required to not overlap. This new photon counting digitizer hardware architecture supports a universal front end for an optical communications receiver operating at data rates from kilobits to over one gigabit per second to meet increased mission data volume requirements.
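
    A software analogue of the front-end logic may help clarify the data flow: threshold each channel to one bit, keep only 0-to-1 transitions (leading edges), OR the channels together, and sum edges over fixed-length words as a stand-in for deserialization. The sketch below is an illustration under those assumptions with synthetic data, not the comparator and FPGA hardware described above.

        # Hedged software sketch of leading-edge detection, channel ORing and word-wise counting
        import numpy as np

        def leading_edges(samples, threshold):
            bits = (np.asarray(samples) > threshold).astype(np.uint8)
            return (bits[1:] & ~bits[:-1] & 1).astype(np.uint8)           # 0 -> 1 transitions only

        def photon_counts(channels, threshold, word=16):
            edges = [leading_edges(ch, threshold) for ch in channels]
            combined = np.bitwise_or.reduce(np.vstack(edges), axis=0)      # OR of the four inputs
            n = (combined.size // word) * word
            return combined[:n].reshape(-1, word).sum(axis=1)              # edge count per 16-sample word

        rng = np.random.default_rng(4)
        chans = [rng.normal(0.0, 0.05, 10000) + (rng.random(10000) < 0.01) for _ in range(4)]
        print(photon_counts(chans, threshold=0.5)[:10])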

  15. Theoretical and numerical studies of chaotic mixing

    NASA Astrophysics Data System (ADS)

    Kim, Ho Jun

    Theoretical and numerical studies of chaotic mixing are performed to circumvent the difficulties of efficient mixing, which come from the lack of turbulence in microfluidic devices. In order to carry out efficient and accurate parametric studies and to identify a fully chaotic state, a spectral element algorithm for solution of the incompressible Navier-Stokes and species transport equations is developed. Using Taylor series expansions in time marching, the new algorithm employs an algebraic factorization scheme on multi-dimensional staggered spectral element grids, and extends classical conforming Galerkin formulations to nonconforming spectral elements. Lagrangian particle tracking methods are utilized to study particle dispersion in the mixing device using spectral element and fourth order Runge-Kutta discretizations in space and time, respectively. Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in microfluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. These are the stirring index based on the box counting method, Poincaré sections, finite-time Lyapunov exponents, the probability density function of the stretching field, and the mixing index inverse, based on the standard deviation of scalar species distribution. A series of numerical simulations are performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing length (lm) is characterized as a function of the Pe number, and lm ∝ ln(Pe) scaling is demonstrated for fully chaotic cases. Employing the aforementioned techniques, optimum kinematic conditions and the actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified in a zeta-potential-patterned straight microchannel, where a continuous flow is generated by superposition of a steady pressure driven flow and time periodic electroosmotic flow induced by a stream-wise AC electric field. Finally, it is shown that the invariant manifold of a hyperbolic periodic point determines the geometry of fast mixing zones in oscillatory flows in a two-dimensional cavity.
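
    Of the five measures compared, the box-counting stirring index is the simplest to illustrate: overlay a uniform grid on the domain and report the fraction of grid cells containing at least one tracer particle. The sketch below demonstrates the idea on synthetic particle clouds; the grid size and particle positions are arbitrary assumptions, not output of the spectral element solver described above.

        # Hedged sketch of a box-counting stirring index for tracer particles in a unit square
        import numpy as np

        def stirring_index(positions, domain=(1.0, 1.0), nbox=32):
            pos = np.asarray(positions, dtype=float)
            ij = np.clip(np.floor(pos / np.array(domain) * nbox).astype(int), 0, nbox - 1)
            occupied = {tuple(p) for p in ij.tolist()}
            return len(occupied) / float(nbox * nbox)

        rng = np.random.default_rng(3)
        blob = 0.05 * rng.random((5000, 2)) + 0.4        # unmixed: particles confined to a small patch
        mixed = rng.random((5000, 2))                    # well mixed: particles fill the domain
        print(round(stirring_index(blob), 3), round(stirring_index(mixed), 3))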

  16. A loop-counting method for covariate-corrected low-rank biclustering of gene-expression and genome-wide association study data.

    PubMed

    Rangan, Aaditya V; McGrouther, Caroline C; Kelsoe, John; Schork, Nicholas; Stahl, Eli; Zhu, Qian; Krishnan, Arjun; Yao, Vicky; Troyanskaya, Olga; Bilaloglu, Seda; Raghavan, Preeti; Bergen, Sarah; Jureus, Anders; Landen, Mikael

    2018-05-14

    A common goal in data-analysis is to sift through a large data-matrix and detect any significant submatrices (i.e., biclusters) that have a low numerical rank. We present a simple algorithm for tackling this biclustering problem. Our algorithm accumulates information about 2-by-2 submatrices (i.e., 'loops') within the data-matrix, and focuses on rows and columns of the data-matrix that participate in an abundance of low-rank loops. We demonstrate, through analysis and numerical-experiments, that this loop-counting method performs well in a variety of scenarios, outperforming simple spectral methods in many situations of interest. Another important feature of our method is that it can easily be modified to account for aspects of experimental design which commonly arise in practice. For example, our algorithm can be modified to correct for controls, categorical- and continuous-covariates, as well as sparsity within the data. We demonstrate these practical features with two examples; the first drawn from gene-expression analysis and the second drawn from a much larger genome-wide-association-study (GWAS).

  17. A splay tree-based approach for efficient resource location in P2P networks.

    PubMed

    Zhou, Wei; Tan, Zilong; Yao, Shaowen; Wang, Shipu

    2014-01-01

    Resource location in structured P2P systems has a critical influence on system performance. Existing analytical studies of the Chord protocol have shown some potential improvements in performance. In this paper a new splay-tree-based Chord structure called SChord is proposed to improve the efficiency of locating resources. We consider a novel implementation of the Chord finger table (routing table) based on the splay tree. This approach extends the Chord finger table with additional routing entries. An adaptive routing algorithm is proposed for implementation, and it can be shown that the hop count is significantly reduced without introducing any other protocol overheads. We analyze the hop count of the adaptive routing algorithm, as compared to Chord variants, and demonstrate sharp upper and lower bounds for both worst-case and average-case settings. In addition, we theoretically analyze the hop reduction in SChord and show that SChord can significantly reduce the routing hops as compared to Chord. Several simulations are presented to evaluate the performance of the algorithm and support our analytical findings. The simulation results show the efficiency of SChord.

  18. Adaptive Bloom Filter: A Space-Efficient Counting Algorithm for Unpredictable Network Traffic

    NASA Astrophysics Data System (ADS)

    Matsumoto, Yoshihide; Hazeyama, Hiroaki; Kadobayashi, Youki

    The Bloom Filter (BF), a space-and-time-efficient hashcoding method, is used as one of the fundamental modules in several network processing algorithms and applications such as route lookups, cache hits, packet classification, per-flow state management or network monitoring. BF is a simple space-efficient randomized data structure used to represent a data set in order to support membership queries. However, BF generates false positives, and cannot count the number of distinct elements. A counting Bloom Filter (CBF) can count the number of distinct elements, but CBF needs more space than BF. We propose an alternative data structure to CBF, which we call an Adaptive Bloom Filter (ABF). Although ABF uses the same-sized bit-vector used in BF, the number of hash functions employed by ABF is dynamically changed to record the number of appearances of each key element. Considering the hash collisions, the multiplicity of each key element in the ABF can be estimated from the number of hash functions used to decode the membership of each key element. Although ABF can realize the same functionality as CBF, ABF requires the same memory size as BF. We describe the construction of ABF and IABF (Improved ABF), and provide a mathematical analysis and simulation using Zipf's distribution. Finally, we show that ABF can be used for an unpredictable data set such as real network traffic.
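
    The counting mechanism can be sketched compactly: every insertion of a key sets the bits of one additional hash function beyond those already used for that key, and the multiplicity is read back as the number of consecutive "extra" hash functions whose bits are all set. The code below is a minimal toy illustration of that idea with assumed sizes and a convenience hash; it ignores the collision analysis that the paper provides.

        # Toy sketch of the Adaptive Bloom Filter counting idea (sizes and hash are assumptions)
        import hashlib

        class AdaptiveBloomFilter:
            def __init__(self, m=1 << 16, base_k=4):
                self.m, self.base_k = m, base_k
                self.bits = bytearray(m // 8)

            def _hash(self, key, i):
                digest = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
                return int.from_bytes(digest, "big") % self.m

            def _get(self, pos):
                return (self.bits[pos // 8] >> (pos % 8)) & 1

            def _set(self, pos):
                self.bits[pos // 8] |= 1 << (pos % 8)

            def add(self, key):
                k = self.base_k + self.estimate(key) + 1      # one more hash function per occurrence
                for i in range(k):
                    self._set(self._hash(key, i))

            def estimate(self, key):
                i = 0                                          # count consecutive set "extra" hashes
                while self._get(self._hash(key, self.base_k + i)):
                    i += 1
                return i

        abf = AdaptiveBloomFilter()
        for _ in range(3):
            abf.add("10.0.0.1")
        print(abf.estimate("10.0.0.1"))                        # -> 3, barring hash collisions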

  19. Low-Power, 8-Channel EEG Recorder and Seizure Detector ASIC for a Subdermal Implantable System.

    PubMed

    Do Valle, Bruno G; Cash, Sydney S; Sodini, Charles G

    2016-12-01

    EEG remains the mainstay test for the diagnosis and treatment of patients with epilepsy. Unfortunately, ambulatory EEG systems are far from ideal for patients who have infrequent seizures. These systems only last up to 3 days and if a seizure is not captured during the recordings, a definite diagnosis of the patient's condition cannot be given. This work aims to address this need by proposing a subdermal implantable, eight-channel EEG recorder and seizure detector that has two modes of operation: diagnosis and seizure counting. In the diagnosis mode, EEG is continuously recorded until a number of seizures are recorded. In the seizure counting mode, the system uses a low-power algorithm to track the number of seizures a patient has, providing doctors with a reliable count to help determine medication efficacy or other clinical endpoint. An ASIC that implements the EEG recording and seizure detection algorithm was designed and fabricated in a 0.18 μm CMOS process. The ASIC includes eight EEG channels and is designed to minimize the system's power and size. The result is a power-efficient analog front end that requires 2.75 μW per channel in diagnosis mode and 0.84 μW per channel in seizure counting mode. Both modes have an input referred noise of approximately 1.1 μVrms.

  20. An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Yang, Chien-Ting; Shou, Yu-Wen; Shen, Tzu-Kuei

    2010-12-01

    We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a complete structure for feature combination and analysis that orients and labels moving shadows, so that the defined foreground objects can be extracted more easily from each frame of videos acquired in real traffic situations. We use a Gaussian mixture model (GMM) for background removal and detection of moving shadows in the tested images, and define two indices for characterizing non-shadowed regions: one captures line characteristics, while the other is based on the gray-scale information of the image and is used to build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To demonstrate the effectiveness of the moving shadow algorithm, we apply it to a practical traffic-flow detection application in ITS (intelligent transportation systems): vehicle counting. Our algorithm achieves a fast processing speed of 13.84 ms/frame and improves the vehicle-counting accuracy by 4%-10% on our three test videos.
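
    In the same spirit, shadow-aware GMM background subtraction is available off the shelf in OpenCV, whose MOG2 subtractor labels shadow pixels separately so they can be discarded before counting foreground blobs. The sketch below only illustrates that generic pipeline; the input file name is hypothetical, and the darkening-ratio features and counting logic of the algorithm above are not reproduced.

        # Hedged sketch: GMM (MOG2) background subtraction with shadow removal, then blob counting
        import cv2

        cap = cv2.VideoCapture("traffic.avi")     # hypothetical input clip
        bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
        blob_counts = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = bg.apply(frame)                           # 255 = foreground, 127 = shadow, 0 = background
            vehicles = (mask == 255).astype("uint8") * 255   # keep foreground, drop detected shadow pixels
            n_labels, _ = cv2.connectedComponents(vehicles)
            blob_counts.append(n_labels - 1)                 # rough per-frame blob count, not a full counter
        cap.release()
        print(blob_counts[:10])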

  1. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.

  2. Oscillatory haematopoiesis in adults with sickle cell disease treated with hydroxycarbamide.

    PubMed

    Baird, John H; Minniti, Caterina P; Lee, Jung-Min; Tian, Xin; Wu, Colin; Jackson, Mary; Alam, Shoaib; Taylor, James G; Kato, Gregory J

    2015-03-01

    Hydroxycarbamide therapy has been associated with significant oscillations in peripheral blood counts from myeloid, lymphoid and erythroid lineages in patients with polycythaemia vera and chronic myeloid leukaemia. We retrospectively evaluated serial blood counts over an 8-year period from 44 adult patients with sickle cell disease receiving hydroxycarbamide. Platelet counts, leucocyte counts, haemoglobin values and reticulocyte counts, apportioned by hydroxycarbamide status, were analysed using a Lomb-Scargle periodogram algorithm. Significant periodicities were present in one or more counts in 38 patients receiving hydroxycarbamide for a mean duration of 4·81 years. Platelet and leucocyte counts oscillated in 56·8% and 52·3% of patients, respectively. These oscillations generally became detectable within days of initiating therapy. During hydroxycarbamide therapy, the predominant periods of oscillation were 27 ± 1 d for platelet counts and 15 ± 1 d for leucocyte counts. Despite an absolute decrease in leucocyte and platelet counts during hydroxycarbamide treatment, the amplitudes between nadirs and zeniths remained similar regardless of exposure. Our observations appear consistent with previously proposed models of cyclic haematopoiesis, and document that hydroxycarbamide-induced oscillations in blood counts are innocuous phenomena not limited to myeloproliferative disorders as described previously. We speculate the known cell cycle inhibitory properties of hydroxycarbamide may accentuate otherwise latent constitutive oscillatory haematopoiesis. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
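
    The periodicity search itself can be reproduced generically with the Lomb-Scargle periodogram available in SciPy, which handles the unevenly spaced sampling typical of clinical blood counts. The sketch below uses synthetic data with a 27-day cycle purely for illustration; it is not the patients' data or the exact analysis settings of the study.

        # Hedged sketch: Lomb-Scargle periodogram of irregularly sampled counts (synthetic data)
        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(7)
        t = np.sort(rng.uniform(0.0, 365.0, 120))                       # irregular sampling days
        platelets = 300 + 60 * np.sin(2 * np.pi * t / 27.0) + 15 * rng.standard_normal(t.size)

        periods = np.linspace(5.0, 60.0, 500)                           # candidate periods in days
        freqs = 2 * np.pi / periods                                     # angular frequencies
        power = lombscargle(t, platelets - platelets.mean(), freqs, normalize=True)
        print("strongest period ~", round(periods[int(np.argmax(power))], 1), "days")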

  3. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered inmore » this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.« less

  4. A new parallelization scheme for adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered inmore » this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.« less

  5. Spatial Pattern of Cell Damage in Tissue from Heavy Ions

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem L.; Huff, Janice L.; Cucinotta, Francis A.

    2007-01-01

    A new Monte Carlo algorithm was developed that can model passage of heavy ions in a tissue, and their action on the cellular matrix for 2- or 3-dimensional cases. The build-up of secondaries such as projectile fragments, target fragments, other light fragments, and delta-rays was simulated. Cells were modeled as a cell culture monolayer in one example, where the data were taken directly from microscopy (2-d cell matrix). A simple model of tissue was given as abstract spheres with close approximation to real cell geometries (3-d cell matrix), as well as a realistic model of tissue was proposed based on microscopy images. Image segmentation was used to identify cells in an irradiated cell culture monolayer, or slices of tissue. The cells were then inserted into the model box pixel by pixel. In the case of cell monolayers (2-d), the image size may exceed the modeled box size. Such image was is moved with respect to the box in order to sample as many cells as possible. In the case of the simple tissue (3-d), the tissue box is modeled with periodic boundary conditions, which extrapolate the technique to macroscopic volumes of tissue. For real tissue, specific spatial patterns for cell apoptosis and necrosis are expected. The cell patterns were modeled based on action cross sections for apoptosis and necrosis estimated based on BNL data, and other experimental data.

  6. POD Model Reconstruction for Gray-Box Fault Detection

    NASA Technical Reports Server (NTRS)

    Park, Han; Zak, Michail

    2007-01-01

    Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system.The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.

  7. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  8. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  9. On DESTINY Science Instrument Electrical and Electronics Subsystem Framework

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Benford, Dominic J.; Lauer, Tod R.

    2009-01-01

    Future space missions are going to require large focal planes with many sensing arrays and hundreds of millions of pixels all read out at high data rates'' . This will place unique demands on the electrical and electronics (EE) subsystem design and it will be critically important to have high technology readiness level (TRL) EE concepts ready to support such missions. One such omission is the Joint Dark Energy Mission (JDEM) charged with making precise measurements of the expansion rate of the universe to reveal vital clues about the nature of dark energy - a hypothetical form of energy that permeates all of space and tends to increase the rate of the expansion. One of three JDEM concept studies - the Dark Energy Space Telescope (DESTINY) was conducted in 2008 at the NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Maryland. This paper presents the EE subsystem framework, which evolved from the DESTINY science instrument study. It describes the main challenges and implementation concepts related to the design of an EE subsystem featuring multiple focal planes populated with dozens of large arrays and millions of pixels. The focal planes are passively cooled to cryogenic temperatures (below 140 K). The sensor mosaic is controlled by a large number of Readout Integrated Circuits and Application Specific Integrated Circuits - the ROICs/ASICs in near proximity to their sensor focal planes. The ASICs, in turn, are serviced by a set of "warm" EE subsystem boxes performing Field Programmable Gate Array (FPGA) based digital signal processing (DSP) computations of complex algorithms, such as sampling-up-the-ramp algorithm (SUTR), over large volumes of fast data streams. The SUTR boxes are supported by the Instrument Control/Command and Data Handling box (ICDH Primary and Backup boxes) for lossless data compression, command and low volume telemetry handling, power conversion and for communications with the spacecraft. The paper outlines how the JDEM DESTINY concept instrument EE subsystem can be built now, a design; which is generally U.S. Government work not protected by U.S. copyright IEEEAC paper # 1429. Version 4. Updated October 19, 2009 applicable to a wide variety of missions using large focal planes with lar ge mosaics of sensors.

  10. Interobserver variability in identification of breast tumors in MRI and its implications for prognostic biomarkers and radiogenomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, Ashirbani, E-mail: as698@duke.edu; Grimm, La

    Purpose: To assess the interobserver variability of readers when outlining breast tumors in MRI, study the reasons behind the variability, and quantify the effect of the variability on algorithmic imaging features extracted from breast MRI. Methods: Four readers annotated breast tumors from the MRI examinations of 50 patients from one institution using a bounding box to indicate a tumor. All of the annotated tumors were biopsy proven cancers. The similarity of bounding boxes was analyzed using Dice coefficients. An automatic tumor segmentation algorithm was used to segment tumors from the readers’ annotations. The segmented tumors were then compared between readersmore » using Dice coefficients as the similarity metric. Cases showing high interobserver variability (average Dice coefficient <0.8) after segmentation were analyzed by a panel of radiologists to identify the reasons causing the low level of agreement. Furthermore, an imaging feature, quantifying tumor and breast tissue enhancement dynamics, was extracted from each segmented tumor for a patient. Pearson’s correlation coefficients were computed between the features for each pair of readers to assess the effect of the annotation on the feature values. Finally, the authors quantified the extent of variation in feature values caused by each of the individual reasons for low agreement. Results: The average agreement between readers in terms of the overlap (Dice coefficient) of the bounding box was 0.60. Automatic segmentation of tumor improved the average Dice coefficient for 92% of the cases to the average value of 0.77. The mean agreement between readers expressed by the correlation coefficient for the imaging feature was 0.96. Conclusions: There is a moderate variability between readers when identifying the rectangular outline of breast tumors on MRI. This variability is alleviated by the automatic segmentation of the tumors. Furthermore, the moderate interobserver variability in terms of the bounding box does not translate into a considerable variability in terms of assessment of enhancement dynamics. The authors propose some additional ways to further reduce the interobserver variability.« less

  11. Optimizations for the EcoPod field identification tool

    PubMed Central

    Manoharan, Aswath; Stamberger, Jeannie; Yu, YuanYuan; Paepcke, Andreas

    2008-01-01

    Background We sketch our species identification tool for palm sized computers that helps knowledgeable observers with census activities. An algorithm turns an identification matrix into a minimal length series of questions that guide the operator towards identification. Historic observation data from the census geographic area helps minimize question volume. We explore how much historic data is required to boost performance, and whether the use of history negatively impacts identification of rare species. We also explore how characteristics of the matrix interact with the algorithm, and how best to predict the probability of observing a previously unseen species. Results Point counts of birds taken at Stanford University's Jasper Ridge Biological Preserve between 2000 and 2005 were used to examine the algorithm. A computer identified species by correctly answering, and counting the algorithm's questions. We also explored how the character density of the key matrix and the theoretical minimum number of questions for each bird in the matrix influenced the algorithm. Our investigation of the required probability smoothing determined whether Laplace smoothing of observation probabilities was sufficient, or whether the more complex Good-Turing technique is required. Conclusion Historic data improved identification speed, but only impacted the top 25% most frequently observed birds. For rare birds the history based algorithms did not impose a noticeable penalty in the number of questions required for identification. For our dataset neither age of the historic data, nor the number of observation years impacted the algorithm. Density of characters for different taxa in the identification matrix did not impact the algorithms. Intrinsic differences in identifying different birds did affect the algorithm, but the differences affected the baseline method of not using historic data to exactly the same degree. We found that Laplace smoothing performed better for rare species than Simple Good-Turing, and that, contrary to expectation, the technique did not then adversely affect identification performance for frequently observed birds. PMID:18366649

  12. FMRQ-A Multiagent Reinforcement Learning Algorithm for Fully Cooperative Tasks.

    PubMed

    Zhang, Zhen; Zhao, Dongbin; Gao, Junwei; Wang, Dongqing; Dai, Yujie

    2017-06-01

    In this paper, we propose a multiagent reinforcement learning algorithm dealing with fully cooperative tasks. The algorithm is called frequency of the maximum reward Q-learning (FMRQ). FMRQ aims to achieve one of the optimal Nash equilibria so as to optimize the performance index in multiagent systems. The frequency of obtaining the highest global immediate reward instead of immediate reward is used as the reinforcement signal. With FMRQ each agent does not need the observation of the other agents' actions and only shares its state and reward at each step. We validate FMRQ through case studies of repeated games: four cases of two-player two-action and one case of three-player two-action. It is demonstrated that FMRQ can converge to one of the optimal Nash equilibria in these cases. Moreover, comparison experiments on tasks with multiple states and finite steps are conducted. One is box-pushing and the other one is distributed sensor network problem. Experimental results show that the proposed algorithm outperforms others with higher performance.

  13. Test-state approach to the quantum search problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrawat, Arun; Nguyen, Le Huy; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 117597

    2011-05-15

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These testmore » states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.« less

  14. Algorithm and Architecture Independent Benchmarking with SEAK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, andmore » weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.« less

  15. Analysis of Resting-State fMRI Topological Graph Theory Properties in Methamphetamine Drug Users Applying Box-Counting Fractal Dimension.

    PubMed

    Siyah Mansoory, Meysam; Oghabian, Mohammad Ali; Jafari, Amir Homayoun; Shahbabaie, Alireza

    2017-01-01

    Graph theoretical analysis of functional Magnetic Resonance Imaging (fMRI) data has provided new measures of mapping human brain in vivo. Of all methods to measure the functional connectivity between regions, Linear Correlation (LC) calculation of activity time series of the brain regions as a linear measure is considered the most ubiquitous one. The strength of the dependence obligatory for graph construction and analysis is consistently underestimated by LC, because not all the bivariate distributions, but only the marginals are Gaussian. In a number of studies, Mutual Information (MI) has been employed, as a similarity measure between each two time series of the brain regions, a pure nonlinear measure. Owing to the complex fractal organization of the brain indicating self-similarity, more information on the brain can be revealed by fMRI Fractal Dimension (FD) analysis. In the present paper, Box-Counting Fractal Dimension (BCFD) is introduced for graph theoretical analysis of fMRI data in 17 methamphetamine drug users and 18 normal controls. Then, BCFD performance was evaluated compared to those of LC and MI methods. Moreover, the global topological graph properties of the brain networks inclusive of global efficiency, clustering coefficient and characteristic path length in addict subjects were investigated too. Compared to normal subjects by using statistical tests (P<0.05), topological graph properties were postulated to be disrupted significantly during the resting-state fMRI. Based on the results, analyzing the graph topological properties (representing the brain networks) based on BCFD is a more reliable method than LC and MI.

  16. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, which success ratio would have been achieved in 53% of random trials with the null hypothesis.

  17. Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices

    NASA Astrophysics Data System (ADS)

    Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.

    2017-08-01

    We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.

  18. Correlation of Fractal Dimension Values with Implant Insertion Torque and Resonance Frequency Values at Implant Recipient Sites.

    PubMed

    Suer, Berkay Tolga; Yaman, Zekai; Buyuksarac, Bora

    2016-01-01

    Fractal analysis is a mathematical method used to describe the internal architecture of complex structures such as trabecular bone. Fractal analysis of panoramic radiographs of implant recipient sites could help to predict the quality of the bone prior to implant placement. This study investigated the correlations between the fractal dimension values obtained from panoramic radiographs and the insertion torque and resonance frequency values of mandibular implants. Thirty patients who received a total of 55 implants of the same brand, diameter, and length in the mandibular premolar and molar regions were included in the study. The same surgical procedures were applied to each patient, and the insertion torque and resonance frequency values were recorded for each implant at the time of placement. The radiographic fractal dimensions of the alveolar bone in the implant recipient area were calculated from preoperative panoramic radiographs using a box-counting algorithm. The insertion torque and resonance frequency values were compared with the fractal dimension values using the Spearman test. All implants were successful, and none were lost during the follow-up period. Linear correlations were observed between the fractal dimension and resonance frequency, between the fractal dimension and insertion torque, and between resonance frequency and insertion torque. These results suggest that the noninvasive measurement of the fractal dimension from panoramic radiographs might help to predict the bone quality, and thus the primary stability of dental implants, before implant surgery.

  19. Downsampling Photodetector Array with Windowing

    NASA Technical Reports Server (NTRS)

    Patawaran, Ferze D.; Farr, William H.; Nguyen, Danh H.; Quirk, Kevin J.; Sahasrabudhe, Adit

    2012-01-01

    In a photon counting detector array, each pixel in the array produces an electrical pulse when an incident photon on that pixel is detected. Detection and demodulation of an optical communication signal that modulated the intensity of the optical signal requires counting the number of photon arrivals over a given interval. As the size of photon counting photodetector arrays increases, parallel processing of all the pixels exceeds the resources available in current application-specific integrated circuit (ASIC) and gate array (GA) technology; the desire for a high fill factor in avalanche photodiode (APD) detector arrays also precludes this. Through the use of downsampling and windowing portions of the detector array, the processing is distributed between the ASIC and GA. This allows demodulation of the optical communication signal incident on a large photon counting detector array, as well as providing architecture amenable to algorithmic changes. The detector array readout ASIC functions as a parallel-to-serial converter, serializing the photodetector array output for subsequent processing. Additional downsampling functionality for each pixel is added to this ASIC. Due to the large number of pixels in the array, the readout time of the entire photodetector is greater than the time between photon arrivals; therefore, a downsampling pre-processing step is done in order to increase the time allowed for the readout to occur. Each pixel drives a small counter that is incremented at every detected photon arrival or, equivalently, the charge in a storage capacitor is incremented. At the end of a user-configurable counting period (calculated independently from the ASIC), the counters are sampled and cleared. This downsampled photon count information is then sent one counter word at a time to the GA. For a large array, processing even the downsampled pixel counts exceeds the capabilities of the GA. Windowing of the array, whereby several subsets of pixels are designated for processing, is used to further reduce the computational requirements. The grouping of the designated pixel frame as the photon count information is sent one word at a time to the GA, the aggregation of the pixels in a window can be achieved by selecting only the designated pixel counts from the serial stream of photon counts, thereby obviating the need to store the entire frame of pixel count in the gate array. The pixel count se quence from each window can then be processed, forming lower-rate pixel statistics for each window. By having this processing occur in the GA rather than in the ASIC, future changes to the processing algorithm can be readily implemented. The high-bandwidth requirements of a photon counting array combined with the properties of the optical modulation being detected by the array present a unique problem that has not been addressed by current CCD or CMOS sensor array solutions.

  20. ChemTS: an efficient python library for de novo molecular generation.

    PubMed

    Yang, Xiufeng; Zhang, Jinzhe; Yoshizoe, Kazuki; Terayama, Kei; Tsuda, Koji

    2017-01-01

    Automatic design of organic materials requires black-box optimization in a vast chemical space. In conventional molecular design algorithms, a molecule is built as a combination of predetermined fragments. Recently, deep neural network models such as variational autoencoders and recurrent neural networks (RNNs) are shown to be effective in de novo design of molecules without any predetermined fragments. This paper presents a novel Python library ChemTS that explores the chemical space by combining Monte Carlo tree search and an RNN. In a benchmarking problem of optimizing the octanol-water partition coefficient and synthesizability, our algorithm showed superior efficiency in finding high-scoring molecules. ChemTS is available at https://github.com/tsudalab/ChemTS.

  1. ChemTS: an efficient python library for de novo molecular generation

    NASA Astrophysics Data System (ADS)

    Yang, Xiufeng; Zhang, Jinzhe; Yoshizoe, Kazuki; Terayama, Kei; Tsuda, Koji

    2017-12-01

    Automatic design of organic materials requires black-box optimization in a vast chemical space. In conventional molecular design algorithms, a molecule is built as a combination of predetermined fragments. Recently, deep neural network models such as variational autoencoders and recurrent neural networks (RNNs) are shown to be effective in de novo design of molecules without any predetermined fragments. This paper presents a novel Python library ChemTS that explores the chemical space by combining Monte Carlo tree search and an RNN. In a benchmarking problem of optimizing the octanol-water partition coefficient and synthesizability, our algorithm showed superior efficiency in finding high-scoring molecules. ChemTS is available at https://github.com/tsudalab/ChemTS.

  2. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    THe paper discusses regridding options: (1) The problem of interpolating data that is not sampled on a uniform grid, that is noisy, and contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - Fast and easy but shows some artifacts in shaded relief images. (b) Simplical interpolator - uses plane going through three points containing point where interpolation is required. Reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First or second order surface fitting - Uses the height data centered in a box about a given point and does a weighted least squares surface fit.

  3. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multi-modality optic nerve head image fusion approach has been successfully designed. The new approach has been applied on three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It has achieved an excellent result by giving the visualization of fundus or oxygen saturation images with a complete angiogram overlay. During this study, two contributions have been made in terms of novelty, efficiency, and accuracy. The first contribution is the automated control point detection algorithm for multi-sensor images. The new method employs retina vasculature and bifurcation features by identifying the initial good-guess of control points using the Adaptive Exploratory Algorithm. The second contribution is the heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iteration algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and finally an optimal fused image is generated at the end of the iteration. It is the first time that Mutual-Pixel-Count concept has been introduced into biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time and get a sense of disease progress and pinpoint surgical tools. The new algorithm can be easily expanded to human or animals' 3D eye, brain, or body image registration and fusion.

  4. Performance Assessment of the Optical Transient Detector and Lightning Imaging Sensor. Part 2; Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Christian, Hugh J.; Blakeslee, Richard; Boccippio, Dennis J.; Goodman, Steve J.; Boeck, William

    2006-01-01

    We describe the clustering algorithm used by the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) for combining the lightning pulse data into events, groups, flashes, and areas. Events are single pixels that exceed the LIS/OTD background level during a single frame (2 ms). Groups are clusters of events that occur within the same frame and in adjacent pixels. Flashes are clusters of groups that occur within 330 ms and either 5.5 km (for LIS) or 16.5 km (for OTD) of each other. Areas are clusters of flashes that occur within 16.5 km of each other. Many investigators are utilizing the LIS/OTD flash data; therefore, we test how variations in the algorithms for the event group and group-flash clustering affect the flash count for a subset of the LIS data. We divided the subset into areas with low (1-3), medium (4-15), high (16-63), and very high (64+) flashes to see how changes in the clustering parameters affect the flash rates in these different sizes of areas. We found that as long as the cluster parameters are within about a factor of two of the current values, the flash counts do not change by more than about 20%. Therefore, the flash clustering algorithm used by the LIS and OTD sensors create flash rates that are relatively insensitive to reasonable variations in the clustering algorithms.

  5. Digital algorithms for parallel pipelined single-detector homodyne fringe counting in laser interferometry

    NASA Astrophysics Data System (ADS)

    Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef

    2016-12-01

    The homodyne detection with only a single detector represents a promising approach in the interferometric application which enables a significant reduction of the optical system complexity while preserving the fundamental resolution and dynamic range of the single frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real time implementation on a programmable hardware platform (e.g. the FPGA - Field Programmable Gate Arrays or the SoC - System on Chip). The algorithmic methods incorporate (a) the single detector signal (sine) scaling, filtering, demodulations and mixing necessary for the second (cosine) quadrature signal reconstruction followed by a conic section projection in Cartesian plane as well as (a) the phase unwrapping together with the goniometric and linear transformations needed for the scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz which would allow to measure the displacements at the velocities around half metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that employed the advantage pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they are bringing the fringe counting interferometry closer to the industrial applications due to their optical setup simplicity and robustness, computational stability, scalability and also a cost-effectiveness.

  6. Changes to the Spectral Extraction Algorithm at the Third COS FUV Lifetime Position

    NASA Astrophysics Data System (ADS)

    Taylor, Joanna M.; Azalee Bostroem, K.; Debes, John H.; Ely, Justin; Hernandez, Svea; Hodge, Philip E.; Jedrzejewski, Robert I.; Lindsay, Kevin; Lockwood, Sean A.; Massa, Derck; Oliveira, Cristina M.; Penton, Steven V.; Proffitt, Charles R.; Roman-Duval, Julia; Sahnow, David J.; Sana, Hugues; Sonnentrucker, Paule

    2015-01-01

    Due to the effects of gain sag on flux on the COS FUV microchannel plate detector, the COS FUV spectra will be moved in February 2015 to a pristine location on the detector, from Lifetime Position 2 (LP2) to LP3. The spectra will be shifted in the cross-dispersion (XD) direction by -2.5", about -31 pixels, from the original LP1. In contrast, LP2 was shifted by +3.5", about 41 pixels, from LP1. By reducing the LP3-LP1 separation compared to the LP2-LP1 separation, we achieve maximal spectral resolution at LP3 while preserving more detector area for future lifetime positions. In the current version of the COS boxcar extraction algorithm, flux is summed within a box of fixed height that is larger than the PSF. Bad pixels located anywhere within the extraction box cause the entire column to be discarded. At the new LP3 position the current extraction box will overlap with LP1 regions of low gain (pixels which have lost >5% of their sensitivity). As a result, large portions of spectra will be discarded, even though these flagged pixels will be located in the wings of the profiles and contain a negligible fraction of the total source flux. To avoid unnecessarily discarding columns affected by such pixels, an algorithm is needed that can judge whether the effects of gain-sagged pixels on the extracted flux are significant. The "two-zone" solution adopted for pipeline use was tailored specifically for the COS FUV data characteristics: First, using a library of 1-D spectral centroid ("trace") locations, residual geometric distortions in the XD direction are removed. Next, 2-D template profiles are aligned with the observed spectral image. Encircled energy contours are calculated and an inner zone that contains 80% of the flux is defined, as well as an outer zone that contains 99% of the flux. With this approach, only pixels flagged as bad in the inner 80% zone will cause columns to be discarded while flagged pixels in the outer zones do not affect extraction. Finally, all good columns are summed in the XD direction to obtain a 1-D extracted spectrum. We present examples of the trace and profile libraries that are used in the two-zone extraction and compare the performance of the two-zone and boxcar algorithms.

  7. Load Balancing in Stochastic Networks: Algorithms, Analysis, and Game Theory

    DTIC Science & Technology

    2014-04-16

    SECURITY CLASSIFICATION OF: The classic randomized load balancing model is the so-called supermarket model, which describes a system in which...P.O. Box 12211 Research Triangle Park, NC 27709-2211 mean-field limits, supermarket model, thresholds, game, randomized load balancing REPORT...balancing model is the so-called supermarket model, which describes a system in which customers arrive to a service center with n parallel servers according

  8. Transactions of The Army Conference on Applied Mathematics and Computing (5th) Held in West Point, New York on 15-18 June 1987

    DTIC Science & Technology

    1988-03-01

    29 Statistical Machine Learning for the Cognitive Selection of Nonlinear Programming Algorithms in Engineering Design Optimization Toward...interpolation and Interpolation by Box Spline Surfaces Charles K. Chui, Harvey Diamond, Louise A. Raphael. 301 Knot Selection for Least Squares...West Virginia University, Morgantown, West Virginia; and Louise Raphael, National Science Foundation, Washington, DC Knot Selection for Least

  9. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method in only single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.

  10. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a \\emph{standard technique} in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classicalmore » $(r,s,k)$$ transfer functions. Discrete event systems are often \\emph{used} for modeling hybrid control structures and high-level decision problems. \\emph{Examples include} discrete time, discrete strategy repeated games. For these games, a \\emph{discrete transfer function in the form of} an accurate hidden Markov model of input-output relations \\emph{could be used to derive optimal response strategies.} In this paper, we develop an algorithm \\emph{for} creating probabilistic \\textit{Mealy machines} that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, $$(l_1, l_2, k)$ just as the Box-Jenkins transfer function models. Here $$l_1$$ is the maximal input history lengths to consider, $$l_2$$ is the maximal output history lengths to consider and $k$ is the response lag. Using related results, We show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.« less

  11. Position-adaptive explosive detection concepts for swarming micro-UAVs

    NASA Astrophysics Data System (ADS)

    Selmic, Rastko R.; Mitra, Atindra

    2008-04-01

    We have formulated a series of position-adaptive sensor concepts for explosive detection applications using swarms of micro-UAV's. These concepts are a generalization of position-adaptive radar concepts developed for challenging conditions such as urban environments. For radar applications, this concept is developed with platforms within a UAV swarm that spatially-adapt to signal leakage points on the perimeter of complex clutter environments to collect information on embedded objects-of-interest. The concept is generalized for additional sensors applications by, for example, considering a wooden cart that contains explosives. We can formulate system-of-systems concepts for a swarm of micro-UAV's in an effort to detect whether or not a given cart contains explosives. Under this new concept, some of the members of the UAV swarm can serve as position-adaptive "transmitters" by blowing air over the cart and some of the members of the UAV swarm can serve as position-adaptive "receivers" that are equipped with chem./bio sensors that function as "electronic noses". The final objective can be defined as improving the particle count for the explosives in the air that surrounds a cart via development of intelligent position-adaptive control algorithms in order to improve the detection and false-alarm statistics. We report on recent simulation results with regard to designing optimal sensor placement for explosive or other chemical agent detection. This type of information enables the development of intelligent control algorithms for UAV swarm applications and is intended for the design of future system-of-systems with adaptive intelligence for advanced surveillance of unknown regions. Results are reported as part of a parametric investigation where it is found that the probability of contaminant detection depends on the air flow that carries contaminant particles, geometry of the surrounding space, leakage areas, and other factors. We present a concept of position-adaptive detection (i.e. based on the example in the previous paragraph) consisting of position-adaptive fluid actuators (fans) and position-adaptive sensors. Based on these results, a preliminary analysis of sensor requirements for these fluid actuators and sensors is presented for small-UAVs in a field-enabled explosive detection environment. The computational fluid dynamics (CFD) simulation software Fluent is used to simulate the air flow in the corridor model containing a box with explosive particles. It is found that such flow is turbulent with Reynolds number greater than 106. Simulation methods and results are presented which show particle velocity and concentration distribution throughout the closed box. The results indicate that the CFD-based method can be used for other sensor placement and deployment optimization problems. These techniques and results can be applied towards the development of future system-of-system UAV swarms for defense, homeland defense, and security applications.

  12. False CAM alarms from radon fluctuations.

    PubMed

    Hayes, Robert

    2003-11-01

    The root cause of many false continuous air monitor (CAM) alarms is revealed for CAMs that use constant spectral shape assumptions in transuranic (TRU) alpha activity determination algorithms. This paper shows that when atmospheric radon levels continually decrease and bottom out at a minimum level, reduced false TRU count rates are not only expected but measured. Similarly, when the radon levels continually increase to a maximum level, elevated false TRU count rates were measured as predicted. The basis for expecting this dependence on changes in radon levels is discussed.

  13. New algorithm and system for measuring size distribution of blood cells

    NASA Astrophysics Data System (ADS)

    Yao, Cuiping; Li, Zheng; Zhang, Zhenxi

    2004-06-01

    In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near forward scattering, which has been adopted in the measurement of blood cells. In this paper a new method of counting and classification of blood cell, laser light scattering method from stationary suspensions, is presented. The genetic algorithm combined with nonnegative least squared algorithm is employed to inverse the size distribution of blood cells. Numerical tests show that these techniques can be successfully applied to measuring size distribution of blood cell with high stability.

  14. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  15. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear network to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal to noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes amore » benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal to noise ratio variations between 2 to 0.8. (authors)« less

  16. Real-time yield estimation based on deep learning

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Sheppard, Clay

    2017-05-01

    Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers to make better decision on cultivation practices, plant disease prevention, and the size of harvest labor force. The current practice of yield estimation based on the manual counting of fruits is very time consuming and expensive process and it is not practical for big fields. Robotic systems including Unmanned Aerial Vehicles (UAV) and Unmanned Ground Vehicles (UGV), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently huge data has been gathered from agricultural field, however efficient analysis of those data is still a challenging task. Computer vision approaches currently face diffident challenges in automatic counting of fruits or flowers including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper a novel deep convolutional network algorithm was developed to facilitate the accurate yield prediction and automatic counting of fruits and vegetables on the images. Our method is robust to occlusion, shadow, uneven illumination and scale. Experimental results in comparison to the state-of-the art show the effectiveness of our algorithm.

  17. Modeling of optical quadrature microscopy for imaging mouse embryos

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2008-02-01

    Optical quadrature microscopy (OQM) has been shown to provide the optical path difference through a mouse embryo, and has led to a novel method to count the total number of cells further into development than current non-toxic imaging techniques used in the clinic. The cell counting method has the potential to provide an additional quantitative viability marker for blastocyst transfer during in vitro fertilization. OQM uses a 633 nm laser within a modified Mach-Zehnder interferometer configuration to measure the amplitude and phase of the signal beam that travels through the embryo. Four cameras preceded by multiple beamsplitters record the four interferograms that are used within a reconstruction algorithm to produce an image of the complex electric field amplitude. Here we present a model for the electric field through the primary optical components in the imaging configuration and the reconstruction algorithm to calculate the signal to noise ratio when imaging mouse embryos. The model includes magnitude and phase errors in the individual reference and sample paths, fixed pattern noise, and noise within the laser and detectors. This analysis provides the foundation for determining the imaging limitations of OQM and the basis to optimize the cell counting method in order to introduce additional quantitative viability markers.

  18. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  19. A matrix-algebraic formulation of distributed-memory maximal cardinality matching algorithms in bipartite graphs

    DOE PAGES

    Azad, Ariful; Buluç, Aydın

    2016-05-16

    We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, cardinality of matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations,more » these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithm using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200 × speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs where our algorithms show good scaling on up to 16,384 cores.« less

  20. A Prescription for List-Mode Data Processing Conventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beddingfield, David H.; Swinhoe, Martyn Thomas; Huszti, Jozsef

    There are a variety of algorithmic approaches available to process list-mode pulse streams to produce multiplicity histograms for subsequent analysis. In the development of the INCC v6.0 code to include the processing of this data format, we have noted inconsistencies in the “processed time” between the various approaches. The processed time, tp, is the time interval over which the recorded pulses are analyzed to construct multiplicity histograms. This is the time interval that is used to convert measured counts into count rates. The observed inconsistencies in tp impact the reported count rate information and the determination of the error-values associatedmore » with the derived singles, doubles, and triples counting rates. This issue is particularly important in low count-rate environments. In this report we will present a prescription for the processing of list-mode counting data that produces values that are both correct and consistent with traditional shift-register technologies. It is our objective to define conventions for list mode data processing to ensure that the results are physically valid and numerically aligned with the results from shift-register electronics.« less

  1. Clustering method for counting passengers getting in a bus with single camera

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include the following. First, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature-point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment are captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences simultaneously at a frame rate of 25 fps, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
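    A minimal sketch of the projection-and-clustering idea described above, assuming each trajectory has already been reduced to its appearance and disappearance times; the distance threshold and the simple running-centroid clustering are illustrative stand-ins for the paper's online clustering algorithm.

```python
def cluster_trajectories(trajectories, time_gap=0.5):
    """Group feature-point trajectories whose (appearance, disappearance)
    times are close, and count one passenger per cluster.
    trajectories: list of (t_appear, t_disappear) tuples."""
    clusters = []  # each cluster keeps a running centroid and member count
    for t_a, t_d in trajectories:
        for c in clusters:
            if abs(c["a"] - t_a) < time_gap and abs(c["d"] - t_d) < time_gap:
                n = c["n"]
                c["a"] = (c["a"] * n + t_a) / (n + 1)
                c["d"] = (c["d"] * n + t_d) / (n + 1)
                c["n"] = n + 1
                break
        else:
            clusters.append({"a": t_a, "d": t_d, "n": 1})
    return len(clusters)

points = [(1.0, 2.1), (1.1, 2.0), (3.5, 4.6), (3.4, 4.7)]
print(cluster_trajectories(points))  # 2 passengers
```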

  2. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image transit in real time. The video camera must be placed at least 6 meters above street level to achieve adequate acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow the system to identify and count all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240×180 pixels and 24 frames per second for images of size 180×120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
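    A minimal sketch of the kind of pipeline the abstract describes (background subtraction, morphological cleanup, segmentation, counting), using NumPy and SciPy; the thresholds, blob-area cutoff, and frame size are assumptions rather than the published system's parameters.

```python
import numpy as np
from scipy import ndimage

def count_vehicles(frame, background, diff_thresh=30, min_area=200):
    """Count vehicle-sized blobs in a grayscale frame by background
    subtraction, thresholding, morphological cleanup, and labeling."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > diff_thresh
    mask = ndimage.binary_opening(mask, iterations=2)   # remove small noise
    mask = ndimage.binary_closing(mask, iterations=2)   # fill small holes
    labels, n_blobs = ndimage.label(mask)
    if n_blobs == 0:
        return 0
    areas = ndimage.sum(mask, labels, index=range(1, n_blobs + 1))
    return int(np.sum(np.asarray(areas) >= min_area))

# Synthetic example: one bright 20x30 "vehicle" on a flat background
bg = np.full((180, 120), 50, dtype=np.uint8)
frame = bg.copy()
frame[60:80, 40:70] = 200
print(count_vehicles(frame, bg))  # 1
```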

  3. [Medical computer-aided detection method based on deep learning].

    PubMed

    Tao, Pan; Fu, Zhongliang; Zhu, Kai; Wang, Lili

    2018-03-01

    This paper presents a comprehensive study of computer-aided detection for medical diagnosis with deep learning. Based on the region-based convolutional neural network and prior knowledge of the target, the algorithm uses a region proposal network and a region-of-interest pooling strategy, introduces a multi-task loss function combining classification loss, bounding-box localization loss, and object rotation loss, and is optimized end-to-end. For medical images it locates the target automatically and provides the localization result for the subsequent segmentation task. For the detection of the left ventricle in echocardiography, additional landmarks such as the mitral annulus, endocardial pad, and apical position were used to estimate the left ventricular posture effectively. To verify the robustness and effectiveness of the algorithm, experimental data from ultrasound and magnetic resonance images were selected. Experimental results show that the algorithm is fast, accurate, and effective.
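    A rough sketch of the multi-task loss the abstract mentions, combining a classification term, a smooth-L1 bounding-box term, and a rotation term; the relative weights and the smooth-L1 choice for the rotation term are assumptions, since the abstract does not give the exact formulation.

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 (Huber-like) loss commonly used for box regression."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x**2, x - 0.5)

def multitask_loss(cls_prob, cls_target, box_pred, box_target,
                   angle_pred, angle_target, w_box=1.0, w_rot=1.0):
    """Classification + bounding-box + rotation loss (weights are assumed)."""
    loss_cls = -np.log(cls_prob[cls_target] + 1e-12)           # cross-entropy
    loss_box = smooth_l1(box_pred - box_target).sum()           # 4 box offsets
    loss_rot = smooth_l1(np.array([angle_pred - angle_target])).sum()
    return loss_cls + w_box * loss_box + w_rot * loss_rot

print(multitask_loss(np.array([0.1, 0.9]), 1,
                     np.array([0.2, -0.1, 0.05, 0.0]),
                     np.zeros(4), 0.3, 0.1))
```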

  4. ePMV embeds molecular modeling into professional animation software environments.

    PubMed

    Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2011-03-09

    Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. ePMV Embeds Molecular Modeling into Professional Animation Software Environments

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2011-01-01

    SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181

  6. Tracked robot controllers for climbing obstacles autonomously

    NASA Astrophysics Data System (ADS)

    Vincent, Isabelle

    2009-05-01

    Research in mobile robot navigation has demonstrated some success in navigating flat indoor environments while avoiding obstacles. However, the challenge of analyzing complex environments to climb obstacles autonomously has had very little success due to the complexity of the task. Unmanned ground vehicles currently exhibit simple autonomous behaviours compared to the human ability to move in the world. This paper presents the control algorithms designed for a tracked mobile robot to autonomously climb obstacles by varying its track configuration. Two control algorithms are proposed to solve the autonomous locomotion problem for climbing obstacles. First, a reactive controller evaluates the appropriate geometric configuration based on terrain and vehicle geometric considerations. Then, a reinforcement learning algorithm finds alternative solutions when the reactive controller gets stuck while climbing an obstacle. The methodology combines reactivity with learning. The controllers have been demonstrated in box- and stair-climbing simulations. The experiments illustrate the effectiveness of the proposed approach for crossing obstacles.

  7. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  8. ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.

    PubMed

    Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer

    2018-06-26

    Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.

  9. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, allows solving also large-size problems approximation free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  10. A Human-in-the Loop Exploration of the Dynamic Airspace Configuration Concept

    NASA Technical Reports Server (NTRS)

    Homola, Jeffrey; Lee, Paul U.; Prevot, Thomas; Lee, Hwasoo; Kessell, Angela; Brasil, Connie; Smith, Nancy

    2010-01-01

    An exploratory human-in-the-loop study was conducted to better understand the impact of Dynamic Airspace Configuration (DAC) on air traffic controllers. To do so, a range of three progressively more aggressive algorithmic approaches to sectorization were chosen. Sectorizations from these algorithms were used to test and quantify the range of impact on the controller and traffic. Results show that traffic count was more equitably distributed between the four test sectors and that the duration of counts over MAP was progressively lower as the magnitude of boundary change increased. However, taskload and workload were also shown to increase with increasing aggressiveness, and the acceptability of the boundary changes decreased. Overall, simulated operations of the DAC concept did not appear to compromise safety. Feedback from the participants highlighted the importance of limiting some aspects of boundary changes, such as the amount of volume gained or lost and the extent of change relative to the initial airspace design.

  11. Long-Term Bioeffects of 435MHz Radiofrequency Radiation on Selected Blood-Borne Endpoints in Cannulated Rats. Volume 2. Plasma ACTH (adrenocorticotropic Hormone) and Plasma Corticosterone

    DTIC Science & Technology

    1987-08-01

    [Report documentation and figure fragments: final report covering 84/8/20 to 86/2/16. Animals were placed in sampling boxes; a first 0.2 mL blood sample was withdrawn at that moment, and another 0.2 mL sample was taken at time 0 (20 min later). Listed figures include a plasma ACTH concentration scatter diagram for the exposure group and mean plasma ACTH concentrations versus time.]

  12. UDATE1: A computer program for the calculation of uranium-series isotopic ages

    USGS Publications Warehouse

    Rosenbauer, R.J.

    1991-01-01

    UDATE1 is a FORTRAN-77 program with an interface for an Apple Macintosh computer that calculates isotope activities from measured count rates to date geologic materials by uranium-series disequilibria. Dates on pure samples can be determined directly by the accumulation of ²³⁰Th from ²³⁴U and of ²³¹Pa from ²³⁵U. Dates for samples contaminated by clays containing abundant natural thorium can be corrected by the program using various mixing models. Input to the program and file management are made simple and user friendly by a series of Macintosh modal dialog boxes. © 1991.
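    UDATE1 itself is not reproduced here, but the simplest closed-system case it addresses can be illustrated: under the assumption of secular equilibrium between ²³⁴U and ²³⁸U, the ²³⁰Th/²³⁴U activity ratio grows as 1 - exp(-λ230·t), so the age follows directly from the measured ratio. The decay constant below uses an approximate ²³⁰Th half-life and is an assumption, as is the function name.

```python
import math

# Approximate decay constant of 230Th (half-life ~75.6 kyr; value is assumed)
LAMBDA_230 = math.log(2) / 75_600.0   # per year

def age_from_th230_u234(activity_ratio):
    """Closed-system age (years) from the 230Th/234U activity ratio,
    assuming secular equilibrium between 234U and 238U."""
    if not 0.0 <= activity_ratio < 1.0:
        raise ValueError("ratio must be in [0, 1) for this simple model")
    return -math.log(1.0 - activity_ratio) / LAMBDA_230

print(f"{age_from_th230_u234(0.5):.0f} years")  # ~75,600 (one half-life)
```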

  13. Limits on the Secular Drift of the TMI Calibration

    NASA Astrophysics Data System (ADS)

    Wilheit, T. T.; Farrar, S.; Jones, L.; Santos-Garcia, A.

    2012-12-01

    Data from the TRMM Microwave Imager (TMI) can be applied to the problem of determining the trend in oceanic precipitation over more than a decade. It is thus critical to know if the calibration of the instrument has any drift over this time scale. Recently a set of Windsat data with a self-consistent calibration covering July 2005 through June of 2006 and all of 2011 has become available. The mission of Windsat, determining the feasibility of measuring oceanic wind speed and direction, requires extraordinary attention to instrument calibration. With TRMM being in a low inclination orbit and Windsat in a near polar sun synchronous orbit, there are many observations coincident in space and nearly coincident in time. A data set has been assembled where the observations are averaged over 1 degree boxes of latitude and longitude and restricted to a maximum of 1 hour time difference. University of Central Florida (UCF) compares the two radiometers by computing radiances based on Global Data Assimilation System (GDAS) analyses for all channels of each radiometer for each box and computing double differences for corresponding channels. The algorithm is described in detail by Biswas et al., (2012). Texas A&M (TAMU) uses an independent implementation of GDAS-based algorithm and another where the radiances of Windsat are used to compute Sea Surface Temperature, Sea Surface Wind Speed, Precipitable Water and Cloud Liquid Water for each box. These are, in turn, used to compute the TMI radiances. These two algorithms have been described in detail by Wilheit (2012). Both teams apply stringent filters to the boxes to assure that the conditions are consistent with the model assumptions. Examination of both teams' results indicates that the drift is less than 0.04K over the 5 ½ year span for the 10 and 37 GHz channels of TMI. The 19 and 21 GHz channels have somewhat larger differences, but they are more influenced by atmospheric changes. Given the design of the instruments, it is hard to conceive of a calibration drift that would differ significantly across the channels. It also seems unlikely that TMI and Windsat are drifting synchronously. Thus we can place an upper limit of 0.01K/year on the calibration drift of the TMI. This value may be reduced with future refinement and the contributions of the other X-CAL teams. References Biswas, S.K., S. Farrar, K. Gopalan, A Santos-Garcia, L. Jones and S. Bilanow (2012), "Inter-Calibration of Microwave Radiometer Brightness Temperatures for the Global Precipitation Measurement Mission" to be published in TGRS. Wilheit, T., 2012 "Comparing Calibrations of Similar Conically-Scanning Window-Channel Microwave Radiometers" to be published in TGRS.
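    A minimal sketch of the double-difference comparison described above: for each matched 1-degree box, each radiometer's observed brightness temperature is differenced against a model-simulated value, and the two single differences are then differenced against each other. The synthetic biases and noise levels in the example are purely illustrative.

```python
import numpy as np

def double_difference(tb_obs_a, tb_sim_a, tb_obs_b, tb_sim_b):
    """Mean double difference (K) between two radiometers over matched boxes:
    (obs_A - sim_A) - (obs_B - sim_B), with simulations from the same
    radiative-transfer model driven by analysis fields (arrays are per box)."""
    bias_a = np.asarray(tb_obs_a) - np.asarray(tb_sim_a)
    bias_b = np.asarray(tb_obs_b) - np.asarray(tb_sim_b)
    dd = bias_a - bias_b
    return dd.mean(), dd.std(ddof=1)

# Synthetic matched boxes: radiometer A biased +0.3 K, B biased +0.25 K
rng = np.random.default_rng(0)
sim = 180 + 20 * rng.random(500)
print(double_difference(sim + 0.3 + 0.1 * rng.standard_normal(500), sim,
                        sim + 0.25 + 0.1 * rng.standard_normal(500), sim))
```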

  14. Pothole Detection System Using a Black-box Camera.

    PubMed

    Jo, Youngtae; Ryu, Seungki

    2015-11-19

    Aging roads and poor road-maintenance systems result in a large number of potholes, and their number increases over time. Potholes jeopardize road safety and transportation efficiency. Moreover, they are often a contributing factor to car accidents. To address the problems associated with potholes, the locations and sizes of potholes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a specific pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection efforts. Recent automatic detection systems, such as those based on vibrations or laser scanning, are insufficient to detect potholes correctly and inexpensively owing to the unstable detection of vibration-based methods and the high costs of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work within the embedded computing environments of black-box cameras. Experimental results with our proposed system show that potholes can be detected accurately in real time.

  15. Prediction of AL and Dst Indices from ACE Measurements Using Hybrid Physics/Black-Box Techniques

    NASA Astrophysics Data System (ADS)

    Spencer, E.; Rao, A.; Horton, W.; Mays, L.

    2008-12-01

    ACE measurements of the solar wind velocity, IMF, and proton density are used to drive a hybrid physics/black-box model of the nightside magnetosphere. The core physics is contained in a low-order nonlinear dynamical model of the nightside magnetosphere called WINDMI. The model is augmented by wavelet-based nonlinear mappings between the solar wind quantities and the input into the physics model, followed by further wavelet-based mappings of the model output field-aligned currents onto the ground-based magnetometer measurements of the AL index and Dst index. The black-box mappings are introduced at the input stage to account for uncertainties in the way the solar wind quantities are transported from the ACE spacecraft at L1 to the magnetopause. Similar mappings are introduced at the output stage to account for a spatially and temporally varying westward auroral electrojet geometry. The parameters of the model are tuned using a genetic algorithm and trained using the large geomagnetic storm dataset of October 3-7, 2000. Its predictive performance is then evaluated on subsequent storm datasets, in particular the April 15-24, 2002 storm. This work is supported by grant NSF 7020201.

  16. Detection of vehicle parts based on Faster R-CNN and relative position information

    NASA Astrophysics Data System (ADS)

    Zhang, Mingwen; Sang, Nong; Chen, Youbin; Gao, Changxin; Wang, Yongzhong

    2018-03-01

    Detection and recognition of vehicles are two essential tasks in intelligent transportation systems (ITS). Currently, a prevalent method is to detect the vehicle body, logo, or license plate first, and then recognize it. The detection task is thus the most basic, but also the most important, step. Besides the logo and license plate, other parts, such as the vehicle face, lamps, windshield, and rearview mirrors, are also key parts that reflect the characteristics of a vehicle and can be used to improve the accuracy of the recognition task. In this paper, the detection of vehicle parts is studied, and the work is novel. We choose Faster R-CNN as the basic algorithm and take as input the local image region where the vehicle body is located, obtaining multiple bounding boxes with their own scores. If the box with the maximum score is chosen directly as the final result, it is often not the best one, especially for small objects. This paper presents a method that corrects the original score with relative position information between two parts. We then choose the box with the maximum comprehensive score as the final result. Compared with the original output strategy, the proposed method performs better.
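    The abstract does not give the exact correction formula, so the sketch below only illustrates the general idea of a comprehensive score: a candidate box's original score is down-weighted when its offset from a reference part deviates from the expected relative position. The Gaussian penalty, pixel offsets, and part names are assumptions.

```python
import numpy as np

def comprehensive_score(score, box_center, ref_center, expected_offset, sigma=20.0):
    """Correct a detection score with relative-position information:
    multiply by a Gaussian penalty on the deviation of the candidate's
    offset from the reference part's expected offset (all in pixels)."""
    offset = np.asarray(box_center) - np.asarray(ref_center)
    deviation = np.linalg.norm(offset - np.asarray(expected_offset))
    return score * np.exp(-0.5 * (deviation / sigma) ** 2)

# Candidate lamp boxes relative to a detected vehicle-face center
candidates = [(0.92, (310, 205)), (0.95, (260, 160))]
ref, expected = (250, 200), (60, 0)   # lamp expected ~60 px to the right
best = max(candidates,
           key=lambda c: comprehensive_score(c[0], c[1], ref, expected))
print(best)  # the positionally consistent box wins despite its lower raw score
```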

  17. Modeling of Particle Acceleration at Multiple Shocks via Diffusive Shock Acceleration: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Parker, L. Neergaard; Zank, G. P.

    2013-01-01

    Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box. We adiabatically decompress the accelerated particle distribution between each shock either by the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (E_max) appropriate for quasi-parallel and quasi-perpendicular shocks and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
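    A minimal numerical sketch in the spirit of the Melrose and Pope (1993) approach mentioned above: at each shock the standard single-shock diffusive shock acceleration integral is applied to the incoming distribution, and between shocks the momenta are shifted down to represent adiabatic decompression. The compression ratio, momentum grid, and injection spectrum are assumptions, and the convection/diffusion loss terms of the second method are not included.

```python
import numpy as np

def dsa_single_shock(p, f_in, r=4.0):
    """Test-particle DSA at one shock with compression ratio r:
    f_out(p) = q p^-q * integral over p' of f_in(p') p'^(q-1), q = 3r/(r-1)."""
    q = 3.0 * r / (r - 1.0)
    integrand = f_in * p ** (q - 1.0)
    # cumulative trapezoidal integral from the lowest momentum up to each p
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))))
    return q * p ** (-q) * cum

def decompress(p, f, r=4.0):
    """Adiabatic decompression between shocks: shift momenta by r^(-1/3)."""
    return np.interp(p, p * r ** (-1.0 / 3.0), f, left=0.0, right=0.0)

p = np.logspace(0, 3, 400)                   # momentum grid (arbitrary units)
f = np.exp(-(np.log(p) - 0.1) ** 2 / 0.01)   # narrow injection near p ~ 1
for _ in range(3):                           # three successive shocks
    f = decompress(p, dsa_single_shock(p, f), r=4.0)
print(f[::100])
```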

  18. Peripheral intravenous and central catheter algorithm: a proactive quality initiative.

    PubMed

    Wilder, Kerry A; Kuehn, Susan C; Moore, James E

    2014-12-01

    Peripheral intravenous (PIV) infiltrations causing tissue damage are a global issue surrounded by situations that make vascular access decisions difficult. The purpose of this quality improvement project was to develop an algorithm and assess its effectiveness in reducing PIV infiltrations in neonates. The targeted subjects were all infants in our neonatal intensive care unit (NICU) with a PIV catheter. We completed a retrospective chart review of the electronic medical record to collect 4th quarter 2012 baseline data. Following adoption of the algorithm, we also performed a daily manual count of all PIV catheters in the 1st and 2nd quarters of 2013. Daily PIV days were defined as follows: 1 patient with a PIV catheter equals 1 PIV day; an infant with 2 PIV catheters in place was counted as 2 PIV days. Our rate of infiltration or tissue damage was determined by counting the number of events and dividing by the number of PIV days, and was reported as the number of events per 100 PIV days. The number of infiltrations and PIV catheters was collected from the electronic medical record and also verified manually by daily assessment after adoption of the algorithm. The aim was to reduce the rate of PIV infiltrations leading to grade 4 infiltration and tissue damage by at least 30% in the NICU population, with the incidence of PIV infiltrations per 100 catheter days as the outcome measure. The baseline rate for total infiltrations increased slightly from 5.4 to 5.68/100 PIV days (P = .397) for the NICU; we attributed this increase to heightened awareness and better reporting. Grade 4 infiltrations decreased from 2.8 to 0.83/100 PIV catheter days (P = .00021) after the algorithm was implemented. Tissue damage also decreased from 0.68 to 0.3/100 PIV days (P = .11). Statistical analysis used the Fisher exact test, with results considered statistically significant at P < .05. Our findings suggest that utilization of our standardized decision pathway was instrumental in providing guidance for problem solving related to vascular access decisions. We feel this contributed to the overall reduction in grade 4 intravenous infiltration and tissue damage rates. Grade 4 infiltration reductions were highly statistically significant (P = .00021).
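    The rate convention in the abstract is simple arithmetic, illustrated below; the event and catheter-day counts in the example are made up to show the calculation and are not the study's raw data.

```python
def rate_per_100_piv_days(n_events, n_piv_days):
    """Events per 100 PIV (peripheral intravenous) catheter days."""
    return 100.0 * n_events / n_piv_days

# Illustrative numbers only: 12 grade-4 infiltrations over 1445 PIV days
print(round(rate_per_100_piv_days(12, 1445), 2))  # ~0.83 per 100 PIV days
```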

  19. A multichannel block-matching denoising algorithm for spectral photon-counting CT images.

    PubMed

    Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J

    2017-06-01

    We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly-effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much tighter to their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth). We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
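    The sketch below illustrates only the shared patch-grouping idea described above, not BM3D's transform-domain collaborative filtering or the cross-channel decorrelation step: similar patches are located on the image summed over all energy bins, and the same patch group is then averaged within every bin. The patch size, stride, and group size are assumptions.

```python
import numpy as np

def shared_group_average(bins, patch=8, stride=8, n_similar=8):
    """Toy multi-channel denoiser: group similar patches using the image
    summed over all energy bins, then average each group within every bin.
    bins: array of shape (n_bins, H, W). Only the shared-grouping idea is
    illustrated, not BM3D's transform-domain filtering."""
    n_bins, H, W = bins.shape
    guide = bins.sum(axis=0)
    coords = [(y, x) for y in range(0, H - patch + 1, stride)
                     for x in range(0, W - patch + 1, stride)]
    g_patches = np.array([guide[y:y+patch, x:x+patch].ravel() for y, x in coords])
    out = np.zeros_like(bins, dtype=float)
    weight = np.zeros((H, W))
    for i, (y, x) in enumerate(coords):
        # group: indices of the most similar patches in the guide image
        d = np.sum((g_patches - g_patches[i]) ** 2, axis=1)
        group = np.argsort(d)[:n_similar]
        for b in range(n_bins):
            # average the same group of patches in each energy bin
            est = np.mean([bins[b, coords[j][0]:coords[j][0]+patch,
                                   coords[j][1]:coords[j][1]+patch]
                           for j in group], axis=0)
            out[b, y:y+patch, x:x+patch] += est
        weight[y:y+patch, x:x+patch] += 1
    return out / np.maximum(weight, 1)

noisy = np.random.poisson(20.0, size=(4, 32, 32)).astype(float)
print(shared_group_average(noisy).shape)  # (4, 32, 32)
```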

  20. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
