Sample records for thresholding blob filtering

  1. Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2011-07-01

    A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescent imaging and simple image processing techniques. In particular, we apply simple image thresholding, blob filtering, and image subtraction processes to either a 545 nm or a 575 nm image in order to identify our desired Thai jasmine rice breed from others. Other key advantages include no waste product and fast identification time. In our demonstration, UVC light is used as the excitation light, a liquid crystal tunable optical filter is used as the wavelength selector, and a digital camera with 640 × 480 active pixels is used to capture the desired spectral image. Eight Thai rice breeds having similar size and shape are tested. Our experimental proof of concept shows that by suitably applying image thresholding, blob filtering, and image subtraction processes to the selected fluorescent image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at 545 and 575 nm wavelengths, respectively. The measured identification time of 25 ms shows high potential for real-time applications.
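
    The thresholding, blob-filtering, and image-subtraction chain described above can be sketched with OpenCV as below. This is an illustrative reconstruction, not the authors' code: the file names, the use of Otsu's method to pick the threshold, and the minimum blob area are assumptions.

    ```python
    # Minimal sketch of a threshold -> blob-filter -> subtract pipeline (not the authors' code).
    # Assumes two grayscale spectral images of the same grain tray, e.g. at 545 nm and 575 nm.
    import cv2
    import numpy as np

    def blob_filter(binary, min_area=50):
        """Keep only connected components (blobs) larger than min_area pixels."""
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        keep = np.zeros_like(binary)
        for i in range(1, n):  # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                keep[labels == i] = 255
        return keep

    img_545 = cv2.imread("spectral_545nm.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
    img_575 = cv2.imread("spectral_575nm.png", cv2.IMREAD_GRAYSCALE)

    # Global thresholding (Otsu is used here for illustration; the paper uses simple thresholding).
    _, bin_545 = cv2.threshold(img_545, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, bin_575 = cv2.threshold(img_575, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Blob filtering removes small fluorescence speckle before subtraction.
    mask_545 = blob_filter(bin_545)
    mask_575 = blob_filter(bin_575)

    # Image subtraction highlights grains that fluoresce at one wavelength but not the other.
    diff = cv2.subtract(mask_545, mask_575)
    ```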

  2. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm handles both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with several existing algorithms, and shown to outperform them.

  3. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    PubMed Central

    Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang

    2011-01-01

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
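
    The bright-blob segmentation and connected-component stages can be sketched as below; multi-Otsu thresholding stands in for the paper's automatic multilevel histogram thresholding, and the frame source, class count, and area cutoff are assumptions.

    ```python
    # Sketch of bright-blob extraction plus connected-component analysis (not the authors' code).
    # A captured infrared frame is thresholded and each touch blob's centroid is reported.
    import numpy as np
    from skimage.filters import threshold_multiotsu
    from skimage.measure import label, regionprops

    frame = np.load("ir_frame.npy")           # hypothetical 2-D infrared frame
    # Multilevel thresholding; the brightest class approximates the scattered-IR touch blobs.
    thresholds = threshold_multiotsu(frame, classes=3)
    bright = frame > thresholds[-1]

    labels = label(bright, connectivity=2)    # connected-component analysis
    touches = [r.centroid for r in regionprops(labels) if r.area > 20]  # area cutoff is illustrative
    print(touches)
    ```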

  4. Using pyramids to define local thresholds for blob detection.

    PubMed

    Shneier, M

    1983-03-01

    A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
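
    A hedged reading of the pyramid idea, not Shneier's exact algorithm: detect spots at a coarse pyramid level, then compute a threshold locally within each spot's footprint in the original image. The Otsu thresholds and the fixed footprint mapping below are illustrative choices.

    ```python
    # A coarse-to-fine sketch: spots found at low resolution define where local thresholds
    # are computed and applied in the full-resolution image.
    import numpy as np
    from skimage.transform import pyramid_reduce
    from skimage.filters import threshold_otsu

    def pyramid_local_threshold(image, levels=3, downscale=2):
        coarse = image
        for _ in range(levels):
            coarse = pyramid_reduce(coarse, downscale=downscale)   # build lower-resolution images
        scale = downscale ** levels
        out = np.zeros(image.shape, dtype=bool)
        spot_level = threshold_otsu(coarse)                        # "spots" = bright coarse pixels
        for r, c in zip(*np.nonzero(coarse > spot_level)):
            r0, c0 = r * scale, c * scale
            patch = image[r0:r0 + scale, c0:c0 + scale]            # corresponding original region
            if patch.size and patch.min() != patch.max():
                out[r0:r0 + scale, c0:c0 + scale] = patch > threshold_otsu(patch)
        return out
    ```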

  5. Fusion of KLMS and blob based pre-screener for buried landmine detection using ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Baydar, Bora; Akar, Gözde Bozdaǧi.; Yüksel, Seniha E.; Öztürk, Serhat

    2016-05-01

    In this paper, a decision-level fusion of multiple pre-screener algorithms is proposed for the detection of buried landmines from Ground Penetrating Radar (GPR) data. The Kernel Least Mean Square (KLMS) and the Blob Filter pre-screeners are fused together to work in real time with fewer false alarms and higher true detection rates. The effect of the kernel variance is investigated for the KLMS algorithm. Also, the results of the KLMS and KLMS+Blob filter algorithms are compared to the LMS method in terms of processing time and false alarm rates. The proposed algorithm is tested on both simulated data and real data collected at the field of IPA Defence at METU, Ankara, Turkey.

  6. THE BLOB CONNECTION: SEARCHING FOR LOW CORONAL SIGNATURES OF SOLAR POST-CME BLOBS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanche, Nicole E; Reeves, Katharine K; Webb, David F., E-mail: nschanche@cfa.harvard.edu

    2016-11-01

    Bright linear structures, thought to be indicators of a current sheet (CS), are often seen in Large Angle and Spectrometric Coronagraph (LASCO) on the Solar and Heliospheric Observatory (SOHO) white-light data in the wake of coronal mass ejections (CMEs). In a subset of these post-CME structures, relatively bright blobs are seen moving outward along the rays. These blobs have been interpreted as consequences of the plasmoid instability in the CS, and can help us to understand the dynamics of the reconnection. We examine several instances, taken largely from the SOHO/LASCO CME-rays Catalog, where these blobs are clearly visible in white-light data. Using radially filtered, difference, wavelet enhanced, and multiscale Gaussian normalized images to visually inspect Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) data in multiple wavelengths, we look for signatures of material that correspond both temporally and spatially to the later appearance of the blobs in LASCO/C2. Constraints from measurements of the blobs allow us to predict the expected count rates in DN pixel⁻¹ s⁻¹ for each AIA channel. The resulting values would make the blobs bright enough to be detectable at 1.2 R⊙. However, we do not see conclusive evidence for corresponding blobs in the AIA data in any of the events. We do the same calculation for the “cartwheel CME,” an event in which blobs were seen in X-rays, and find that our estimated count rates are close to those observed. We suggest several possibilities for the absence of the EUV blobs including the formation of the blob higher than the AIA field of view, blob coalescence, and overestimation of blob densities.

  7. Separate first- and second-order processing is supported by spatial summation estimates at the fovea and eccentrically.

    PubMed

    Sukumar, Subash; Waugh, Sarah J

    2007-03-01

    We estimated spatial summation areas for the detection of luminance-modulated (LM) and contrast-modulated (CM) blobs at the fovea and at 2.5, 5 and 10 deg eccentricity. Gaussian profiles were added to, or multiplied with, binary white noise to create LM and CM blob stimuli, and these were used to psychophysically estimate detection thresholds and spatial summation areas. The results reveal significantly larger summation areas for detecting CM than LM blobs across eccentricity. These differences are comparable to receptive field size estimates made in V1 and V2. They support the notion that separate spatial processing occurs for the detection of LM and CM stimuli.
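
    The LM and CM stimulus construction follows the standard first-/second-order recipe and can be sketched as below; the carrier, modulation depths and blob size are illustrative values, not the study's parameters.

    ```python
    # Sketch of LM and CM blob stimuli: a Gaussian profile added to (LM) or modulating the
    # contrast of (CM) a binary white-noise carrier. All amplitudes are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    size, sigma = 256, 32
    noise = rng.choice([-1.0, 1.0], size=(size, size))        # binary white-noise carrier
    y, x = np.indices((size, size)) - size // 2
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))           # Gaussian blob profile

    mean_lum, noise_contrast = 0.5, 0.2
    lm_blob = mean_lum * (1 + 0.1 * gauss + noise_contrast * noise)          # luminance-modulated
    cm_blob = mean_lum * (1 + noise_contrast * (1 + 0.5 * gauss) * noise)    # contrast-modulated
    ```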

  8. Detection of blob objects in microscopic zebrafish images based on gradient vector diffusion.

    PubMed

    Li, Gang; Liu, Tianming; Nie, Jingxin; Guo, Lei; Malicki, Jarema; Mara, Andrew; Holley, Scott A; Xia, Weiming; Wong, Stephen T C

    2007-10-01

    The zebrafish has become an important vertebrate animal model for the study of developmental biology, functional genomics, and disease mechanisms. It is also being used for drug discovery. Computerized detection of blob objects has been one of the important tasks in quantitative phenotyping of zebrafish. We present a new automated method that is able to detect blob objects, such as nuclei or cells in microscopic zebrafish images. This method is composed of three key steps. The first step is to produce a diffused gradient vector field by a physical elastic deformable model. In the second step, the flux image is computed on the diffused gradient vector field. The third step performs thresholding and nonmaximum suppression based on the flux image. We report the validation and experimental results of this method using zebrafish image datasets from three independent research labs. Both sensitivity and specificity of this method are over 90%. This method is able to differentiate closely juxtaposed or connected blob objects, with high sensitivity and specificity in different situations. It is characterized by a good, consistent performance in blob object detection.
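
    A hedged sketch of the flux computation on a smoothed gradient field is given below. The paper's elastic-deformable-model diffusion is replaced by simple Gaussian smoothing for illustration, so this shows only the general flux/divergence idea, not the authors' method.

    ```python
    # Sketch of a flux image on a smoothed gradient field (illustrative stand-in only).
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def flux_image(image, sigma=2.0):
        img = image.astype(float)
        gy, gx = sobel(img, axis=0), sobel(img, axis=1)                   # gradient field
        gy, gx = gaussian_filter(gy, sigma), gaussian_filter(gx, sigma)   # "diffused" field
        # Negative divergence: gradients converge on bright blob centres, giving positive peaks.
        return -(np.gradient(gx, axis=1) + np.gradient(gy, axis=0))
    ```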

  9. Head Rotation Detection in Marmoset Monkeys

    NASA Astrophysics Data System (ADS)

    Simhadri, Sravanthi

    Head movement is known to improve the accuracy of sound localization for humans and animals. The marmoset is a small-bodied New World monkey species that has become an emerging model for studying auditory functions. This thesis aims to detect the horizontal and vertical rotation of head movement in marmoset monkeys. Experiments were conducted in a sound-attenuated acoustic chamber. Head movement of the marmoset monkey was studied under various auditory and visual stimulation conditions. With increasing complexity, these conditions are (1) idle, (2) sound alone, (3) sound and visual signals, and (4) an alert signal given by opening and closing the chamber door. All of these conditions were tested with the house light either on or off. An infrared camera with a frame rate of 90 Hz was used to capture the head movement of the monkeys. To assist signal detection, two circular markers were attached to the top of the monkey's head. The data analysis used an image-based marker detection scheme. Images were processed using the Computer Vision Toolbox in Matlab. The markers and their positions were detected using blob detection techniques. Based on the frame-by-frame information of marker positions, the angular position, velocity and acceleration were extracted in the horizontal and vertical planes. Adaptive Otsu thresholding, Kalman filtering and bound setting for marker properties were used to overcome a number of challenges encountered during this analysis, such as finding the image segmentation threshold, continuously tracking markers during large head movements, and false alarm detection. The results show that the blob detection method together with Kalman filtering yielded better performance than other image-based techniques such as optical flow and SURF features. The median of the maximal head turn in the horizontal plane was in the range of 20 to 70 degrees, and the median of the maximal velocity in the horizontal plane was in the range of a few hundred degrees per second. In comparison, the natural alert signal -- door opening and closing -- evoked faster head turns than the other stimulus conditions. These results suggest that behaviorally relevant stimuli such as alert signals evoke faster head-turn responses in marmoset monkeys.
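
    The per-frame marker step can be sketched as below. This is an illustrative reconstruction in Python rather than the thesis' Matlab code: Otsu thresholding, blob labelling, and the in-plane angle of the two largest blobs' centroids; the Kalman filtering and bound setting are left out.

    ```python
    # Threshold the IR frame, find the two marker blobs, and compute the head angle
    # from their centroids (fallback to the tracker's prediction if a marker is lost).
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def head_angle(frame):
        binary = frame > threshold_otsu(frame)           # Otsu-style segmentation threshold
        regions = sorted(regionprops(label(binary)), key=lambda r: r.area, reverse=True)
        if len(regions) < 2:
            return None                                  # markers lost; caller uses Kalman prediction
        (r1, c1), (r2, c2) = regions[0].centroid, regions[1].centroid
        return np.degrees(np.arctan2(r2 - r1, c2 - c1))  # in-plane angle of the marker pair
    ```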

  10. Extraction of Extended Small-Scale Objects in Digital Images

    NASA Astrophysics Data System (ADS)

    Volkov, V. Y.

    2015-05-01

    The problem of detecting and localizing extended small-scale objects with different shapes appears in radio observation systems that use SAR, infrared, lidar and television cameras. Intensive non-stationary background is the main difficulty for processing. Another challenge is the low quality of the images: blobs, blurred boundaries and, in addition, the serious intrinsic speckle noise that SAR images suffer from. The background statistics are not normal, with evident skewness and heavy tails in the probability density, so the background is hard to identify. The problem of extracting small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask is used which is open-ended at one side, so that it is possible to extract the ends of line segments of unknown length. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image is proposed for the analysis of segmentation results. It includes small-scale objects with different shapes, sizes and orientations. The method extracts isolated fragments in the binary image and counts the points in these fragments. The number of points in the extracted fragments is normalized to the total number of points for a given threshold and is used as the extraction effectiveness for these fragments. The new method for adaptive threshold setting and control maximises the extraction effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
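
    One hedged reading of the adaptive threshold-setting rule described above is to sweep candidate thresholds and keep the one that maximises the fraction of above-threshold pixels falling inside accepted isolated fragments. The size criterion used below to accept fragments is an illustrative assumption.

    ```python
    # Adaptive threshold selection by maximising extraction effectiveness (illustrative sketch).
    import numpy as np
    from skimage.measure import label

    def adaptive_threshold(image, candidates, min_area=5, max_area=200):
        best_t, best_eff = None, -1.0
        for t in candidates:
            binary = image > t
            total = binary.sum()
            if total == 0:
                continue
            labels = label(binary, connectivity=2)
            sizes = np.bincount(labels.ravel())[1:]            # fragment sizes, background excluded
            accepted = sizes[(sizes >= min_area) & (sizes <= max_area)].sum()
            eff = accepted / total                             # extraction effectiveness at threshold t
            if eff > best_eff:
                best_t, best_eff = t, eff
        return best_t
    ```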

  11. Small blob identification in medical images using regional features from optimum scale.

    PubMed

    Zhang, Min; Wu, Teresa; Bennett, Kevin M

    2015-04-01

    Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this research, we are interested in one type of imaging object: small blobs. Examples of small blob objects are cells in histopathology images, glomeruli in MR images, etc. This problem is particularly challenging because the small blobs often have an inhomogeneous intensity distribution and an indistinct boundary against the background. Yet, in general, these blobs have similar sizes. Motivated by this finding, we propose a novel detector termed Hessian-based Laplacian of Gaussian (HLoG), using scale space theory as the foundation. Like most imaging detectors, an image is first smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale on which a presegmentation is conducted. The advantage of the Hessian process is that it is capable of delineating the blobs. As a result, regional features can be retrieved. These features enable an unsupervised clustering algorithm for postpruning, which should be more robust and sensitive than the traditional threshold-based postpruning commonly used in most imaging detectors. To test the performance of the proposed HLoG, two sets of 2-D grey medical images are studied. HLoG is compared against three state-of-the-art detectors: generalized LoG, Radial-Symmetry and LoG, using precision, recall, and F-score metrics. We observe that HLoG statistically outperforms the compared detectors.
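
    HLoG itself is not reproduced here, but the plain LoG detector it is compared against is available in scikit-image and gives a feel for the scale-space blob detection being discussed; the file name, scale range, and threshold below are assumptions.

    ```python
    # Baseline LoG blob detection (the "LoG" comparison method, not HLoG itself).
    import numpy as np
    from skimage import io
    from skimage.feature import blob_log

    image = io.imread("glomeruli_slice.png", as_gray=True)    # hypothetical 2-D grey medical image
    # Because the blobs of interest have similar sizes, the scale range can be kept narrow.
    blobs = blob_log(image, min_sigma=2, max_sigma=6, num_sigma=5, threshold=0.05)
    # Each row is (row, col, sigma); the blob radius is roughly sqrt(2) * sigma in 2-D.
    radii = blobs[:, 2] * np.sqrt(2)
    print(len(blobs), "candidate blobs")
    ```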

  12. Retina lesion and microaneurysm segmentation using morphological reconstruction methods with ground-truth data.

    PubMed

    Karnowski, Thomas P; Govindasamy, V; Tobin, Kenneth W; Chaum, Edward; Abramoff, M D

    2008-01-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for classifying blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  13. Optical penetration-based silkworm pupa gender sensor structure.

    PubMed

    Sumriddetchkajorn, Sarun; Kamtongdee, Chakkrit

    2012-02-01

    This paper proposes and experimentally demonstrates, for what is believed to be the first time, a highly sought-after optical structure for highly accurate identification of silkworm pupa gender. The key idea is to exploit a long-wavelength optical beam in the red or near-infrared spectrum that can effectively and safely penetrate the body of a silkworm pupa. Simple image processing operations via image thresholding, blob filtering, and image inversion are then applied in order to eliminate unwanted image noise and at the same time highlight the gender gland. An experimental proof of concept using three 636 nm wavelength light emitting diodes, a two-dimensional web camera, an 8 bit microcontroller board, and a notebook computer shows a very high 95.6% total accuracy in identifying the gender of 45 silkworm pupae with a measured identification time of 96.6 ms. Other key features include low cost, low component count, and ease of implementation and control.

  14. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.

  15. An algorithm to track laboratory zebrafish shoals.

    PubMed

    Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia

    2018-05-01

    In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm splits this blob and re-establishes the instances' identities. If the algorithm cannot determine the identity of a blob, manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method is then compared against two well-known zebrafish tracking methods from the literature: one that treats occlusion scenarios and one that only tracks fish that are not in occlusion. On the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, doing so in at least 8.15% more of the cases. As for the second, the proposed method outperformed it in some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which the proposed method does not. The proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of cases, without accounting for user intervention.
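
    The blob-partitioning idea (splitting a merged blob claimed by two track instances) can be sketched as below. K-means clustering of the blob's pixel coordinates is an illustrative choice, not necessarily the paper's partitioning rule.

    ```python
    # Split a merged binary blob into n_tracks parts by clustering its pixel coordinates.
    import numpy as np
    from sklearn.cluster import KMeans

    def split_blob(blob_mask, n_tracks=2):
        coords = np.column_stack(np.nonzero(blob_mask))            # (row, col) pixels of the blob
        labels = KMeans(n_clusters=n_tracks, n_init=10).fit_predict(coords)
        parts = []
        for k in range(n_tracks):
            part = np.zeros_like(blob_mask, dtype=bool)
            rows, cols = coords[labels == k].T
            part[rows, cols] = True
            parts.append(part)
        return parts
    ```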

  16. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. Then, the 3-dimensional positions of the end-effectors are reconstructed and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct, in real time, the motions of many people wearing various clothes.

  17. Automatic rock detection for in situ spectroscopy applications on Mars

    NASA Astrophysics Data System (ADS)

    Mahapatra, Pooja; Foing, Bernard H.

    A novel algorithm for rock detection has been developed for effectively utilising Mars rovers, and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during traverse. This information may be used by the rover for selection of geologically relevant sample rocks, and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples within the region that are of specific interest without physically approaching them significantly reduces time, power and risk. Input images in colour are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensities are detected using Canny edge detection, with thresholds that are calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes, and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolution of the Canny edge-detected image with a scale-normalised LoG at several scales, and finding the maxima of the squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of the local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses is counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that consist of a small fraction of image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.

  18. Multi-object tracking of human spermatozoa

    NASA Astrophysics Data System (ADS)

    Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen

    2008-03-01

    We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
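
    The assignment step described above can be sketched with SciPy's Hungarian-algorithm solver. This is only the matching stage, with plain Euclidean distances standing in for the paper's hidden-Markov-model costs; the gate value is an assumption.

    ```python
    # Match blob detections to existing tracks by solving a linear assignment problem.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_detections(track_positions, detections, gate=30.0):
        """track_positions: (N, 2) predicted track positions; detections: (M, 2) blob centroids."""
        cost = np.linalg.norm(track_positions[:, None, :] - detections[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm, rectangular cost OK
        # Reject pairings whose distance exceeds the gate (treated as missing/spurious observations).
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    ```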

  19. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we propose a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate particle filtering hypotheses towards blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and the prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy for updating the target model improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.

  20. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
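
    The threshold-decomposition principle the patent exploits can be checked numerically: decomposing a grey image into binary slices, applying a linear (averaging) filter to each slice, thresholding point-wise, and summing the slices reproduces a direct median filter. The sketch below is a software analogue of the optical architecture, with an illustrative 3x3 window and 8 grey levels.

    ```python
    # Numerical illustration of median filtering via threshold decomposition.
    import numpy as np
    from scipy.ndimage import uniform_filter, median_filter

    rng = np.random.default_rng(0)
    img = rng.integers(0, 8, size=(32, 32))           # small grey-level image, levels 0..7
    k = 3                                             # 3x3 neighbourhood

    recombined = np.zeros(img.shape, dtype=int)
    for t in range(1, img.max() + 1):
        slice_t = (img >= t).astype(float)            # threshold decomposition into binary slices
        mean = uniform_filter(slice_t, size=k)        # linear, space-invariant filtering step
        recombined += (mean >= 0.5).astype(int)       # point-to-point threshold comparison (median rank)

    assert np.array_equal(recombined, median_filter(img, size=k))
    ```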

  1. Confocal microscopy and 3-D distribution of dead cells in cryopreserved pancreatic islets

    NASA Astrophysics Data System (ADS)

    Merchant, Fatima A.; Aggarwal, Shanti J.; Diller, Kenneth R.; Bartels, Keith A.; Bovik, Alan C.

    1992-06-01

    Our laboratory is involved in studies of changes in the shape and size of biological specimens under osmotic stress at ambient and sub-zero temperatures. This paper describes confocal microscopy, image processing and analysis of the 3-D distribution of cells in acridine orange/propidium iodide (AO/PI) fluorescent-stained frozen-thawed islets of Langerhans. Isolated and cultured rat pancreatic islets were frozen and thawed in 2 M dimethylsulfoxide and examined under a Zeiss laser scanning confocal microscope. Serial sections of the islets, two to five micrometers thick, were obtained and processed to obtain high-contrast images, which were then processed in two steps. The first step consisted of isolating the region of interest by template masking, followed by grey-level thresholding to obtain a binary image. A three-dimensional blob-coloring algorithm was applied, and the number of voxels in each region and the number of regions were counted. The volumetric distribution of the dead cells in the islets was computed by calculating the distance from the center of each blob to the centroid of the 3-D image. An increase in the number of blobs moving from the center toward the periphery of the islet was observed, indicating that the freeze damage was more concentrated in the outer edges of the islet.
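
    The 3-D blob-coloring and radial-distribution steps can be sketched with scipy.ndimage as below; this is an illustrative reconstruction (the file name and the binary dead-cell mask are assumptions), not the original analysis code.

    ```python
    # Label 3-D blobs, count their voxels, and measure their distance from the image centroid.
    import numpy as np
    from scipy import ndimage

    stack = np.load("islet_binary_stack.npy")      # hypothetical 3-D binary mask of dead cells
    labels, n_blobs = ndimage.label(stack)         # 3-D connected-component ("blob colouring") labelling
    index = range(1, n_blobs + 1)
    voxels = ndimage.sum(stack, labels, index=index)                     # voxels per blob
    centroids = np.array(ndimage.center_of_mass(stack, labels, index=index))
    image_centroid = np.array(ndimage.center_of_mass(stack))
    radii = np.linalg.norm(centroids - image_centroid, axis=1)           # blob distance from islet centre
    ```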

  2. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    NASA Astrophysics Data System (ADS)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    Lung nodules are an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm consists of several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. In the acquisition step, the image is taken slice by slice from the original *.dicom format and each slice is converted into the *.tif image format. Binarization, using the Otsu algorithm, then separates the background and foreground parts of each slice. After removing the background, the next step segments only the lung region so the nodule can be localized more easily. The Otsu algorithm is used again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting nearly round nodules above a certain size threshold. The detection results show shortcomings with respect to the size threshold and the shape of the nodules, which need to be addressed in the next part of the research. The algorithm also cannot detect nodules attached to the lung wall or lung channels, since its search depends only on colour differences.

  3. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.

  4. Blob structure and motion in the edge and SOL of NSTX

    DOE PAGES

    Zweben, S. J.; Myra, J. R.; Davis, W. M.; ...

    2016-01-28

    Here, the structure and motion of discrete plasma blobs (a.k.a. filaments) in the edge and scrape-off layer of NSTX is studied for representative Ohmic and H-mode discharges. Individual blobs were tracked in the 2D radial versus poloidal plane using data from the gas puff imaging diagnostic taken at 400,000 frames s⁻¹. A database of blob amplitude, size, ellipticity, tilt, and velocity was obtained for ~45,000 individual blobs. Empirical relationships between various properties are described, e.g. blob speed versus amplitude and blob tilt versus ellipticity. The blob velocities are also compared with analytic models.

  5. Visualizing and Quantifying Blob Characteristics on NSTX

    NASA Astrophysics Data System (ADS)

    Davis, William; Zweben, Stewart; Myra, James; D'Ippolito, Daniel; Ko, Matthew

    2012-10-01

    Understanding the radial motion of blob-filaments in the tokamak edge plasma is important since this motion can affect the width of the heat and particle scrape-off layer (SOL) [1]. High resolution (64x80), high speed (400,000 frames/sec) edge turbulence movies taken of the NSTX outer midplane separatrix region have recently been analyzed for blob motion. Regions of high light emission from gas puff imaging within a 25x30 cm cross-section were used to track blob-filaments in the plasma edge and into the SOL. Software tools have been developed for visualizing blob movement and automatically generating statistics of blob speed, shape, amplitude, size, and orientation; thousands of blobs have been analyzed for dozens of shots. The blob tracking algorithm and resulting database entries are explained in detail. Visualization tools also show how poloidal and radial motion change as blobs move through the scrape-off-layer (SOL), e.g. suggesting the influence of sheared flow. Relationships between blob size and velocity are shown for various types of plasmas and compared with simplified theories of blob motion. This work was supported by DOE Contract DE-AC02-09-CH11466. [1] J.R. Myra et al., Phys. Plasmas 18, 012305 (2011)

  6. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928
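
    The interplay of the SART update and the thresholding step can be sketched as follows. This is not the paper's algorithm: the analytic half-threshold operator of Xu et al. is replaced by plain soft-thresholding applied directly to the image (rather than in a DGT-related transform), and the system matrix A is a small dense toy matrix rather than a real CT projector.

    ```python
    # SART iterations interleaved with a sparsity-enforcing thresholding step (illustrative only).
    import numpy as np

    def soft_threshold(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def sart_thresholded(A, b, n_iter=50, relax=1.0, lam=0.01):
        row_sums = A.sum(axis=1)
        col_sums = A.sum(axis=0)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            residual = (b - A @ x) / np.maximum(row_sums, 1e-12)                 # row-normalised residual
            x = x + relax * (A.T @ residual) / np.maximum(col_sums, 1e-12)       # SART back-projection
            x = soft_threshold(x, lam)   # stand-in for the analytic half-threshold operator
        return x
    ```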

  7. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.

  8. Development of high damage threshold laser-machined apodizers and gain filters for laser applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rambo, Patrick; Schwarz, Jens; Kimmel, Mark

    We have developed high damage threshold filters to modify the spatial profile of a high energy laser beam. The filters are formed by laser ablation of a transmissive window. The ablation sites constitute scattering centers which can be filtered in a subsequent spatial filter. Finally, by creating the filters in dielectric materials, we see an increased laser-induced damage threshold compared with previous filters created using ‘metal on glass’ lithography.

  9. Development of high damage threshold laser-machined apodizers and gain filters for laser applications

    DOE PAGES

    Rambo, Patrick; Schwarz, Jens; Kimmel, Mark; ...

    2016-09-27

    We have developed high damage threshold filters to modify the spatial profile of a high energy laser beam. The filters are formed by laser ablation of a transmissive window. The ablation sites constitute scattering centers which can be filtered in a subsequent spatial filter. Finally, by creating the filters in dielectric materials, we see an increased laser-induced damage threshold compared with previous filters created using ‘metal on glass’ lithography.

  10. WFC3/IR Blob Monitoring

    NASA Astrophysics Data System (ADS)

    Sunnquist, Ben

    2018-06-01

    Throughout the lifetime of WFC3, a growing number of 'blobs' (small, circular regions with slightly decreased sensitivity) have appeared in WFC3/IR images. In this report, we present the current workflow used for identifying, characterizing and flagging new IR blobs. We also describe the methods currently used to monitor the repeatability of the channel select mechanism (CSM) movements as a way to ensure that the CSM is still operating normally as these new blobs form. A full listing of all known blobs, which incorporates the work from past blob monitoring efforts, is presented in the Appendix as well as all of the IR bad pixel tables generated to include the strongest of these blobs. These tables, along with all of the other relevant figures and tables in this report, will be continuously updated as new blobs form.

  11. Visual adaptation and the amplitude spectra of radiological images.

    PubMed

    Kompaniez-Dunigan, Elysse; Abbey, Craig K; Boone, John M; Webster, Michael A

    2018-01-01

    We examined how visual sensitivity and perception are affected by adaptation to the characteristic amplitude spectra of X-ray mammography images. Because of the transmissive nature of X-ray photons, these images have relatively more low-frequency variability than natural images, a difference that is captured by a steeper slope of the amplitude spectrum (~ -1.5) compared to the ~1/f (slope of -1) spectra common to natural scenes. Radiologists inspecting these images are therefore exposed to a different balance of spectral components, and we measured how this exposure might alter spatial vision. Observers (who were not radiologists) were adapted to images of normal mammograms or the same images sharpened by filtering the amplitude spectra to shallower slopes. Prior adaptation to the original mammograms significantly biased judgments of image focus relative to the sharpened images, demonstrating that the images are sufficient to induce substantial after-effects. The adaptation also induced strong losses in threshold contrast sensitivity that were selective for lower spatial frequencies, though these losses were very similar to the threshold changes induced by the sharpened images. Visual search for targets (Gaussian blobs) added to the images was also not differentially affected by adaptation to the original or sharper images. These results complement our previous studies examining how observers adapt to the textural properties or phase spectra of mammograms. Like the phase spectrum, adaptation to the amplitude spectrum of mammograms alters spatial sensitivity and visual judgments about the images. However, unlike the phase spectrum, adaptation to the amplitude spectra did not confer a selective performance advantage relative to more natural spectra.
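
    The slope figures quoted above (about -1.5 for mammograms versus -1 for natural scenes) refer to the amplitude spectrum in log-log coordinates. A minimal numpy sketch of how such a slope can be estimated from an image (radial averaging of the Fourier amplitude followed by a line fit) is given below; the fitting range is an assumption, not the study's procedure.

    ```python
    # Estimate the slope of an image's radially averaged amplitude spectrum in log-log coordinates.
    import numpy as np

    def amplitude_spectrum_slope(image):
        amp = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
        cy, cx = np.array(amp.shape) // 2
        y, x = np.indices(amp.shape)
        r = np.hypot(y - cy, x - cx).astype(int)                       # integer radial frequency bins
        counts = np.bincount(r.ravel())
        radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.maximum(counts, 1)
        f = np.arange(1, min(cy, cx))                                  # skip DC, stay inside Nyquist ring
        slope, _ = np.polyfit(np.log(f), np.log(radial[1:min(cy, cx)]), 1)
        return slope                                                   # ~ -1 for natural scenes
    ```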

  12. Formation and evolution of coronal rain observed by SDO/AIA on February 22, 2012

    NASA Astrophysics Data System (ADS)

    Vashalomidze, Z.; Kukhianidze, V.; Zaqarashvili, T. V.; Oliver, R.; Shergelashvili, B.; Ramishvili, G.; Poedts, S.; De Causmaecker, P.

    2015-05-01

    Context. The formation and dynamics of coronal rain are currently not fully understood. Coronal rain is the fall of cool and dense blobs formed by thermal instability in the solar corona towards the solar surface with acceleration smaller than gravitational free fall. Aims: We aim to study the observational evidence of the formation of coronal rain and to trace the detailed dynamics of individual blobs. Methods: We used time series of the 171 Å and 304 Å spectral lines obtained by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamic Observatory (SDO) above active region AR 11420 on February 22, 2012. Results: Observations show that a coronal loop disappeared in the 171 Å channel and appeared in the 304 Å line more than one hour later, which indicates a rapid cooling of the coronal loop from 1 MK to 0.05 MK. An energy estimation shows that the radiation is higher than the heat input, which indicates so-called catastrophic cooling. The cooling was accompanied by the formation of coronal rain in the form of falling cold plasma. We studied two different sequences of falling blobs. The first sequence includes three different blobs. The mean velocities of the blobs were estimated to be 50 km s⁻¹, 60 km s⁻¹ and 40 km s⁻¹. A polynomial fit shows the different values of the acceleration for different blobs, which are lower than free-fall in the solar corona. The first and second blob move along the same path, but with and without acceleration, respectively. We performed simple numerical simulations for two consecutive blobs, which show that the second blob moves in a medium that is modified by the passage of the first blob. Therefore, the second blob has a relatively high speed and no acceleration, as is shown by observations. The second sequence includes two different blobs with mean velocities of 100 km s⁻¹ and 90 km s⁻¹, respectively. Conclusions: The formation of coronal rain blobs is connected with the process of catastrophic cooling. The different acceleration of different coronal rain blobs might be due to the different values in the density ratio of blob to corona. All blobs leave trails, which might be a result of continuous cooling in their tails. Two movies attached to Fig. 1 are available in electronic form at http://www.aanda.org
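
    The velocities and accelerations quoted above come from polynomial fits to the blobs' position-time tracks. A minimal sketch of such a fit is shown below; the sample times and the synthetic track are made-up values, not measurements from the paper, and the comparison value is the solar surface gravity.

    ```python
    # Fit a second-order polynomial to a blob's position-time track to get velocity and acceleration.
    import numpy as np

    t = np.linspace(0, 300, 16)                        # seconds (hypothetical sample times)
    s = 80e3 - 50.0 * t - 0.05 * t**2                  # km along the loop, synthetic track
    coeffs = np.polyfit(t, s, 2)                       # s(t) ~ a2*t^2 + a1*t + a0
    acceleration = 2 * coeffs[0]                       # km s^-2; compare with solar g ~ 0.274 km s^-2
    velocity_mid = np.polyval(np.polyder(coeffs), t.mean())   # km s^-1 at mid-track
    print(f"acceleration = {acceleration:.3f} km s^-2, mid-track speed = {velocity_mid:.1f} km s^-1")
    ```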

  13. At the Heart of Blobs

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This artist's concept illustrates one possible answer to the puzzle of the 'giant galactic blobs.' These blobs (red), first identified about five years ago, are mammoth clouds of intensely glowing material that surround distant galaxies (white). Astronomers using visible-light telescopes can see the glow of the blobs, but they didn't know what provides the energy to light them up. NASA's Spitzer Space Telescope set its infrared eyes on one well-known blob located 11 billion light-years away, and discovered three tremendously bright galaxies, each shining with the light of more than one trillion Suns, headed toward each other.

    Spitzer also observed three other blobs in the same galactic neighborhood and found equally bright galaxies within them. One of these blobs is also known to contain galaxies merging together. The findings suggest that galactic mergers might be the mysterious source of blobs.

    If so, then one explanation for how mergers produce such large clouds of material is that they trigger intense bursts of star formation. This star formation would lead to exploding massive stars, or supernovae, which would then shoot gases outward in a phenomenon known as superwinds. Blobs produced in this fashion are illustrated in this artist's concept.

  14. Rapid change of blob structure in the outer scrape-off layer (SOL)

    NASA Astrophysics Data System (ADS)

    Cohen, R. H.

    2005-10-01

    Nonlinear structures ("blobs") driven by the magnetic field curvature and highly elongated along the field lines may exist in the tokamak SOL [S.I. Krasheninnikov, Phys. Lett. A 283, 368 (2001)]. The contact of the blob end with the divertor plate significantly affects the blob structure and velocity. However, the strong shearing of the flux-tube near the X-point makes direct electrical contact between the blob in the upper SOL and the divertor impossible, so that the sheath boundary condition (BC) has to be replaced by a BC imposed near the X point [D. Ryutov, R.H. Cohen, Contr. Pl. Phys. 44, 168 (2004)]. We show that, at larger distances from the separatrix, in the far SOL, the connection between the upper SOL and the divertor plate is re-established, and the sheath BC again becomes relevant. During the blob's outward radial motion, this event is reflected in a sudden change of its length, from the blob extending only to the X point to the blob extending down to the plate. Likewise, a blob initially existing only in the divertor leg suddenly becomes longer and extends to the whole SOL.

  15. Fermi Blobs and the Symplectic Camel: A Geometric Picture of Quantum States

    NASA Astrophysics Data System (ADS)

    Gossona, Maurice A. De

    We have explained in previous work the correspondence between the standard squeezed coherent states of quantum mechanics and quantum blobs, which are the smallest phase space units compatible with the uncertainty principle of quantum mechanics and having the symplectic group as a group of symmetries. In this work, we discuss the relation between quantum blobs and a certain level set (which we call "Fermi blob") introduced by Enrico Fermi in 1930. Fermi blobs allow us to extend our previous results not only to the excited states of the generalized harmonic oscillator in n dimensions, but also to arbitrary quadratic Hamiltonians. As is the case for quantum blobs, we can evaluate Fermi blobs using a topological notion related to the uncertainty principle: the symplectic capacity of a phase space set. The definition of this notion is made possible by Gromov's symplectic non-squeezing theorem, nicknamed the "principle of the symplectic camel".

  16. Westward tilt of low-latitude plasma blobs as observed by the Swarm constellation

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung; Lühr, Hermann; Michaelis, Ingo; Stolle, Claudia; Rauberg, Jan; Buchert, Stephan; Gill, Reine; Merayo, Jose M. G.; Brauer, Peter

    2015-04-01

    In this study we investigate the three-dimensional structure of low-latitude plasma blobs using multi-instrument and multisatellite observations of the Swarm constellation. During the early commissioning phase the Swarm satellites were flying at the same altitude with zonal separation of about 0.5° in geographic longitude. Electron density data from the three satellites constrain the blob morphology projected onto the horizontal plane. Magnetic field deflections around blobs, which originate from field-aligned currents near the irregularity boundaries, constrain the blob structure projected onto the plane perpendicular to the ambient magnetic field. As the two constraints are given for two noncoplanar surfaces, we can get information on the three-dimensional structure of blobs. Combined observation results suggest that blobs are contained within tilted shells of geomagnetic flux tubes, which are similar to the shell structure of equatorial plasma bubbles suggested by previous studies.

  17. VizieR Online Data Catalog: LAE candidates around bright Lyα blobs (Badescu+, 2017)

    NASA Astrophysics Data System (ADS)

    Badescu, T.; Yang, Y.; Bertoldi, F.; Zabludoff, A.; Karim, A.; Magnelli, B.

    2018-04-01

    We obtain narrowband images covering a total area of ~1°x0.5° around the two known Lyα blobs (Yang+ 2009ApJ...693.1579Y) using the Mosaic1.1 camera on the Kitt Peak National Observatory (KPNO) Mayall 4m telescope. In Figure 1 we show the areas covered by the National Optical Astronomical Observatory (NOAO) Deep Wide-field Survey (NDWFS) and the locations of our two pointings (hereafter Bootes1 and Bootes2) centered on 14:31:42.22,+35:31:19.9 and 14:28:54.08,+35:31:19.9. Observations were carried out on 2011 April 29 and 30, with exposure times of 7.3 and 6.0hr, respectively. The filter has a central wavelength of λc=4030Å and a bandwidth of ΔλFWHM=47Å, corresponding to the Lyα emission at z=2.3. Apart from the narrowband (NB) images, we also use NDWFS broadband BW-, R-, and I-band images for continuum estimation. (2 data files).

  18. At the Heart of Blobs Artist Concept

    NASA Image and Video Library

    2005-01-11

    This artist's concept illustrates one possible answer to the puzzle of the "giant galactic blobs." These blobs (red), first identified about five years ago, are mammoth clouds of intensely glowing material that surround distant galaxies (white). Astronomers using visible-light telescopes can see the glow of the blobs, but they didn't know what provides the energy to light them up. NASA's Spitzer Space Telescope set its infrared eyes on one well-known blob located 11 billion light-years away, and discovered three tremendously bright galaxies, each shining with the light of more than one trillion Suns, headed toward each other. Spitzer also observed three other blobs in the same galactic neighborhood and found equally bright galaxies within them. One of these blobs is also known to contain galaxies merging together. The findings suggest that galactic mergers might be the mysterious source of blobs. If so, then one explanation for how mergers produce such large clouds of material is that they trigger intense bursts of star formation. This star formation would lead to exploding massive stars, or supernovae, which would then shoot gases outward in a phenomenon known as superwinds. Blobs produced in this fashion are illustrated in this artist's concept. http://photojournal.jpl.nasa.gov/catalog/PIA07221

  19. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
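
    The classification stage described above (gradient-based features fed to a linear SVM) can be sketched as follows. This is a simplified stand-in: the paper's hybrid HOG+DCT descriptor and the blob-extraction stage are omitted, and the window size, HOG parameters, and training arrays are assumptions.

    ```python
    # HOG features of candidate blob windows classified with a linear SVM (simplified stand-in).
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_features(windows):
        """windows: iterable of equal-sized grey-scale patches cropped around candidate blobs."""
        return np.array([hog(w, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                         for w in windows])

    # train_windows / train_labels (1 = pedestrian, 0 = clutter) are assumed to exist:
    # clf = LinearSVC().fit(hog_features(train_windows), train_labels)
    # predictions = clf.predict(hog_features(candidate_windows))
    ```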

  20. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

    PubMed

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-03-26

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.

  1. Mysterious Blob Galaxies Revealed

    NASA Image and Video Library

    2005-01-11

    This image composite shows a giant galactic blob (red) and the three merging galaxies NASA's Spitzer Space Telescope discovered within it (yellow). Blobs are intensely glowing clouds of hot hydrogen gas that envelop faraway galaxies. They are about 10 times as large as the galaxies they surround. Visible-light images reveal the vast extent of blobs, but don't provide much information about their host galaxies. Using its heat-seeking infrared eyes, Spitzer was able to see the dusty galaxies tucked inside one well-known blob located 11 billion light-years away. The findings reveal three monstrously bright galaxies, trillions of times brighter than the Sun, in the process of merging together. Spitzer also observed three other blobs located in the same cosmic neighborhood, all of which were found to be glaringly bright. One of these blobs is also known to be a galactic merger, only between two galaxies instead of three. It remains to be seen whether the final two blobs studied also contain mergers. The Spitzer data were acquired by its multiband imaging photometer. The visible-light image was taken by the Blanco Telescope at the Cerro Tololo Inter-American Observatory, Chile. http://photojournal.jpl.nasa.gov/catalog/PIA07220

  2. Eruption of a plasma blob, associated M-class flare, and large-scale extreme-ultraviolet wave observed by SDO

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Manoharan, P. K.

    2013-05-01

    We present a multiwavelength study of the formation and ejection of a plasma blob and associated extreme ultraviolet (EUV) waves in active region (AR) NOAA 11176, observed by SDO/AIA and STEREO on 25 March 2011. The EUV images observed with the AIA instrument clearly show the formation and ejection of a plasma blob from the lower atmosphere of the Sun ~9 min prior to the onset of the M1.0 flare. The onset of this M-class flare occurred at the site of the blob formation, while the blob was rising along a parabolic path with an average speed of ~300 km s⁻¹. The blob also showed twisting and de-twisting motion in the lower corona, and its speed varied from ~10 to 540 km s⁻¹. Faster and slower EUV wavefronts were observed in front of the plasma blob during its impulsive acceleration phase. The faster EUV wave propagated with a speed of ~785 to 1020 km s⁻¹, whereas the slower wavefront speed varied between ~245 and 465 km s⁻¹. The timing and speed of the faster wave match the shock speed estimated from the drift rate of the associated type II radio burst. The faster wave experiences a reflection by the nearby AR NOAA 11177. In addition, secondary waves were observed (only in the 171 Å channel) when the primary fast wave and plasma blob impacted the funnel-shaped coronal loops. The Helioseismic and Magnetic Imager (HMI) magnetograms revealed the continuous emergence of new magnetic flux along with shear flows at the site of the blob formation. It is inferred that the emergence of twisted magnetic fields in the form of arch-filaments/"anemone-type" loops is the likely cause of the plasma blob formation and associated eruption, along with the triggering of the M-class flare. Furthermore, the faster EUV wave formed ahead of the blob shows the signature of a fast-mode MHD wave, whereas the slower wave seems to be generated by field-line compression by the plasma blob. The secondary wave trains originating from the funnel-shaped loops are probably fast magnetoacoustic waves. Three movies are available in electronic form at http://www.aanda.org

  3. Experimental Evidence of Weak Excluded Volume Effects for Nanochannel Confined DNA

    NASA Astrophysics Data System (ADS)

    Gupta, Damini; Miller, Jeremy J.; Muralidhar, Abhiram; Mahshid, Sara; Reisner, Walter; Dorfman, Kevin D.

    In the classical de Gennes picture of weak polymer nanochannel confinement, the polymer contour is envisioned as divided into a series of isometric blobs. Strong excluded volume interactions are present both within a blob and between blobs. In contrast, for semiflexible polymers like DNA, excluded volume interactions are of borderline strength within a blob but appreciable between blobs, giving rise to a chain description consisting of a string of anisometric blobs. We present experimental validation of this subtle effect of excluded volume for DNA nanochannel confinement by performing measurements of variance in chain extension of T4 DNA molecules as a function of effective nanochannel size (305-453 nm). Additionally, we show an approach to systematically reduce the effect of molecular weight dispersity of DNA samples, a typical experimental artifact, by combining confinement spectroscopy with simulations.

  4. Pre-sheath density drop induced by ion-neutral friction along plasma blobs and implications for blob velocities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furno, I.; Chabloz, V.; Fasoli, A.

    2014-01-15

    The pre-sheath density drop along the magnetic field in field-aligned, radially propagating plasma blobs is investigated in the TORPEX toroidal experiment [Fasoli et al., Plasma Phys. Controlled Fusion 52, 124020 (2010)]. Using Langmuir probes precisely aligned along the magnetic field, we measure the density n_se at a poloidal limiter, where blobs are connected, and the upstream density n_0 at a location half way to the other end of the blobs. The pre-sheath density drop n_se/n_0 is then computed and its dependence upon the neutral background gas pressure is studied. At low neutral gas pressures, the pre-sheath density drop is ≈0.4, close to the value of 0.5 expected in the collisionless case. In qualitative agreement with a simple model, this value decreases with increasing gas pressure. No significant dependence of the density drop upon the radial distance into the limiter shadow is observed. The effect of reduced blob density near the limiter on the blob radial velocity is measured and compared with predictions from a blob speed-versus-size scaling law [Theiler et al., Phys. Rev. Lett. 103, 065001 (2009)].

  5. Segmentation of heterogeneous blob objects through voting and level set formulation

    PubMed Central

    Chang, Hang; Yang, Qing; Parvin, Bahram

    2009-01-01

    Blob-like structures occur often in nature, where they aid in cueing and the pre-attentive process. These structures often overlap, form perceptual boundaries, and are heterogeneous in shape, size, and intensity. In this paper, voting, Voronoi tessellation, and level set methods are combined to delineate blob-like structures. Voting and subsequent Voronoi tessellation provide the initial condition and the boundary constraints for each blob, while curve evolution through level set formulation provides refined segmentation of each blob within the Voronoi region. The paper concludes with the application of the proposed method to a dataset produced from cell based fluorescence assays and stellar data. PMID:19774202
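
    A rough sketch of a seeded pipeline in this spirit is given below: detected seed points stand in for the voting step, a nearest-seed (Voronoi) assignment provides the per-blob regions, and a per-cell Otsu threshold stands in for the level-set refinement. Both substitutions and all parameters are assumptions for illustration, not the authors' formulation.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.filters import threshold_otsu

    def segment_blobs(image, min_distance=10):
        smoothed = ndi.gaussian_filter(image, sigma=2)
        seeds = peak_local_max(smoothed, min_distance=min_distance)

        # Voronoi tessellation: assign every pixel to its nearest seed point
        seed_img = np.zeros(image.shape, dtype=int)
        seed_img[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
        _, inds = ndi.distance_transform_edt(seed_img == 0, return_indices=True)
        voronoi = seed_img[tuple(inds)]

        # Refine each Voronoi cell independently (stand-in for the level-set step)
        labels = np.zeros_like(voronoi)
        for k in range(1, len(seeds) + 1):
            cell = voronoi == k
            vals = image[cell]
            if vals.max() > vals.min():
                labels[cell & (image > threshold_otsu(vals))] = k
        return labels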

  6. Initial spatio-temporal domain expansion of the Modelfest database

    NASA Astrophysics Data System (ADS)

    Carney, Thom; Mozaffari, Sahar; Sun, Sean; Johnson, Ryan; Shirvastava, Sharona; Shen, Priscilla; Ly, Emma

    2013-03-01

    The first Modelfest group publication appeared in the SPIE Human Vision and Electronic Imaging conference proceedings in 1999. "One of the group's goals is to develop a public database of test images with threshold data from multiple laboratories for designing and testing HVS (Human Vision Models)." After extended discussions the group selected a set of 45 static images thought to best meet that goal and collected psychophysical detection data, which are available on the Web and were presented in the 2000 SPIE conference proceedings. Several groups have used these datasets to test spatial modeling ideas. Further discussions led to a preliminary stimulus specification for extending the database into the temporal domain, which was published in the 2002 conference proceedings. After a hiatus of 12 years, some of us have collected spatio-temporal thresholds on an expanded stimulus set of 41 video clips; the original specification included 35 clips. The principal change involved adding one additional spatial pattern beyond the three originally specified. The stimuli consisted of four spatial patterns: a Gaussian blob, a 4 c/d Gabor patch, an 11.3 c/d Gabor patch, and a 2D white noise patch. Across conditions the patterns were temporally modulated over a range of approximately 0-25 Hz, along with temporal edge and pulse modulation conditions. The display and data collection specifications were as specified by the Modelfest group in the 2002 conference proceedings. To date seven subjects have participated in this phase of the data collection effort, one of whom also participated in the first phase of Modelfest. Three of the spatio-temporal stimuli were identical to conditions in the original static dataset. Small differences in the thresholds were evident and may point to a stimulus limitation. The temporal CSF peaked between 4 and 8 Hz for the 0 c/d (Gaussian blob) and 4 c/d patterns. The 4 c/d and 11.3 c/d Gabor temporal CSFs were low pass, while the 0 c/d pattern was band pass. This preliminary expansion of the Modelfest dataset needs the participation of additional laboratories to evaluate the impact of different methods on threshold estimates and to increase the subject base. We eagerly await the addition of new data from interested researchers. It remains to be seen how accurately general HVS models will predict thresholds across both Modelfest datasets.
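
    For readers who wish to generate comparable stimuli, the sketch below produces a counterphase-modulated Gabor patch; setting the spatial frequency to 0 c/deg yields the Gaussian-blob case. The display geometry, contrast, and 60 Hz frame rate are illustrative assumptions, not the Modelfest specification.

    import numpy as np

    def modulated_gabor_frames(size=256, deg_per_px=1.0 / 60, cpd=4.0, sigma_deg=0.5,
                               temporal_hz=8.0, frame_rate=60, n_frames=60, contrast=0.5):
        # Spatial profile: Gaussian envelope times a cosine carrier (cpd=0 gives a Gaussian blob)
        x = (np.arange(size) - size / 2) * deg_per_px
        xx, yy = np.meshgrid(x, x)
        envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_deg ** 2))
        carrier = np.cos(2 * np.pi * cpd * xx)
        # Temporal profile: counterphase (flicker) modulation at temporal_hz
        t = np.arange(n_frames) / frame_rate
        modulation = np.cos(2 * np.pi * temporal_hz * t)
        frames = 0.5 + 0.5 * contrast * modulation[:, None, None] * envelope * carrier
        return frames  # (n_frames, size, size) luminance values in [0, 1]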

  7. The formation of blobs from a pure interchange process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, P., E-mail: pzhu@ustc.edu.cn; Department of Engineering Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706; Sovinec, C. R.

    2015-02-15

    In this work, we focus on examining a pure interchange process in a shear-less slab configuration as a prototype mechanism for blob formation. We employ full magnetohydrodynamic simulations to demonstrate that the blob-like structures can emerge through the nonlinear development of a pure interchange instability originating from a pedestal-like transition region. In the early nonlinear stage, filamentary structures develop and extend in the direction of the effective gravity. The blob-like structures appear when the radially extending filaments break off and disconnect from the core plasma. The morphology and the dynamics of these filaments and blobs vary dramatically with a sensitive dependence on the dissipation mechanisms in the system and the initial perturbation. Despite the complexity in morphology and dynamics, the nature of the entire blob formation process in the shear-less slab configuration remains strictly interchange without involving any change in magnetic topology.

  8. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
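
    The elevation-blob seeding step can be sketched as follows; the use of a DSM-minus-DTM height, the 2.5 m threshold, and the minimum blob area are assumptions for illustration, not the parameters used in this work.

    import numpy as np
    from scipy import ndimage as ndi

    def building_seeds(dsm, dtm, height_thresh=2.5, min_area=25):
        # Height above terrain; thresholding it yields the elevation "blobs"
        ndsm = dsm - dtm
        blobs, n = ndi.label(ndsm > height_thresh)
        seeds = []
        for k in range(1, n + 1):
            if np.sum(blobs == k) >= min_area:          # discard small clutter blobs
                seeds.append(ndi.center_of_mass(ndsm, blobs, k))
        return seeds   # (row, col) centres to project onto the ortho-image as initial windows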

  9. [Comparison of tone burst evoked auditory brainstem responses with different filter settings for referral infants after hearing screening].

    PubMed

    Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying

    2011-03-01

    Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in referral infants after hearing screening. The present study compared the thresholds of tone burst ABR with filter settings of 30 - 1500 Hz and 30 - 3000 Hz at each frequency, characterized the ABR thresholds obtained with the two filter settings and the effect on waveform judgement, and aimed to select an optimal frequency-specific ABR test parameter. Thresholds with filter settings of 30 - 1500 Hz and 30 - 3000 Hz in children aged 2 - 33 months were recorded using click and tone burst ABR. A total of 18 patients (8 male, 10 female; 22 ears) were included. The thresholds of tone burst ABR with filter settings of 30 - 3000 Hz were higher than those with filter settings of 30 - 1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values were 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected for the tone-evoked ABR thresholds at the remaining frequencies. The ABR waveform with filter settings of 30 - 1500 Hz was smoother than that with filter settings of 30 - 3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged, small interfering waves. The filter setting of 30 - 1500 Hz may therefore be the preferable parameter for frequency-specific ABR to improve the accuracy of infants' hearing assessment.

  10. a Case Study of Plasma Blob Associated with Plasma Bubble in Low Latitude Region in the Brazilian Sector Using All-Sky Images and DMSP Satellite

    NASA Astrophysics Data System (ADS)

    Tardelli, F. C.; Abalde, J. R.; Pimenta, A. A.; Kavutarapu, V.; Tardelli, A.

    2016-12-01

    Using optical techniques and satellite data, a plasma blob was observed on February 23, 2007, over São José dos Campos (SJC) (23.21°S, 45.86°W; dip lat. 17.6°S) in the Brazilian sector. This is the first observation of a plasma blob in the SJC region using data from optical techniques and satellite measurements. A plasma blob is an enhancement in plasma density by a factor of 2 or more above the background plasma. Simultaneous all-sky images were used to map the spatial extent of the plasma blob. DMSP satellite data were used to confirm the enhancement in plasma density in the ionosphere, which provides important parameters of the ionospheric behavior during the event. During the night of the present study, the plasma blob was associated with a plasma bubble, and the average zonal drift velocities were 61±6 m s⁻¹ and 74±8 m s⁻¹, respectively. The average North/South and East/West extents of the blob were 591 km and 328 km, respectively. Furthermore, the average longitudinal drift velocity was 85±13 m s⁻¹. In this work the plasma density is found to be enhanced by a factor of 2 compared to the background plasma. We report for the first time a plasma blob in SJC, at low latitude, associated with a plasma bubble and present important features of their behavior.

  11. Coronal rain in magnetic bipolar weak fields

    NASA Astrophysics Data System (ADS)

    Xia, C.; Keppens, R.; Fang, X.

    2017-07-01

    Aims: We intend to investigate the underlying physics of the coronal rain phenomenon in a representative bipolar magnetic field, including the formation and the dynamics of coronal rain blobs. Methods: With the MPI-AMRVAC code, we performed a three-dimensional radiative magnetohydrodynamic (MHD) simulation with strong heating localized on the footpoints of magnetic loops, after a relaxation to a quiet solar atmosphere. Results: Progressive cooling and in-situ condensation start at the loop top due to radiative thermal instability. The first large-scale condensation at the loop top undergoes Rayleigh-Taylor instability and fragments into smaller blobs. The blobs fall vertically, dragging the magnetic loops, until they reach low-β regions and start to fall along the loops from loop top to loop footpoints. A statistical study of the coronal rain blobs finds that small blobs with masses of less than 10¹⁰ g dominate the population. When blobs fall to lower regions along the magnetic loops, they are stretched and develop a non-uniform velocity pattern, with an anti-parallel shearing pattern seen to develop along the central axis of the blobs. Synthetic images of the simulated coronal rain with the Solar Dynamics Observatory Atmospheric Imaging Assembly closely resemble real observations, presenting dark falling clumps in hot channels and bright rain blobs in a cool channel. We also find density inhomogeneities during a coronal rain "shower", which reflects the observed multi-stranded nature of coronal rain. Movies associated with Figs. 3 and 7 are available at http://www.aanda.org

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, R.; Soler, R.; Terradas, J.

    Observations of active regions and limb prominences often show cold, dense blobs descending with an acceleration smaller than that of free fall. The dynamics of these condensations falling in the solar corona is investigated in this paper using a simple fully ionized plasma model. We find that the presence of a heavy condensation gives rise to a dynamical rearrangement of the coronal pressure that results in the formation of a large pressure gradient that opposes gravity. Eventually this pressure gradient becomes so large that the blob acceleration vanishes or even points upward. Then, the blob descent is characterized by an initial acceleration phase followed by an essentially constant velocity phase. These two stages can be identified in published time-distance diagrams of coronal rain events. Both the duration of the first stage and the velocity attained by the blob increase for larger values of the ratio of blob to coronal density, for larger blob mass, and for smaller coronal temperature. Dense blobs are characterized by a detectable density growth (up to 60% in our calculations) and by a steepening of the density in their lower part, that could lead to the formation of a shock. They also emit sound waves that could be detected as small intensity changes with periods of the order of 100 s and lasting between a few and about 10 periods. Finally, the curvature of falling paths with large radii is only relevant when a very dense blob falls along inclined magnetic field lines.

  13. Pore-scale Analysis of the effects of Contact Angle Hysteresis on Blob Mobilization in a Pore Doublet

    NASA Astrophysics Data System (ADS)

    Hsu, Shao-Yiu; Glantz, Roland; Hilpert, Markus

    2011-11-01

    The mobilization of residual oil blobs in porous media is of major interest to the petroleum industry. We studied the Jamin effect, which hampers blob mobilization, experimentally in a pore doublet model, and explain it through contact angle hysteresis. A liquid blob was trapped in one of the tubes of the pore doublet model and then subjected to different pressure gradients. We measured the contact angles (in 2D and 3D) as well as the mean curvatures of the blob. Due to gravity effects and hysteresis, the contact angles of the blob were initially (at zero pressure gradient) non-uniform and exhibited a pronounced altitude dependence. As the pressure gradient was increased, the contact angles became more uniform and the altitude dependence of the contact angle decreased. At the same time, the mean curvature of the drainage interface increased, and the mean curvature of the imbibition interface decreased. The pressure drops across the pore model, which we inferred with our theory from the measured contact angles and mean curvatures, were in line with the directly measured pressure data. We not only show that a trapped blob can sustain a finite pressure gradient but also develop methods to measure the contact angles and mean curvatures in 3D.
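
    A generic capillary estimate of the pressure drop a trapped blob can sustain follows from applying the Young-Laplace relation at the two menisci; this is a textbook-style sketch, not necessarily the exact theory used by the authors, and the numbers are purely illustrative.

    def sustainable_pressure_drop(gamma, H_drainage, H_imbibition):
        # gamma: interfacial tension [N/m]; H_*: mean curvatures of the two menisci [1/m]
        # Young-Laplace capillary pressure at each meniscus is 2*gamma*H; a trapped blob
        # can sustain the difference between the two (the Jamin effect).
        return 2.0 * gamma * (H_drainage - H_imbibition)

    # Illustrative numbers only (not from the experiment):
    print(sustainable_pressure_drop(gamma=0.03, H_drainage=4000.0, H_imbibition=2500.0))  # ~90 Pa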

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, X.; Xia, C.; Keppens, R.

    We extend our earlier multidimensional, magnetohydrodynamic simulations of coronal rain occurring in magnetic arcades with higher resolution, grid-adaptive computations covering a much longer (>6 hr) time span. We quantify how blob-like condensations forming in situ grow along and across field lines and show that rain showers can occur in limit cycles, here demonstrated for the first time in 2.5D setups. We discuss dynamical, multi-dimensional aspects of the rebound shocks generated by the siphon inflows and quantify the thermodynamics of a prominence–corona transition-region-like structure surrounding the blobs. We point out the correlation between condensation rates and the cross-sectional size of loop systems where catastrophic cooling takes place. We also study the variations of the typical number density, kinetic energy, and temperature while blobs descend, impact, and sink into the transition region. In addition, we explain the mechanisms leading to concurrent upflows while the blobs descend. As a result, there are plenty of shear flows generated with relative velocity difference around 80 km s⁻¹ in our simulations. These shear flows are siphon flows set up by multiple blob dynamics and they in turn affect the deformation of the falling blobs. In particular, we show how shear flows can break apart blobs into smaller fragments, within minutes.

  15. Galaxies Coming of Age in Cosmic Blobs

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The "coming of age" of galaxies and black holes has been pinpointed, thanks to new data from NASA's Chandra X-ray Observatory and other telescopes. This discovery helps resolve the true nature of gigantic blobs of gas observed around very young galaxies. About a decade ago, astronomers discovered immense reservoirs of hydrogen gas -- which they named "blobs" - while conducting surveys of young distant galaxies. The blobs are glowing brightly in optical light, but the source of immense energy required to power this glow and the nature of these objects were unclear. A long observation from Chandra has identified the source of this energy for the first time. The X-ray data show that a significant source of power within these colossal structures is from growing supermassive black holes partially obscured by dense layers of dust and gas. The fireworks of star formation in galaxies are also seen to play an important role, thanks to Spitzer Space Telescope and ground-based observations. "For ten years the secrets of the blobs had been buried from view, but now we've uncovered their power source," said James Geach of Durham University in the United Kingdom, who led the study. "Now we can settle some important arguments about what role they played in the original construction of galaxies and black holes." Galaxies are believed to form when gas flows inwards under the pull of gravity and cools by emitting radiation. This process should stop when the gas is heated by radiation and outflows from galaxies and their black holes. Blobs could be a sign of this first stage, or of the second. Based on the new data and theoretical arguments, Geach and his colleagues show that heating of gas by growing supermassive black holes and bursts of star formation, rather than cooling of gas, most likely powers the blobs. The implication is that blobs represent a stage when the galaxies and black holes are just starting to switch off their rapid growth because of these heating processes. This is a crucial stage of the evolution of galaxies and black holes - known as "feedback" - and one that astronomers have long been trying to understand. "We're seeing signs that the galaxies and black holes inside these blobs are coming of age and are now pushing back on the infalling gas to prevent further growth," said coauthor Bret Lehmer, also of Durham. "Massive galaxies must go through a stage like this or they would form too many stars and so end up ridiculously large by the present day." Chandra and a collection of other telescopes including Spitzer have observed 29 blobs in one large field in the sky dubbed "SSA22." These blobs, which are several hundred thousand light years across, are seen when the Universe is only about two billion years old, or roughly 15% of its current age. X-ray Chandra X-ray Image of Lyman Alpha Blobs In five of these blobs, the Chandra data revealed the telltale signature of growing supermassive black holes - a point-like source with luminous X- ray emission. These giant black holes are thought to reside at the centers of most galaxies today, including our own. Another three of the blobs in this field show possible evidence for such black holes. Based on further observations, including Spitzer data, the research team was able to determine that several of these galaxies are also dominated by remarkable levels of star formation. 
    The radiation and powerful outflows from these black holes and bursts of star formation are, according to calculations, powerful enough to light up the hydrogen gas in the blobs they inhabit. In the cases where the signatures of these black holes were not detected, the blobs are generally fainter. The authors show that black holes bright enough to power these blobs would be too dim to be detected given the length of the Chandra observations. Besides explaining the power source of the blobs, these results help explain their future. Under the heating scenario, the gas in the blobs will not cool down to form stars but will add to the hot gas found between galaxies. SSA22 itself could evolve into a massive galaxy cluster. "In the beginning the blobs would have fed their galaxies, but what we see now are more like leftovers," said Geach. "This means we'll have to look even further back in time to catch galaxies and black holes in the act of forming from blobs." These results will appear in the July 10 issue of The Astrophysical Journal. NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass.

  16. The reconstruction algorithm used for [68Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia.

    PubMed

    Krohn, Thomas; Birmes, Anita; Winz, Oliver H; Drude, Natascha I; Mottaghy, Felix M; Behrendt, Florian F; Verburg, Frederik A

    2017-04-01

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were reconstructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [68Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information.
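
    For orientation, the sketch below computes body-weight-normalized SUVmean and SUVmax within a region of interest; the unit handling, decay correction, and ROI definition are simplified assumptions, not the study protocol.

    import numpy as np

    def suv(activity_bq_per_ml, weight_kg, injected_dose_bq):
        # Body-weight-normalized SUV, assuming a tissue density of 1 g/mL
        return activity_bq_per_ml * (weight_kg * 1000.0) / injected_dose_bq

    def roi_suv_stats(image_bq_per_ml, roi_mask, weight_kg, injected_dose_bq):
        vals = suv(image_bq_per_ml[roi_mask], weight_kg, injected_dose_bq)
        return float(vals.mean()), float(vals.max())   # (SUVmean, SUVmax)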

  17. Retention of pediatric bag-mask ventilation efficacy skill by inexperienced medical student resuscitators using standard bag-mask ventilation masks, pocket masks, and blob masks.

    PubMed

    Kitagawa, Kory H; Nakamura, Nina M; Yamamoto, Loren

    2006-03-01

    To measure the ventilation efficacy with three single-sized mask types on infant and child manikin models. Medical students were recruited as study subjects inasmuch as they are inexperienced resuscitators. They were taught proper bag-mask ventilation (BMV) according to the American Heart Association guidelines on an infant and a child manikin. Subjects completed a BMV attempt successfully using the adult standard mask (to simulate the uncertainty of mask selection), pocket mask, and blob mask. Each attempt consisted of 5 ventilations assessed by chest rise of the manikin. Study subjects were asked which mask was easiest to use. Four to six weeks later, subjects repeated the procedure with no instructions (to simulate an emergency BMV encounter without immediate pre-encounter teaching). Forty-six volunteer subjects were studied. During the first attempt, subjects preferred the standard and blob masks over the pocket mask. For the second attempt, the blob mask was preferred over the standard mask, and few liked the pocket mask. Using the standard, blob, and pocket masks on the child manikin, 39, 42, and 20 subjects, respectively, were able to achieve adequate ventilation. Using the standard, blob, and pocket masks on the infant manikin, 45, 45, and 11 subjects, respectively, were able to achieve adequate ventilation. Both the standard and blob masks are more effective than the pocket mask at achieving adequate ventilation on infant and child manikins in this group of inexperienced medical student resuscitators, who most often preferred the blob mask.

  18. Electromagnetic effects on dynamics of high-beta filamentary structures

    DOE PAGES

    Lee, Wonjae; Umansky, Maxim V.; Angus, J. R.; ...

    2015-01-12

    The impacts of the electromagnetic effects on blob dynamics are considered. Electromagnetic BOUT++ simulations of seeded high-beta blobs demonstrate that inhomogeneity of the magnetic curvature or plasma pressure along the filament leads to bending of the blob filaments and the magnetic field lines due to the increased propagation time of the plasma current (Alfvén time). The bending motion can enhance heat exchange between the plasma-facing materials and the inner SOL region. The effects of sheath boundary conditions on the part of the blob away from the boundary are also diminished by the increased Alfvén time. Using linear analysis and BOUT++ simulations, it is found that electromagnetic effects in high-temperature, high-density plasmas reduce the growth rate of resistive drift wave turbulence when the resistivity drops below a certain value. Lastly, in the course of a blob's motion in the SOL its temperature is reduced, which leads to an enhancement of resistive effects, so the blob can switch from the electromagnetic to the electrostatic regime, where resistive drift wave turbulence becomes important.

  19. Blob-enhanced reconstruction technique

    NASA Astrophysics Data System (ADS)

    Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso

    2016-09-01

    A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method; the distribution is then rebuilt as a sum of Gaussian blobs, based on the location, intensity and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes have been tested, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations). The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor further improves the convergence of the reconstruction algorithm, with the improvement being more considerable when the blobs are substituted more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, simultaneously achieving remarkable improvements in the flow field measurements and benefiting from the reduction in computational time due to the MR approach. Finally, BERT is also tested on experimental data, obtaining an increase of the signal-to-noise ratio in the reconstructed flow field and a higher correlation factor in the velocity measurements with respect to the reconstruction in which the particles are not replaced.
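
    The blob-substitution idea can be sketched as follows: local maxima of a first-guess reconstruction are located and the field is rebuilt as a sum of isotropic Gaussian blobs. The peak-search distance and blob width are illustrative assumptions, and the actual BERT additionally estimates intensity and size from the surrounding agglomerate rather than from the peak alone.

    import numpy as np
    from skimage.feature import peak_local_max

    def blob_enhance(volume, min_distance=3, sigma=1.2):
        # Locate intensity maxima and rebuild the field as a sum of Gaussian blobs
        peaks = peak_local_max(volume, min_distance=min_distance)
        grid = np.indices(volume.shape)
        rebuilt = np.zeros_like(volume, dtype=float)
        for p in peaks:
            amp = volume[tuple(p)]                       # intensity of the local maximum
            r2 = sum((g - c) ** 2 for g, c in zip(grid, p))
            rebuilt += amp * np.exp(-r2 / (2 * sigma ** 2))
        return rebuilt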

  20. Modeling of large amplitude plasma blobs in three-dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angus, Justin R.; Umansky, Maxim V.

    2014-01-15

    Fluctuations in fusion boundary and similar plasmas often have the form of filamentary structures, or blobs, that convectively propagate radially. This may lead to the degradation of plasma-facing components as well as of plasma confinement. Theoretical analysis of plasma blobs usually takes advantage of the so-called Boussinesq approximation of the potential vorticity equation, which greatly simplifies the treatment analytically and numerically. This approximation is only strictly justified when the blob density amplitude is small with respect to that of the background plasma. However, this is not the case for typical plasma blobs in the far scrape-off layer region, where the background density is small compared to that of the blob, and results obtained based on the Boussinesq approximation are questionable. In this report, the solution of the full vorticity equation, without the usual Boussinesq approximation, is proposed via a novel numerical approach. The method is used to solve for the evolution of 2D and 3D plasma blobs in a regime where the Boussinesq approximation is not valid. The Boussinesq solution underpredicts the cross-field transport in 2D. However, in 3D, for parameters typical of current tokamaks, the disparity between the radial cross-field transport from the Boussinesq approximation and the full solution is virtually non-existent due to the effects of the drift wave instability.

  1. CT acquisition technique and quantitative analysis of the lung parenchyma: variability and corrections

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Leader, J. K.; Coxson, Harvey O.; Scuirba, Frank C.; Fuhrman, Carl R.; Balkan, Arzu; Weissfeld, Joel L.; Maitz, Glenn S.; Gur, David

    2006-03-01

    The fraction of lung voxels below a pixel value "cut-off" has been correlated with pathologic estimates of emphysema. We performed a "standard" quantitative CT (QCT) lung analysis using a -950 HU cut-off to determine the volume fraction of emphysema (below the cut-off) and a "corrected" QCT analysis after removing small groups (5 and 10 pixels) of connected pixels ("blobs") below the cut-off. Two datasets of CT examinations, each of 15 subjects with a range of visible emphysema and pulmonary obstruction, were acquired at low dose and conventional dose for the same subjects and reconstructed using a high-spatial-frequency kernel at 2.5 mm section thickness. The "blob" size (i.e., number of connected pixels) removed was inversely related to the computed fraction of emphysema. The slopes of emphysema fraction versus blob size were 0.013, 0.009, and 0.005 for subjects with no emphysema and no pulmonary obstruction, moderate emphysema and pulmonary obstruction, and severe emphysema and severe pulmonary obstruction, respectively. The slopes of emphysema fraction versus blob size were 0.008 and 0.006 for low-dose and conventional CT examinations, respectively. The small blobs of pixels removed are most likely CT image artifacts and do not represent actual emphysema. The magnitude of the blob correction was appropriately associated with COPD severity. The blob correction appears to be applicable to QCT analysis in low-dose and conventional CT exams.
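
    A minimal sketch of this cut-off-and-blob-removal computation, assuming a single 2D slice, a precomputed binary lung mask, and the default 4-connected component labelling (all assumptions, not the study's exact settings):

    import numpy as np
    from scipy import ndimage as ndi

    def emphysema_fractions(hu_slice, lung_mask, cutoff=-950, min_blob=5):
        low = (hu_slice < cutoff) & lung_mask
        raw = low.sum() / lung_mask.sum()                           # standard QCT fraction

        labels, n = ndi.label(low)                                  # connected "blobs"
        sizes = ndi.sum(low, labels, index=np.arange(1, n + 1))
        keep_labels = np.flatnonzero(sizes > min_blob) + 1          # drop groups of <= min_blob pixels
        corrected = np.isin(labels, keep_labels).sum() / lung_mask.sum()
        return raw, corrected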

  2. Comparison of edge detection techniques for M7 subtype Leukemic cell in terms of noise filters and threshold value

    NASA Astrophysics Data System (ADS)

    Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande

    2017-11-01

    This paper focuses on studying and identifying suitable threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The aim is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of the edge detectors is essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). First, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because diagnostic support for this subtype was found to be lacking. Next, to enhance image quality, noise filters are applied, so that useful information can be acquired by comparing images with no filter, a median filter, and an average filter. Threshold values of 0, 0.25 and 0.5 are tested. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
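
    A hedged sketch of such a comparison, assuming a grayscale image scaled to [0, 1]; the filter sizes and the mapping of the single threshold onto Canny's low/high thresholds are assumptions:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import feature, filters

    def edge_maps(gray01, threshold):
        # gray01: grayscale cell image scaled to [0, 1]
        variants = {
            'none':    gray01,
            'median':  ndi.median_filter(gray01, size=3),
            'average': ndi.uniform_filter(gray01, size=3),
        }
        results = {}
        for name, img in variants.items():
            sobel_edges = filters.sobel(img) > threshold       # threshold the Sobel gradient magnitude
            canny_edges = feature.canny(img, sigma=1.0,
                                        low_threshold=0.5 * threshold,
                                        high_threshold=threshold)
            results[name] = (sobel_edges, canny_edges)
        return results

    # e.g. compare the three threshold values discussed above:
    # maps = {t: edge_maps(image, t) for t in (0.0, 0.25, 0.5)}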

  3. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it

    2014-12-28

    A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
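
    The core of the iterative Boltzmann inversion scheme mentioned above is the standard update U_{i+1}(r) = U_i(r) + kT ln[g_i(r)/g_target(r)]. A minimal tabulated-potential sketch follows; the damping factor and the small regularization constant are assumptions added for numerical robustness.

    import numpy as np

    def ibi_update(U, g_current, g_target, kT=1.0, alpha=1.0, eps=1e-12):
        # One iterative-Boltzmann-inversion step for a tabulated pair potential U(r):
        # U_{i+1}(r) = U_i(r) + alpha * kT * ln( g_i(r) / g_target(r) )
        correction = kT * np.log((g_current + eps) / (g_target + eps))
        return U + alpha * correction     # alpha < 1 damps the update if the iteration oscillates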

  4. SDSS IV MaNGA: Discovery of an Hα Blob Associated with a Dry Galaxy Pair—Ejected Gas or a “Dark” Galaxy Candidate?

    NASA Astrophysics Data System (ADS)

    Lin, Lihwai; Lin, Jing-Hua; Hsu, Chin-Hao; Fu, Hai; Huang, Song; Sánchez, Sebastián F.; Gwyn, Stephen; Gelfand, Joseph D.; Cheung, Edmond; Masters, Karen; Peirani, Sébastien; Rujopakarn, Wiphu; Stark, David V.; Belfiore, Francesco; Bothwell, M. S.; Bundy, Kevin; Hagen, Alex; Hao, Lei; Huang, Shan; Law, David; Li, Cheng; Lintott, Chris; Maiolino, Roberto; Roman-Lopes, Alexandre; Wang, Wei-Hao; Xiao, Ting; Yuan, Fangting; Bizyaev, Dmitry; Malanushenko, Elena; Drory, Niv; Fernández-Trincado, J. G.; Pace, Zach; Pan, Kaike; Thomas, Daniel

    2017-03-01

    We report the discovery of a mysterious giant Hα blob that is ˜8 kpc away from the main MaNGA target 1-24145, one component of a dry galaxy merger, and has been identified in the first-year SDSS-IV MaNGA data. The size of the Hα blob is ˜3-4 kpc in radius, and the Hα distribution is centrally concentrated. However, there is no optical continuum counterpart in the deep broadband images reaching ˜26.9 mag arcsec-2 in surface brightness. We estimate that the masses of the ionized and cold gases are 3.3× {10}5 {M}⊙ and < 1.3× {10}9 {M}⊙ , respectively. The emission-line ratios indicate that the Hα blob is photoionized by a combination of massive young stars and AGNs. Furthermore, the ionization line ratio decreases from MaNGA 1-24145 to the Hα blob, suggesting that the primary ionizing source may come from MaNGA 1-24145, likely a low-activity AGN. Possible explanations for this Hα blob include the AGN outflow, the gas remnant being tidally or ram-pressure stripped from MaNGA 1-24145, or an extremely low surface brightness galaxy. However, the stripping scenario is less favored according to galaxy merger simulations and the morphology of the Hα blob. With the current data, we cannot distinguish whether this Hα blob is ejected gas due to a past AGN outburst, or a special category of “ultra-diffuse galaxy” interacting with MaNGA 1-24145 that further induces the gas inflow to fuel the AGN in MaNGA 1-24145.

  5. Experiments on the Propagation of Plasma Filaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Noam; Egedal, Jan; Fox, Will

    2008-07-04

    We investigate experimentally the motion and structure of isolated plasma filaments propagating through neutral gas. Plasma filaments, or 'blobs,' arise from turbulent fluctuations in a range of plasmas. Our experimental geometry is toroidally symmetric, and the blobs expand to a larger major radius under the influence of a vertical electric field. The electric field, which is caused by {nabla}B and curvature drifts in a 1/R magnetic field, is limited by collisional damping on the neutral gas. The blob's electrostatic potential structure and the resulting ExB flow field give rise to a vortex pair and a mushroom shape, which are consistentmore » with nonlinear plasma simulations. We observe experimentally this characteristic mushroom shape for the first time. We also find that the blob propagation velocity is inversely proportional to the neutral density and decreases with time as the blob cools.« less

  6. A directed matched filtering algorithm (DMF) for discriminating hydrothermal alteration zones using the ASTER remote sensing data

    NASA Astrophysics Data System (ADS)

    Fereydooni, H.; Mojeddifar, S.

    2017-09-01

    This study introduces a different procedure for implementing the matched filtering algorithm (MF) on ASTER images to obtain a distribution map of alteration minerals in the northwestern part of the Kerman Cenozoic Magmatic Arc (KCMA). This region contains many areas with porphyry copper mineralization, such as Meiduk, Abdar, Kader, Godekolvari, Iju, Serenu, Chahfiroozeh and Parkam. Argillization, sericitization and propylitization are the most common types of hydrothermal alteration in the area. Matched filtering produces, for each alteration mineral, a map of matched filtering scores, called the MF image. To identify the pixels which contain only one material (endmember), an appropriate threshold value should be applied to the MF image; the chosen threshold classifies an MF image into background and target pixels. This article argues that the current thresholding process (the choice of a single threshold) leads to misclassification in the MF image. To address the issue, this paper introduces the directed matched filtering (DMF) algorithm, in which a spectral-signature-based filter (SSF) is used instead of the thresholding process. The SSF is a user-defined rule package containing numerical descriptions of the spectral reflectance of alteration minerals; for each alteration mineral, every spectral band is bounded by an upper and a lower limit. SSF rules were developed for chlorite, kaolinite, alunite, and muscovite to map the alteration zones. The validation showed that, first, selecting a contiguous range of MF values could not identify the desired targets and, second, a considerable number of pure pixels unexpectedly had MF scores below the threshold value. The comparison between the DMF results and field studies showed an accuracy of 88.51%.
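
    A hedged sketch of the two ingredients: a conventional matched-filter score per pixel and a simple per-band reflectance window check in the spirit of the SSF. The MF formulation shown is the standard covariance-whitened form, and the window values are placeholders; neither is taken from the paper.

    import numpy as np

    def matched_filter_scores(cube, target):
        # cube: (rows, cols, bands) reflectance; target: (bands,) endmember spectrum
        X = cube.reshape(-1, cube.shape[-1]).astype(float)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
        d = target - mu
        scores = (X - mu) @ cov_inv @ d / (d @ cov_inv @ d)
        return scores.reshape(cube.shape[:2])

    def band_window_mask(cube, windows):
        # windows: one (low, high) reflectance range per band; True where all bands fall inside
        lows = np.array([w[0] for w in windows])
        highs = np.array([w[1] for w in windows])
        return np.all((cube >= lows) & (cube <= highs), axis=-1)

    # A DMF-style map would keep pixels where band_window_mask(...) is True,
    # instead of applying a single global threshold to matched_filter_scores(...).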

  7. On the performance of digital phase locked loops in the threshold region

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    Extended Kalman filter algorithms are used to obtain a digital phase lock loop structure for demodulation of angle modulated signals. It is shown that the error variance equations obtained directly from this structure enable one to predict threshold if one retains higher frequency terms. This is in sharp contrast to the similar analysis of the analog phase lock loop, where the higher frequency terms are filtered out because of the low pass filter in the loop. Results are compared to actual simulation results and threshold region results obtained previously.

  8. Adaptive gain and filtering circuit for a sound reproduction system

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)

    1998-01-01

    Adaptive compressive gain and level dependent spectral shaping circuitry for a hearing aid include a microphone to produce an input signal and a plurality of channels connected to a common circuit output. Each channel has a preset frequency response. Each channel includes a filter with a preset frequency response to receive the input signal and to produce a filtered signal, a channel amplifier to amplify the filtered signal to produce a channel output signal, a threshold register to establish a channel threshold level, and a gain circuit. The gain circuit increases the gain of the channel amplifier when the channel output signal falls below the channel threshold level and decreases the gain of the channel amplifier when the channel output signal rises above the channel threshold level. A transducer produces sound in response to the signal passed by the common circuit output.
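
    A minimal sketch of the per-channel behaviour described above: each channel band-passes the input with a preset filter, applies its gain, and nudges the gain up or down depending on whether the channel output level is below or above its threshold. The filter order, the RMS level estimate, and the step size are assumptions, not the patented circuit's parameters.

    import numpy as np
    from scipy.signal import butter, lfilter

    class Channel:
        def __init__(self, fs, band_hz, threshold, gain=1.0, step=0.05):
            # band_hz: (low, high) pass band of this channel's preset filter
            self.b, self.a = butter(2, band_hz, btype='bandpass', fs=fs)
            self.threshold, self.gain, self.step = threshold, gain, step

        def process(self, block):
            out = self.gain * lfilter(self.b, self.a, block)
            level = np.sqrt(np.mean(out ** 2))            # RMS level of the channel output
            if level < self.threshold:
                self.gain *= 1 + self.step                # raise gain for quiet channels
            else:
                self.gain *= 1 - self.step                # lower gain for loud channels
            return out

    def process_block(channels, block):
        return sum(ch.process(block) for ch in channels)  # summed at the common circuit output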

  9. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
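
    The one-scan idea can be illustrated with a run-based labelling pass. The sketch below uses a union-find structure in Python purely for illustration; the paper's method instead relies on a linked-list tree tailored to FPGA hardware, and the choice to report only per-blob pixel counts is an assumption.

    def one_scan_blob_areas(binary):
        # binary: 2D array of 0/1 pixels; returns one pixel count per detected blob
        parent, area = [], []                       # union-find over run labels
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]       # path compression
                i = parent[i]
            return i

        prev_runs = []
        for row in binary:
            runs, x, w = [], 0, len(row)
            while x < w:
                if row[x]:
                    x0 = x
                    while x < w and row[x]:
                        x += 1
                    label = len(parent)
                    parent.append(label)
                    area.append(x - x0)
                    # merge with 8-connected runs from the previous row
                    for px0, px1, plab in prev_runs:
                        if px0 <= x and px1 >= x0 - 1:
                            ra, rb = find(plab), find(label)
                            if ra != rb:
                                parent[rb] = ra
                    runs.append((x0, x - 1, label))
                else:
                    x += 1
            prev_runs = runs

        totals = {}
        for lab, a in enumerate(area):
            root = find(lab)
            totals[root] = totals.get(root, 0) + a
        return sorted(totals.values(), reverse=True)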

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, X.; Xia, C.; Keppens, R.

    We present the first multidimensional, magnetohydrodynamic simulations that capture the initial formation and long-term sustainment of the enigmatic coronal rain phenomenon. We demonstrate how thermal instability can induce a spectacular display of in situ forming blob-like condensations which then start their intimate ballet on top of initially linear force-free arcades. Our magnetic arcades host a chromospheric, transition region, and coronal plasma. Following coronal rain dynamics for over 80 minutes of physical time, we collect enough statistics to quantify blob widths, lengths, velocity distributions, and other characteristics which directly match modern observational knowledge. Our virtual coronal rain displays the deformation of blobs into V-shaped features, interactions of blobs due to mostly pressure-mediated levitations, and gives the first views of blobs that evaporate in situ or are siphoned over the apex of the background arcade. Our simulations pave the way for systematic surveys of coronal rain showers in true multidimensional settings to connect parameterized heating prescriptions with rain statistics, ultimately allowing us to quantify the coronal heating input.

  11. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro

    The increasingly large data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.

  12. Multidimensional Modeling of Coronal Rain Dynamics

    NASA Astrophysics Data System (ADS)

    Fang, X.; Xia, C.; Keppens, R.

    2013-07-01

    We present the first multidimensional, magnetohydrodynamic simulations that capture the initial formation and long-term sustainment of the enigmatic coronal rain phenomenon. We demonstrate how thermal instability can induce a spectacular display of in situ forming blob-like condensations which then start their intimate ballet on top of initially linear force-free arcades. Our magnetic arcades host a chromospheric, transition region, and coronal plasma. Following coronal rain dynamics for over 80 minutes of physical time, we collect enough statistics to quantify blob widths, lengths, velocity distributions, and other characteristics which directly match modern observational knowledge. Our virtual coronal rain displays the deformation of blobs into V-shaped features, interactions of blobs due to mostly pressure-mediated levitations, and gives the first views of blobs that evaporate in situ or are siphoned over the apex of the background arcade. Our simulations pave the way for systematic surveys of coronal rain showers in true multidimensional settings to connect parameterized heating prescriptions with rain statistics, ultimately allowing us to quantify the coronal heating input.

  13. Possible Overlaps Between Blobs, Grism Apertures, and Dithers

    NASA Astrophysics Data System (ADS)

    Ryan, R. E.; McCullough, P. R.

    2017-06-01

    We present an investigation into possible overlaps between the known IR blobs and the grism aperture reference positions and IR dither patterns. Each aperture was designed to place the science target (e.g. a specific star) on a cosmetically clean area of the IR detector. Similarly, the dither patterns were designed to mitigate cosmetic defects by rarely (or ideally never) placing such targets on known defects. Because blobs accumulate with time, the originally defined apertures and dither patterns may no longer accomplish their goals, so it is important to reverify these combinations. We find two potential overlaps between the blob, aperture, and dither combinations, but do not recommend any changes to the current suite of aperture reference positions and/or dither patterns for two reasons. First, one of the overlaps occurs with a dither/aperture combination that is seldom used for high-value science operations and is more common for wide-field surveys/mosaics. Second, the other overlap is 8.7 pix from a blob that has a fiducial radius of 10 pix, which already represents a very conservative distance. We conclude that a similar analysis should be repeated as new blobs occur, to continue to ensure ideal operations for high-value science targets. The purpose of this report is to document the analysis in order to facilitate its repetition in the future.

  14. Cross-field transport by instabilities and blobs in a magnetized toroidal plasma.

    PubMed

    Podestà, M; Fasoli, A; Labit, B; Furno, I; Ricci, P; Poli, F M; Diallo, A; Müller, S H; Theiler, C

    2008-07-25

    The mechanisms for anomalous transport across the magnetic field are investigated in a toroidal magnetized plasma. The role of plasma instabilities and macroscopic density structures (blobs) is discussed. Examples from a scenario with open magnetic field lines are shown. A transition from a main plasma region into a loss region is reproduced. In the main plasma, which includes particle and heat source locations, the transport is dominated by the fluctuation-induced particle and heat flux associated with a plasma instability. On the low-field side, the cross-field transport is ascribed to the intermittent ejection of macroscopic blobs propagating toward the outer wall. It is shown that instabilities and blobs represent fundamentally different mechanisms for cross-field transport.

  15. Web Server for Peak Detection, Baseline Correction, and Alignment in Two-Dimensional Gas Chromatography Mass Spectrometry-Based Metabolomics Data.

    PubMed

    Tian, Tze-Feng; Wang, San-Yuan; Kuo, Tien-Chueh; Tan, Cheng-En; Chen, Guan-Yuan; Kuo, Ching-Hua; Chen, Chi-Hsin Sally; Chan, Chang-Chuan; Lin, Olivia A; Tseng, Y Jane

    2016-11-01

    Two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) is superior for chromatographic separation and provides great sensitivity for complex biological fluid analysis in metabolomics. However, GC×GC/TOF-MS data processing is currently limited to vendor software and typically requires several preprocessing steps. In this work, we implement a web-based platform, which we call GC²MS, to facilitate the application of recent advances in GC×GC/TOF-MS, especially for metabolomics studies. The core processing workflow of GC²MS consists of blob/peak detection, baseline correction, and blob alignment. GC²MS treats GC×GC/TOF-MS data as pictures and clusters the pixels as blobs according to the brightness of each pixel to generate a blob table. GC²MS then aligns the blobs of two GC×GC/TOF-MS data sets according to their distance and similarity. The blob distance and similarity are the Euclidean distance of the first and second retention times of two blobs and the Pearson's correlation coefficient of the two mass spectra, respectively. GC²MS also directly corrects the raw data baseline. The analytical performance of GC²MS was evaluated using GC×GC/TOF-MS data sets of Angelica sinensis compounds acquired under different experimental conditions and of human plasma samples. The results show that GC²MS is an easy-to-use tool for detecting peaks and correcting baselines, and GC²MS is able to align GC×GC/TOF-MS data sets acquired under different experimental conditions. GC²MS is freely accessible at http://gc2ms.web.cmdm.tw.
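
    As a rough illustration of the alignment step described above, the sketch below scores a candidate blob pair by the Euclidean distance of their two retention times and the Pearson correlation of their mass spectra. The dictionary layout, field names, and random spectra are illustrative assumptions, not GC²MS's internal data model.

        import numpy as np
        from scipy.stats import pearsonr

        def blob_pair_score(blob_a, blob_b):
            """Distance and spectral similarity of two candidate blobs.

            Each blob is a dict with first/second retention times ('rt1', 'rt2')
            and a mass spectrum sampled on a common m/z grid ('spectrum').
            """
            # Euclidean distance in the 2D retention-time plane.
            dist = np.hypot(blob_a["rt1"] - blob_b["rt1"], blob_a["rt2"] - blob_b["rt2"])
            # Pearson correlation of the two mass spectra.
            sim, _ = pearsonr(blob_a["spectrum"], blob_b["spectrum"])
            return dist, sim

        rng = np.random.default_rng(0)
        a = {"rt1": 12.3, "rt2": 1.42, "spectrum": rng.random(200)}
        b = {"rt1": 12.4, "rt2": 1.45, "spectrum": rng.random(200)}
        print(blob_pair_score(a, b))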

  16. The role of color and attention-to-color in mirror-symmetry perception.

    PubMed

    Gheorghiu, Elena; Kingdom, Frederick A A; Remkes, Aaron; Li, Hyung-Chul O; Rainville, Stéphane

    2016-07-11

    The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) 'segregated' - symmetric blobs were of one color, random blobs of the other color(s); (2) 'random-segregated' - as above but with the symmetric color randomly selected on each trial; (3) 'non-segregated' - symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) 'anti-symmetric' - symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.

  17. The role of color and attention-to-color in mirror-symmetry perception

    PubMed Central

    Gheorghiu, Elena; Kingdom, Frederick A. A.; Remkes, Aaron; Li, Hyung-Chul O.; Rainville, Stéphane

    2016-01-01

    The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) ‘segregated’ – symmetric blobs were of one color, random blobs of the other color(s); (2) ‘random-segregated’ – as above but with the symmetric color randomly selected on each trial; (3) ‘non-segregated’ – symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) ‘anti-symmetric’ – symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective. PMID:27404804

  18. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors

    PubMed Central

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-01-01

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
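
    The listing below sketches the general task of labeling blobs in a single image scan. It uses a simple union-find of provisional labels rather than the linked-list structure the paper proposes, so it illustrates the problem being solved, not the authors' memory-optimized algorithm.

        import numpy as np

        def label_blobs(binary):
            """One-scan connected-component labeling with union-find (4-connectivity)."""
            labels = np.zeros(binary.shape, dtype=int)
            parent = [0]                      # parent[0] is the background label

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path compression
                    x = parent[x]
                return x

            next_label = 1
            h, w = binary.shape
            for y in range(h):
                for x in range(w):
                    if not binary[y, x]:
                        continue
                    up = labels[y - 1, x] if y > 0 else 0
                    left = labels[y, x - 1] if x > 0 else 0
                    if up == 0 and left == 0:
                        parent.append(next_label)         # start a new blob
                        labels[y, x] = next_label
                        next_label += 1
                    else:
                        labels[y, x] = min(l for l in (up, left) if l)
                        if up and left and find(up) != find(left):
                            parent[max(find(up), find(left))] = min(find(up), find(left))
            # Resolve label equivalences accumulated during the scan.
            for y in range(h):
                for x in range(w):
                    if labels[y, x]:
                        labels[y, x] = find(labels[y, x])
            return labels

        img = np.array([[1, 1, 0, 0],
                        [0, 1, 0, 1],
                        [0, 0, 0, 1]], dtype=bool)
        print(label_blobs(img))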

  19. Magnetic shuffling of coronal downdrafts

    NASA Astrophysics Data System (ADS)

    Petralia, A.; Reale, F.; Orlando, S.

    2017-02-01

    Context. Channelled fragmented downflows are ubiquitous in magnetized atmospheres, and have recently been addressed based on an observation after a solar eruption. Aims: We study the possible back-effect of the magnetic field on the propagation of confined flows. Methods: We compared two 3D magnetohydrodynamic simulations of dense supersonic plasma blobs that fall down along a coronal magnetic flux tube. In one, the blobs move strictly along the field lines; in the other, the initial velocity of the blobs is not perfectly aligned with the magnetic field and the field is weaker. Results: The aligned blobs remain compact while flowing along the tube, with the generated shocks. The misaligned blobs are disrupted and merge through the chaotic shuffling of the field lines. They are structured into thinner filaments. Alfvén wave fronts are generated together with shocks ahead of the dense moving front. Conclusions: Downflowing plasma fragments can be chaotically and efficiently mixed if their motion is misaligned with field lines, with broad implications for disk accretion in protostars, coronal eruptions, and rain, for example. Movies associated to Figs. 2 and 3 are available at http://www.aanda.org

  20. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD).

    PubMed

    Suzuki, Kenji

    2009-09-21

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved.

  1. Choice of threshold line angle for binary phase-only filters

    NASA Astrophysics Data System (ADS)

    Vijaya Kumar, Bhagavatula; Hendrix, Charles D.

    1993-10-01

    The choice of threshold line angle (TLA) is an important issue in designing Binary Phase-Only Filters (BPOFs). In this paper, we derive expressions that explicitly relate the TLA to correlation peak intensity. We also show some examples that illustrate the effect of choosing the wrong TLA.
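
    For orientation, the snippet below builds a BPOF from a reference image using one common TLA convention: binarize each Fourier coefficient according to the sign of its real part after rotation by the threshold line angle. The exact convention and correlation setup in the paper may differ, so treat this purely as a sketch.

        import numpy as np

        def bpof(image, tla_rad):
            """Binary phase-only filter under a threshold-line-angle convention."""
            F = np.fft.fft2(image)
            projection = np.real(F * np.exp(-1j * tla_rad))
            return np.where(projection >= 0, 1.0, -1.0)

        def correlate(scene, filt):
            # Correlation-plane intensity from the filtered inverse transform.
            out = np.fft.ifft2(np.fft.fft2(scene) * np.conj(filt))
            return np.abs(out) ** 2

        rng = np.random.default_rng(1)
        ref = rng.random((64, 64))
        plane = correlate(ref, bpof(ref, tla_rad=np.pi / 4))
        print(plane.max())                     # correlation peak intensity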

  2. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
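
    A minimal sketch of the tracking building blocks mentioned above is given below: a textbook constant-velocity Kalman filter step for one target and a simple overlap-rate score for associating detected blobs with tracks. The matrices, noise levels, and the overlap definition are generic illustrative choices, not the paper's tuning.

        import numpy as np

        # Constant-velocity Kalman filter for one tracked blob (state: x, y, vx, vy).
        dt = 1.0
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = 0.01 * np.eye(4)          # process noise
        R = 1.0 * np.eye(2)           # measurement noise

        def kf_step(x, P, z):
            # Predict.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the associated blob centroid z = (x, y).
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P

        def overlap_rate(box_a, box_b):
            """Intersection area divided by the smaller box area; boxes are (x0, y0, x1, y1)."""
            ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
            ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
            inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
            area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
            return inter / min(area(box_a), area(box_b))

        x, P = np.array([10.0, 20.0, 1.0, 0.5]), np.eye(4)
        x, P = kf_step(x, P, z=np.array([11.2, 20.4]))
        print(x, overlap_rate((0, 0, 4, 4), (2, 2, 6, 6)))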

  3. Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Nnolim, Uche A.

    2016-07-01

    An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering of salt-and-pepper impulse noise. The implemented filter is robust to impulse noise ranging from low to high density levels. The algorithm involves a switching scheme in addition to utilizing the unsymmetric trimmed mean/median deviation to filter image noise while greatly preserving image edges, regardless of impulse noise density (ND). It operates with threshold parameters selected manually or adaptively estimated from the image statistics. It is further combined with the partial differential equations (PDE)-based AD for edge preservation at high NDs to enhance the properties of the trimmed mean filter. Based on experimental results, the proposed filter easily and consistently outperforms the median filter and its other variants ranging from simple to complex filter structures, especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enables the filter to avoid smoothing an uncorrupted image, and filtering is activated only when impulse noise is present. Ultimately, the particular properties of the filter make its combination with the AD algorithm a unique and powerful edge-preservation smoothing filter at high-impulse NDs.
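
    As a simplified illustration of the switching idea, the sketch below replaces only pixels flagged as salt-or-pepper impulses with a trimmed mean of their non-impulse neighbours; the entropy guidance and the anisotropic-diffusion stage of the actual filter are omitted, and the impulse detector is deliberately naive.

        import numpy as np

        def switching_trimmed_mean(img, win=3, low=0, high=255):
            """Filter only the pixels detected as impulses (value <= low or >= high)."""
            pad = win // 2
            padded = np.pad(img.astype(float), pad, mode="reflect")
            out = img.astype(float).copy()
            noisy = (img <= low) | (img >= high)          # simple impulse detector
            for y, x in zip(*np.nonzero(noisy)):
                window = padded[y:y + win, x:x + win].ravel()
                clean = window[(window > low) & (window < high)]
                # Fall back to the plain median if every neighbour is an impulse.
                out[y, x] = clean.mean() if clean.size else np.median(window)
            return out

        rng = np.random.default_rng(2)
        img = rng.integers(60, 200, size=(32, 32))
        img[rng.random(img.shape) < 0.1] = 255            # add salt noise
        print(switching_trimmed_mean(img).mean())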

  4. Pareidolia in infants.

    PubMed

    Kato, Masaharu; Mugitani, Ryoko

    2015-01-01

    Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants' orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the unique sound source in a face and the literature has shown that infants older than 6 months already have sound-mouth association, increased looking time towards the bottom blob (pareidolic mouth area) during sound presentation indicated that they illusorily perceive a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds did not show any difference in looking time under both the upright and inverted conditions, suggesting that the perception of pareidolic faces, through sound association, comes to develop at around 8 to 10 months after birth.

  5. Pareidolia in Infants

    PubMed Central

    Kato, Masaharu; Mugitani, Ryoko

    2015-01-01

    Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants’ orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the unique sound source in a face and the literature has shown that infants older than 6 months already have sound-mouth association, increased looking time towards the bottom blob (pareidolic mouth area) during sound presentation indicated that they illusorily perceive a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds did not show any difference in looking time under both the upright and inverted conditions, suggesting that the perception of pareidolic faces, through sound association, comes to develop at around 8 to 10 months after birth. PMID:25689630

  6. Efficiency and robustness of different bus network designs

    NASA Astrophysics Data System (ADS)

    Pang, John Zhen Fu; Bin Othman, Nasri; Ng, Keng Meng; Monterola, Christopher

    2015-07-01

    We compare the efficiencies and robustness of four transport networks that can possibly be formed as a result of deliberate city planning. The networks are constructed based on their spatial resemblance to the cities of Manhattan (lattice), Sudan (random), Beijing (single-blob) and Greater Cairo (dual-blob). For a given type, a genetic algorithm is employed to obtain an optimized set of the bus routes. We then simulate how commuters travel using Yen's algorithm for k shortest paths on an adjacency matrix. The cost of traveling, such as walking between stations, is captured by varying the weighted sums of matrices. We also consider the number of transfers a posteriori by looking at the computed shortest paths. With consideration to distances via radius of gyration, redundancies of travel and number of bus transfers, our simulations indicate that random and dual-blob networks are more efficient than single-blob and lattice networks. Moreover, the dual-blob type is least robust when node removals are targeted but is most resilient when node failures are random. The work hopes to guide and provide technical perspectives on how the geospatial distribution of a city limits the optimality of transport designs.
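
    For reference, networkx's shortest_simple_paths yields paths in order of increasing weight (Yen's algorithm when a weight is given). The toy graph below simply shows how the k best routes between two stops can be enumerated; the topology and weights are arbitrary and not taken from the study.

        from itertools import islice
        import networkx as nx

        # Toy bus network: nodes are stops, edge weights are travel times.
        G = nx.Graph()
        G.add_weighted_edges_from([
            ("A", "B", 4), ("B", "C", 3), ("A", "D", 2),
            ("D", "C", 6), ("B", "D", 1),
        ])

        # Enumerate the k shortest simple paths between two stops.
        k = 3
        for path in islice(nx.shortest_simple_paths(G, "A", "C", weight="weight"), k):
            cost = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
            print(path, cost)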

  7. Blob Flowers.

    ERIC Educational Resources Information Center

    Canfield, Elaine

    2003-01-01

    Describes an art project called blob flowers in which fifth-grade students created pictures of flowers using watercolor and markers. Explains that the lesson incorporates ideas from art and science. Discusses in detail how the students created their flowers. (CMK)

  8. A masking level difference due to harmonicity.

    PubMed

    Treurniet, W C; Boucher, D R

    2001-01-01

    The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.

  9. Non-causal spike filtering improves decoding of movement intention for intracortical BCIs

    PubMed Central

    Masse, Nicolas Y.; Jarosiewicz, Beata; Simeral, John D.; Bacher, Daniel; Stavisky, Sergey D.; Cash, Sydney S.; Oakley, Erin M.; Berhanu, Etsub; Eskandar, Emad; Friehs, Gerhard; Hochberg, Leigh R.; Donoghue, John P.

    2014-01-01

    Background: Multiple types of neural signals are available for controlling assistive devices through brain-computer interfaces (BCIs). Intracortically-recorded spiking neural signals are attractive for BCIs because they can in principle provide greater fidelity of encoded information compared to electrocorticographic (ECoG) signals and electroencephalograms (EEGs). Recent reports show that the information content of these spiking neural signals can be reliably extracted simply by causally band-pass filtering the recorded extracellular voltage signals and then applying a spike detection threshold, without relying on “sorting” action potentials. New method: We show that replacing the causal filter with an equivalent non-causal filter increases the information content extracted from the extracellular spiking signal and improves decoding of intended movement direction. This method can be used for real-time BCI applications by using a 4 ms lag between recording and filtering neural signals. Results: Across 18 sessions from two people with tetraplegia enrolled in the BrainGate2 pilot clinical trial, we found that threshold crossing events extracted using this non-causal filtering method were significantly more informative of each participant’s intended cursor kinematics compared to threshold crossing events derived from causally filtered signals. This new method decreased the mean angular error between the intended and decoded cursor direction by 9.7° for participant S3, who was implanted 5.4 years prior to this study, and by 3.5° for participant T2, who was implanted 3 months prior to this study. Conclusions: Non-causally filtering neural signals prior to extracting threshold crossing events may be a simple yet effective way to condition intracortically recorded neural activity for direct control of external devices through BCIs. PMID:25128256
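
    The contrast between causal and non-causal (zero-phase) filtering followed by threshold-crossing extraction can be sketched as follows. The sampling rate, band edges, and the -4.5 threshold factor are common conventions in spike processing, used here as assumptions rather than values from this study.

        import numpy as np
        from scipy.signal import butter, filtfilt, lfilter

        fs = 30000                                     # assumed sampling rate [Hz]
        b, a = butter(4, [250, 5000], btype="bandpass", fs=fs)

        rng = np.random.default_rng(3)
        voltage = rng.standard_normal(fs)              # one second of synthetic broadband data

        causal = lfilter(b, a, voltage)                # causal (phase-distorting) filter
        noncausal = filtfilt(b, a, voltage)            # zero-phase, forward-backward filter

        def threshold_crossings(x, k=-4.5):
            # Indices where the signal crosses k times a robust noise estimate.
            thr = k * np.median(np.abs(x)) / 0.6745
            return np.nonzero((x[1:] < thr) & (x[:-1] >= thr))[0] + 1

        print(len(threshold_crossings(causal)), len(threshold_crossings(noncausal)))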

  10. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a filter based on the Euclidean distance method are applied to the image; second, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain a set of threshold values. The average of all the calculated thresholds is then taken: half of this average is used as the high threshold value, and half of the high threshold value is used as the low threshold value. Experimental results show that this new method can effectively suppress noise disturbance, keep the edge information, and also improve the edge detection accuracy.
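
    A rough sketch of the threshold rule described above is given below using scikit-image, with a Sobel gradient standing in for the Frei-Chen operator and a random test image; the tile size and other details are illustrative assumptions only.

        import numpy as np
        from scipy import ndimage
        from skimage import feature, filters, util

        rng = np.random.default_rng(4)
        image = util.img_as_float(rng.random((128, 128)))
        image = ndimage.median_filter(image, size=3)          # noise suppression

        # Gradient magnitude (Sobel here as a stand-in for the Frei-Chen operator).
        grad = filters.sobel(image)

        # Otsu threshold per tile, then derive the Canny thresholds as described:
        # high = half the average of the tile thresholds, low = half of high.
        tiles = [grad[y:y + 32, x:x + 32]
                 for y in range(0, 128, 32) for x in range(0, 128, 32)]
        otsu_vals = [filters.threshold_otsu(t) for t in tiles]
        high = 0.5 * np.mean(otsu_vals)
        low = 0.5 * high

        edges = feature.canny(image, sigma=1.0, low_threshold=low, high_threshold=high)
        print(edges.sum(), "edge pixels")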

  11. Blobs and drift wave dynamics

    DOE PAGES

    Zhang, Yanzeng; Krasheninnikov, S. I.

    2017-09-29

    The modified Hasegawa-Mima equation retaining all nonlinearities is investigated from the point of view of the formation of blobs. The linear analysis shows that the amplitude of the drift wave packet propagating in the direction of decreasing background plasma density increases and eventually saturates due to nonlinear effects. Nonlinear modification of the time averaged plasma density profile results in the formation of large amplitude modes locked in the radial direction, but still propagating in the poloidal direction, which resembles the experimentally observed chain of blobs propagating in the poloidal direction. Such specific density profiles, causing the locking of drift waves, could form naturally at the edge of a tokamak due to a neutral ionization source. Thus, locked modes can grow in situ due to plasma instabilities, e.g., caused by finite resistivity. Furthermore, the modulation instability (in the poloidal direction) of these locked modes can result in a blob-like burst of plasma density.

  12. Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.

    PubMed

    Holm, Darryl D; Jacobs, Henry O

    2017-01-01

    Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.

  13. Solar Radio Burst Associated with the Falling Bright EUV Blob

    NASA Astrophysics Data System (ADS)

    Karlický, Marian; Zemanová, Alena; Dudík, Jaroslav; Radziszewski, Krzysztof

    2018-02-01

    At the beginning of the 2015 November 4 flare, in the 1300–2000 MHz frequency range, we observed a very rare slow positively drifting burst. We searched for associated phenomena in simultaneous EUV observations made by IRIS, SDO/AIA, and Hinode/XRT, as well as in Hα observations. We found that this radio burst was accompanied by a bright blob, visible at transition-region, coronal, and flare temperatures, falling down to the chromosphere along a dark loop with a velocity of about 280 km s⁻¹. The dark loop was visible in Hα but disappeared afterward. Furthermore, we found that the falling blob interacted with the chromosphere, as expressed by a sudden change of the Hα spectra at the location of this interaction. Considering different possibilities, we propose that the observed slow positively drifting burst is generated by the thermal conduction front formed in front of the falling hot EUV blob.

  14. A model of fast radio bursts: collisions between episodic magnetic blobs

    NASA Astrophysics Data System (ADS)

    Li, Long-Biao; Huang, Yong-Feng; Geng, Jin-Jun; Li, Bing

    2018-06-01

    Fast radio bursts (FRBs) are bright radio pulses from the sky with millisecond durations and Jansky-level flux densities. Their origins are still largely uncertain. Here we suggest a new model for FRBs. We argue that the collision of a white dwarf with a black hole can generate a transient accretion disk, from which powerful episodic magnetic blobs will be launched. The collision between two consecutive magnetic blobs can result in a catastrophic magnetic reconnection, which releases a large amount of free magnetic energy and forms a forward shock. The shock propagates through the cold magnetized plasma within the blob in the collision region, radiating through the synchrotron maser mechanism, which is responsible for a non-repeating FRB signal. Our calculations show that the theoretical energetics, radiation frequency, duration timescale and event rate can be very consistent with the observational characteristics of FRBs.

  15. Pulsed Flows Along a Cusp Structure Observed with SDO/AIA

    NASA Technical Reports Server (NTRS)

    Thompson, Barbara; Demoulin, P.; Mandrini, C. H.; Mays, M. L.; Ofman, L.; Driel-Gesztelyi, L. Van; Viall, N. M.

    2011-01-01

    We present observations of a cusp-shaped structure that formed after a flare and coronal mass ejection on 14 February 2011. Throughout the evolution of the cusp structure, blob features up to a few Mm in size were observed flowing along the legs and stalk of the cusp at projected speeds ranging from 50 to 150 km/sec. Around two dozen blob features, on order of 1 - 3 minutes apart, were tracked in multiple AIA EUV wavelengths. The blobs flowed outward (away from the Sun) along the cusp stalk, and most of the observed speeds were either constant or decelerating. We attempt to reconstruct the 3-D magnetic field of the evolving structure, discuss the possible drivers of the flows (including pulsed reconnection and tearing mode instability), and compare the observations to studies of pulsed reconnection and blob flows in the solar wind and the Earth's magnetosphere.

  16. Scrape-off layer tokamak plasma turbulence

    NASA Astrophysics Data System (ADS)

    Bisai, N.; Singh, R.; Kaw, P. K.

    2012-05-01

    Two-dimensional (2D) interchange turbulence in the scrape-off layer of tokamak plasmas and its subsequent contribution to anomalous plasma transport has been studied in recent years using electron continuity, current balance, and electron energy equations. In this paper, it is demonstrated numerically that the inclusion of the ion energy equation in the simulation changes the nature of the plasma turbulence. Finite ion temperature reduces the floating potential by about 15% compared with the cold ion temperature approximation and also reduces the radial electric field. Rotation of plasma blobs at an angular velocity of about 1.5×10⁵ rad/s has been observed. It is found that blob rotation keeps the plasma blob charge separation at an angular position with respect to the vertical direction, which gives rise to a radial electric field. Plasma blobs with high electron temperature gradients can align the charge separation almost in the radial direction. The influence of high ion temperature and its gradient is also presented.

  17. Psychophysical Measurements of Luminance Contrast Sensitivity and Color Discrimination with Transparent and Blue-Light Filter Intraocular Lenses.

    PubMed

    da Costa, Marcelo Fernandes; Júnior, Augusto Paranhos; Lottenberg, Claudio Luiz; Castro, Leonardo Cunha; Ventura, Dora Fix

    2017-12-01

    The purpose of this study was to measure luminance contrast sensitivity and color vision thresholds in normal subjects using a blue-light filter lens and a transparent intraocular lens material. Monocular luminance grating contrast sensitivity was measured with Psycho for Windows (version 2.36; Cambridge Research Systems) at 3.0, 6.0, 12.0, 20.0, and 30.0 cycles per degree of visual angle (cpd) in 15 normal subjects (eight female), with a mean age of 21.6 years (SD = 3.8 years). Chromatic discrimination was assessed with the Cambridge colour test (CCT) along the protan, deutan, and tritan color confusion axes. Both tests were performed in a darkened room under two conditions: with a transparent lens and with a blue-light filter lens. Subjects also reported their subjective impressions of their visual experience under both conditions. No difference was found between the luminance contrast sensitivity measured with the transparent and the blue-light filter lenses. However, 13/15 (87%) of the subjects reported more comfortable vision with the blue filter. In the color vision test, tritan thresholds were significantly higher for the blue filter compared with the transparent filter (p = 0.003). For protan and deutan thresholds, no differences were found. Blue-yellow color vision is impaired with the blue-light filter, and no impairment occurs with the transparent filter. No significant differences were found in the luminance contrast sensitivity comparing the blue-light and transparent filters. The impact of short-wavelength light filtering on intrinsically photosensitive retinal ganglion cells is also discussed.

  18. Visual Object Recognition and Tracking of Tools

    NASA Technical Reports Server (NTRS)

    English, James; Chang, Chu-Yin; Tardella, Neil

    2011-01-01

    A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply this at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided from videos. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge-detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach is used for condensing synthetic images using an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that the initial conditions exist allows this module to make use of a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images. In this approach, a function of orientation, distance, and articulation is defined as a metric on the difference between the captured image and a synthetic image with an object in the given orientation, distance, and articulation. The synthetic image is created using a model that is looked up in an object-model database. A composable software architecture is used for implementation. Video is first preprocessed to remove sensor anomalies (like dead pixels), and then is processed sequentially by a prioritized list of tracker-identifiers.
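
    The thresholding-plus-blob-grouping stage described above can be sketched with scipy.ndimage as follows. The synthetic image, the global threshold, and the per-blob statistics reported here are illustrative stand-ins; the edge-detection and template-matching stages are omitted.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(5)
        image = rng.normal(0.2, 0.05, size=(128, 128))
        image[30:50, 40:90] += 0.6        # a bright "tool"
        image[80:100, 20:35] += 0.5       # another one

        mask = image > 0.5                          # global intensity threshold
        labels, n_blobs = ndimage.label(mask)       # group foreground pixels into blobs
        sizes = ndimage.sum(mask, labels, index=range(1, n_blobs + 1))
        centroids = ndimage.center_of_mass(mask, labels, index=range(1, n_blobs + 1))

        for i, (size, c) in enumerate(zip(sizes, centroids), start=1):
            print(f"blob {i}: {int(size)} pixels, centroid {c}")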

  19. Estimation of the center frequency of the highest modulation filter.

    PubMed

    Moore, Brian C J; Füllgrabe, Christian; Sek, Aleksander

    2009-02-01

    For high-frequency sinusoidal carriers, the threshold for detecting sinusoidal amplitude modulation increases when the signal modulation frequency increases above about 120 Hz. Using the concept of a modulation filter bank, this effect might be explained by (1) a decreasing sensitivity or greater internal noise for modulation filters with center frequencies above 120 Hz; and (2) a limited span of center frequencies of the modulation filters, the top filter being tuned to about 120 Hz. The second possibility was tested by measuring modulation masking in forward masking using an 8 kHz sinusoidal carrier. The signal modulation frequency was 80, 120, or 180 Hz and the masker modulation frequencies covered a range above and below each signal frequency. Four highly trained listeners were tested. For the 80-Hz signal, the signal threshold was usually maximal when the masker frequency equaled the signal frequency. For the 180-Hz signal, the signal threshold was maximal when the masker frequency was below the signal frequency. For the 120-Hz signal, two listeners showed the former pattern, and two showed the latter pattern. The results support the idea that the highest modulation filter has a center frequency in the range 100-120 Hz.

  20. Size, shape, and diffusivity of a single Debye-Hückel polyelectrolyte chain in solution.

    PubMed

    Soysa, W Chamath; Dünweg, B; Prakash, J Ravi

    2015-08-14

    Brownian dynamics simulations of a coarse-grained bead-spring chain model, with Debye-Hückel electrostatic interactions between the beads, are used to determine the root-mean-square end-to-end vector, the radius of gyration, and various shape functions (defined in terms of eigenvalues of the radius of gyration tensor) of a weakly charged polyelectrolyte chain in solution, in the limit of low polymer concentration. The long-time diffusivity is calculated from the mean square displacement of the centre of mass of the chain, with hydrodynamic interactions taken into account through the incorporation of the Rotne-Prager-Yamakawa tensor. Simulation results are interpreted in the light of the Odijk, Skolnick, Fixman, Khokhlov, and Khachaturian blob scaling theory (Everaers et al., Eur. Phys. J. E 8, 3 (2002)), which predicts that all solution properties are determined by just two scaling variables: the number of electrostatic blobs X and the reduced Debye screening length Y. We identify three broad regimes, the ideal chain regime at small values of Y, the blob-pole regime at large values of Y, and the crossover regime at intermediate values of Y, within which the mean size, shape, and diffusivity exhibit characteristic behaviours. In particular, when simulation results are recast in terms of blob scaling variables, universal behaviour independent of the choice of bead-spring chain parameters, and the number of blobs X, is observed in the ideal chain regime and in much of the crossover regime, while the existence of logarithmic corrections to scaling in the blob-pole regime leads to non-universal behaviour.

  1. Foveated model observers to predict human performance in 3D images

    NASA Astrophysics Data System (ADS)

    Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.

    2017-03-01

    We evaluate whether 3D search requires model observers that take into account peripheral human visual processing (foveated models) to predict human observer performance. We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other one was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
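
    For concreteness, the snippet below implements the simplest of the listed observers, a non-prewhitening (NPW) matched filter, on synthetic white-noise images with a Gaussian-blob signal. The backgrounds, signal parameters, and detectability estimate are toy stand-ins, not the tomosynthesis-like images or tasks used in the study.

        import numpy as np

        rng = np.random.default_rng(6)

        def gaussian_blob(size=64, sigma=4.0, amplitude=1.0):
            y, x = np.mgrid[:size, :size] - size // 2
            return amplitude * np.exp(-(x**2 + y**2) / (2 * sigma**2))

        signal = gaussian_blob(amplitude=0.3)

        def npw_statistic(image, template=signal):
            # NPW test statistic: dot product of the known template with the image.
            return float(np.sum(image * template))

        # Signal-present vs signal-absent trials in white noise.
        absent = [npw_statistic(rng.normal(0, 1, (64, 64))) for _ in range(200)]
        present = [npw_statistic(rng.normal(0, 1, (64, 64)) + signal) for _ in range(200)]
        d_prime = (np.mean(present) - np.mean(absent)) / np.sqrt(
            0.5 * (np.var(present) + np.var(absent)))
        print(f"NPW detectability d' = {d_prime:.2f}")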

  2. Observational Evidence for the Associated Formation of Blobs and Raining Inflows in the Solar Corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Diaz, E.; Rouillard, A. P.; Lavraud, B.

    The origin of the slow solar wind is still a topic of much debate. The continual emergence of small transient structures from helmet streamers is thought to constitute one of the main sources of the slow wind. Determining the height at which these transients are released is an important factor in determining the conditions under which the slow solar wind forms. To this end, we have carried out a multipoint analysis of small transient structures released from a north–south tilted helmet streamer into the slow solar wind over a broad range of position angles during Carrington Rotation 2137. Combining the remote-sensing observations taken by the Solar-TErrestrial RElations Observatory (STEREO) mission with coronagraphic observations from the SOlar and Heliospheric Observatory (SOHO) spacecraft, we show that the release of such small transient structures (often called blobs), which subsequently move away from the Sun, is associated with the concomitant formation of transient structures collapsing back toward the Sun; the latter have been referred to by previous authors as “raining inflows.” This is the first direct association between outflowing blobs and raining inflows, which locates the formation of blobs above the helmet streamers and gives strong support that the blobs are released by magnetic reconnection.

  3. Gyrokinetic simulation of edge blobs and divertor heat-load footprint

    NASA Astrophysics Data System (ADS)

    Chang, C. S.; Ku, S.; Hager, R.; Churchill, M.; D'Azevedo, E.; Worley, P.

    2015-11-01

    Gyrokinetic study of the divertor heat-load width Lq has been performed using the edge gyrokinetic code XGC1. Both neoclassical and electrostatic turbulence physics are self-consistently included in the simulation with a fully nonlinear Fokker-Planck collision operator and neutral recycling. Gyrokinetic ions and drift-kinetic electrons constitute the plasma in realistic magnetic separatrix geometry. The electron density fluctuations from nonlinear turbulence form blobs, as similarly seen in the experiments. DIII-D and NSTX geometries have been used to represent today's conventional and tight-aspect-ratio tokamaks. XGC1 shows that the ion neoclassical orbit dynamics dominates over the blob physics in setting Lq in the sample DIII-D and NSTX plasmas, re-discovering the experimentally observed 1/Ip-type scaling. The magnitude of Lq is in the right ballpark, too, in comparison with experimental data. However, in an ITER standard plasma, XGC1 shows that the negligible neoclassical orbit excursion effect makes the blob dynamics dominate Lq. Unlike the Lq of about 1 mm (when mapped back to the outboard midplane) predicted by simple-minded extrapolation from present-day data, XGC1 shows that Lq in ITER is about 1 cm, which is somewhat smaller than the average blob size. Supported by US DOE and the INCITE program.

  4. Local Circuits of V1 Layer 4B Neurons Projecting to V2 Thick Stripes Define Distinct Cell Classes and Avoid Cytochrome Oxidase Blobs

    PubMed Central

    Yarch, Jeff; Federer, Frederick

    2017-01-01

    Decades of anatomical studies on the primate primary visual cortex (V1) have led to a detailed diagram of V1 intrinsic circuitry, but this diagram lacks information about the output targets of V1 cells. Understanding how V1 local processing relates to downstream processing requires identification of neuronal populations defined by their output targets. In primates, V1 layers (L)2/3 and 4B send segregated projections to distinct cytochrome oxidase (CO) stripes in area V2: neurons in CO blob columns project to thin stripes while neurons outside blob columns project to thick and pale stripes, suggesting functional specialization of V1-to-V2 CO streams. However, the conventional diagram of V1 shows all L4B neurons, regardless of their soma location in blob or interblob columns, as projecting selectively to CO blobs in L2/3, suggesting convergence of blob/interblob information in L2/3 blobs and, possibly, some V2 stripes. However, it is unclear whether all L4B projection neurons show similar local circuitries. Using viral-mediated circuit tracing, we have identified the local circuits of L4B neurons projecting to V2 thick stripes in macaque. Consistent with previous studies, we found the somata of this L4B subpopulation to reside predominantly outside blob columns; however, unlike previous descriptions of local L4B circuits, these cells consistently projected outside CO blob columns in all layers. Thus, the local circuits of these L4B output neurons, just like their extrinsic projections to V2, preserve CO streams. Moreover, the intra-V1 laminar patterns of axonal projections identify two distinct neuron classes within this L4B subpopulation, including a rare novel neuron type, suggestive of two functionally specialized output channels. SIGNIFICANCE STATEMENT Conventional diagrams of primate primary visual cortex (V1) depict neuronal connections within and between different V1 layers, but lack information about the cells' downstream targets. This information is critical to understanding how local processing in V1 relates to downstream processing. We have identified the local circuits of a population of cells in V1 layer (L)4B that project to area V2. These cells' local circuits differ from classical descriptions of L4B circuits in both the laminar and functional compartments targeted by their axons, and identify two neuron classes. Our results demonstrate that both local intra-V1 and extrinsic V1-to-V2 connections of L4B neurons preserve CO-stream segregation, suggesting that across-stream integration occurs downstream of V1, and that output targets dictate local V1 circuitry. PMID:28077720

  5. Motion streaks in fast motion rivalry cause orientation-selective suppression.

    PubMed

    Apthorp, Deborah; Wenderoth, Peter; Alais, David

    2009-05-14

    We studied binocular rivalry between orthogonally translating arrays of random Gaussian blobs and measured the strength of rivalry suppression for static oriented probes. Suppression depth was quantified by expressing monocular probe thresholds during dominance relative to thresholds during suppression. Rivalry between two fast motions or two slow motions was compared in order to test the suggestion that fast-moving objects leave oriented "motion streaks" due to temporal integration (W. S. Geisler, 1999). If fast motions do produce motion streaks, then fast motion rivalry might also entail rivalry between the orthogonal streak orientations. We tested this using a static oriented probe that was aligned either parallel to the motion trajectory (hence collinear with the "streaks") or was orthogonal to the trajectory, predicting that rivalry suppression would be greater for parallel probes, and only for rivalry between fast motions. Results confirmed that suppression depth did depend on probe orientation for fast motion but not for slow motion. Further experiments showed that threshold elevations for the oriented probe during suppression exhibited clear orientation tuning. However, orientation-tuned elevations were also present during dominance, suggesting within-channel masking as the basis of the extra-deep suppression. In sum, the presence of orientation-dependent suppression in fast motion rivalry is consistent with the "motion streaks" hypothesis.

  6. Fast detection of the main anatomical structures in digital retinal images based on intra- and inter-structure relational knowledge.

    PubMed

    Molina-Casado, José M; Carmona, Enrique J; García-Feijoó, Julián

    2017-10-01

    The anatomical structure detection in retinal images is an open problem. However, most of the works in the related literature are oriented to the detection of each structure individually or assume the previous detection of a structure which is used as a reference. The objective of this paper is to obtain simultaneous detection of the main retinal structures (optic disc, macula, network of vessels and vascular bundle) in a fast and robust way. We propose a new methodology oriented to accomplish the mentioned objective. It consists of two stages. In an initial stage, a set of operators is applied to the retinal image. Each operator uses intra-structure relational knowledge in order to produce a set of candidate blobs that belongs to the desired structure. In a second stage, a set of tuples is created, each of which contains a different combination of the candidate blobs. Next, filtering operators, using inter-structure relational knowledge, are used in order to find the winner tuple. A method using template matching and mathematical morphology is implemented following the proposed methodology. A success is achieved if the distance between the automatically detected blob center and the actual structure center is less than or equal to one optic disc radius. The success rates obtained in the different public databases analyzed were: MESSIDOR (99.33%, 98.58%, 97.92%), DIARETDB1 (96.63%, 100%, 97.75%), DRIONS (100%, n/a, 100%) and ONHSD (100%, 98.85%, 97.70%) for optic disc (OD), macula (M) and vascular bundle (VB), respectively. Finally, the overall success rate obtained in this study for each structure was: 99.26% (OD), 98.69% (M) and 98.95% (VB). The average time of processing per image was 4.16 ± 0.72 s. The main advantage of the use of inter-structure relational knowledge was the reduction of the number of false positives in the detection process. The implemented method is able to simultaneously detect four structures. It is fast, robust and its detection results are competitive in relation to other methods of the recent literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Detection of longitudinal ulcer using roughness value for computer aided diagnosis of Crohn's disease

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku

    2011-03-01

    The purpose of this paper is to present a new method to detect ulcers, which are one of the symptoms of Crohn's disease, from CT images. Crohn's disease is an inflammatory disease of the digestive tract. Crohn's disease commonly affects the small intestine. An optical or a capsule endoscope is used for small intestine examinations. However, these endoscopes cannot pass through intestinal stenosis parts in some cases. A CT image based diagnosis allows a physician to observe the whole intestine even if intestinal stenosis exists. However, because of the complicated shape of the small and large intestines, understanding the shapes of the intestines and the lesion positions is difficult in CT image based diagnosis. A computer-aided diagnosis system for Crohn's disease with automated lesion detection is required for efficient diagnosis. We propose an automated method to detect ulcers from CT images. Longitudinal ulcers roughen the surface of the small and large intestinal wall. The rough surface consists of a combination of convex and concave parts on the intestinal wall. We detect convex and concave parts on the intestinal wall by blob and inverse-blob structure enhancement filters. Many convex and concave parts concentrate in the roughened regions. We introduce a roughness value to differentiate convex and concave parts concentrated in the roughened regions from the others on the intestinal wall. The roughness value effectively reduces false positives of ulcer detection. Experimental results showed that the proposed method can detect convex and concave parts on the ulcers.

  8. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a much more accurate region of each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on the volume ratio and the eigenvector of the Hessian, which are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method by using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
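
    A generic 2D Hessian-based blobness measure, in the spirit of the BSE filter mentioned above (though not its exact 3D formulation), can be sketched with scikit-image as follows; the test image and scale are illustrative.

        import numpy as np
        from skimage.feature import hessian_matrix, hessian_matrix_eigvals

        # Bright, roughly isotropic blobs give two large negative Hessian
        # eigenvalues of similar magnitude at the matching scale.
        rng = np.random.default_rng(7)
        y, x = np.mgrid[:96, :96]
        image = np.exp(-((x - 48) ** 2 + (y - 48) ** 2) / (2 * 4.0 ** 2))  # one blob
        image += 0.05 * rng.standard_normal(image.shape)

        H = hessian_matrix(image, sigma=4.0)
        l1, l2 = hessian_matrix_eigvals(H)          # l1 >= l2 at every pixel

        blobness = np.where((l1 < 0) & (l2 < 0), l1 * l2, 0.0)  # large for bright blobs
        print("peak blobness at", np.unravel_index(np.argmax(blobness), blobness.shape))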

  9. Gradual Streamer Expansions and the Relationship between Blobs and Inflows

    NASA Astrophysics Data System (ADS)

    Wang, Y.-M.; Hess, P.

    2018-06-01

    Coronal helmet streamers show a continual tendency to expand outward and pinch off, giving rise to flux ropes that are observed in white light as “blobs” propagating outward along the heliospheric current/plasma sheet. The blobs form within the r ∼ 2–6 R⊙ heliocentric range of the Large Angle and Spectrometric Coronagraph (LASCO) C2 instrument, but the expected inward-moving counterparts are often not detected. Here we show that the height of blob formation varies as a function of the underlying photospheric field, with the helmet streamer loops expanding to greater heights when active regions (ARs) emerge underneath them. When the pinch-offs occur at r ∼ 3–4 R⊙, diverging inward/outward tracks sometimes appear in height–time maps constructed from LASCO C2 running-difference images. When the underlying photospheric field is weak, the blobs form closer to the inner edge of the C2 field of view and only the outward tracks are clearly visible. Conversely, when the emergence of large ARs leads to a strengthening of the outer coronal field and an increase in the total white-light radiance (as during late 2014), the expanding helmet-streamer loops pinch off beyond r ∼ 4 R⊙, triggering strong inflow streams whose outgoing counterparts are usually very faint. We deduce that the visibility of the blobs and inflows depends on the amount of material that the diverging components sweep up within the 2–6 R⊙ field of view. We also note that the rate of blob production tends to increase when a helmet streamer is “activated” by underlying flux emergence.

  10. Hot gas, cold gas and sub-haloes in a Lyman α blob at redshift 2.38

    NASA Astrophysics Data System (ADS)

    Francis, Paul. J.; Dopita, Michael A.; Colbert, James W.; Palunas, Povilas; Scarlata, Claudia; Teplitz, Harry; Williger, Gerard M.; Woodgate, Bruce E.

    2013-01-01

    We present integral field spectroscopy of a Lyman α blob at redshift 2.38, with a spectral resolution three times better than previously published work. As with previous observations, the blob has a chaotic velocity structure, much of which breaks up into multiple components. Our spectroscopy shows, however, that some of these multiple components are extremely narrow: they have velocity widths of less than 100 km s⁻¹. Combining these new data with previous observations, we argue that this Lyman α blob resides in a dark matter halo of around 10¹³ M⊙. At the centre of this halo are two compact red massive galaxies. They are surrounded by hot gas, probably a superwind from merger-induced nuclear starbursts. This hot gas has shut down star formation in the non-nuclear region of these galaxies, leading to their red-and-dead colours. A filament or lump of infalling cold gas is colliding with the hot gas phase and being shocked to high temperatures, while still around 30 kpc from the red galaxies. The shock region is self-absorbed in Lyman α but produces C iv emission. Further out still, the cold gas in a number of sub-haloes is being lit up, most likely by a combination of tidally triggered star formation, bow shocks as they plough through the hot halo medium, resonant scattering of Lyman α from the filament collision and tidal stripping of gas which enhances the Lyman α escape fraction. The observed Lyman α emission from the blob is dominated by the sum of the emission from these sub-haloes. On statistical grounds, we argue that Lyman α blobs are not greatly elongated in shape and that most are not powered by ionization or scattering from a central active galactic nucleus or starburst.

  11. Blob Formation and Ejection in Coronal Jets due to the Plasmoid and Kelvin–Helmholtz Instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ni, Lei; Lin, Jun; Zhang, Qing-Min

    2017-05-20

    We perform 2D resistive magnetohydrodynamic simulations of coronal jets driven by flux emergence along the lower boundary. The reconnection layers are susceptible to the formation of blobs that are ejected in the jet. Our simulation with low plasma β (Case I) shows that magnetic islands form easily and propagate upward in the jet. These islands are multithermal and thus are predicted to show up in hot channels (335 Å and 211 Å) and the cool channel (304 Å) in observations by the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory. The islands have maximum temperatures of 8 MK, lifetimes of 120 s, diameters of 6 Mm, and velocities of 200 km s⁻¹. These parameters are similar to the properties of blobs observed in extreme-ultraviolet (EUV) jets by AIA. The Kelvin–Helmholtz instability develops in our simulation with moderately high plasma β (Case II) and leads to the formation of bright vortex-like blobs above the multiple high magnetosonic Mach number regions that appear along the jet. These vortex-like blobs can also be identified in the AIA channels. However, they eventually move downward and disappear after the high magnetosonic Mach number regions disappear. In the lower plasma β case, the lifetime for the jet is shorter, the jet and magnetic islands are formed with higher velocities and temperatures, the current-sheet fragments are more chaotic, and more magnetic islands are generated. Our results show that the plasmoid instability and Kelvin–Helmholtz instability along the jet are both possible causes of the formation of blobs observed at EUV wavelengths.

  12. Minimum Energy-Variance Filters for the detection of compact sources in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Herranz, D.; Sanz, J. L.; López-Caniego, M.; González-Nuevo, J.

    2006-10-01

    In this paper we address the common problem of the detection and identification of compact sources, such as stars or distant galaxies, in astronomical images. The common approach, which consists of applying a matched filter to the data to remove noise and then searching for intensity peaks above a certain detection threshold, does not work well when the sources to be detected appear in large numbers over small regions of the sky, owing to source overlapping and interference among the filtered profiles of the sources. A new class of filters that balances noise removal against spatial concentration of the signal is introduced and then applied to simulated astronomical images of the sky at 857 GHz. We show that with the new filter it is possible to improve the ratio between true detections and false alarms with respect to the matched filter. For low detection thresholds, the improvement is ~40%.
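
    To make the baseline concrete, the following Python sketch applies the standard matched-filter-plus-threshold approach to a toy image: a Gaussian-profile point source in white noise is correlated with the source profile and local maxima above an n-sigma cut are reported. The PSF width, source amplitude and 5-sigma threshold are illustrative assumptions, not values from the paper.

      # Minimal sketch (not the paper's filter): matched filtering of a synthetic
      # image with a Gaussian-profile point source in white noise, followed by
      # peak detection above an n-sigma threshold. psf_sigma, the source
      # amplitude, and the 5-sigma cut are illustrative assumptions.
      import numpy as np
      from scipy.ndimage import gaussian_filter, maximum_filter

      rng = np.random.default_rng(0)
      psf_sigma = 2.0                                   # assumed source width (pixels)
      image = np.zeros((256, 256))
      image[64, 64] = 100.0                             # a compact source
      image = gaussian_filter(image, psf_sigma)         # give it the instrument PSF
      image += rng.normal(0.0, 1.0, image.shape)        # white instrumental noise

      # For white noise, the matched filter is correlation with the source profile.
      filtered = gaussian_filter(image, psf_sigma)

      # Detect local maxima exceeding a threshold set from the filtered map's scatter.
      threshold = 5.0 * filtered.std()
      peaks = (filtered == maximum_filter(filtered, size=5)) & (filtered > threshold)
      print("detections at:", np.argwhere(peaks))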

  13. Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming

    NASA Astrophysics Data System (ADS)

    Pawlowski, P. R.; Polydoros, A.

    A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization is then choosing thresholds to maximize performance based upon the receiver's degree of knowledge about the jammer ('side-information'). Four levels of side-information are considered, ranging from none to complete. The latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules which aid threshold selection. Performance results, which show that optimum distributions for the jammer power over the total FH bandwidth exist, are presented.

  14. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.

  15. Attenuating Photostress and Glare Disability in Pseudophakic Patients through the Addition of a Short-Wave Absorbing Filter.

    PubMed

    Hammond, Billy R

    2015-01-01

    To evaluate the effects of filtering short wavelength light on visual performance under intense light conditions among pseudophakic patients previously implanted with a clear intraocular lens (IOL). This was a patient-masked, randomized crossover study conducted at 6 clinical sites in the United States between September 2013 and January 2014. One hundred fifty-four bilaterally pseudophakic patients were recruited. Photostress recovery time and glare disability thresholds were measured with clip-on blue-light-filtering and placebo (clear; no blue-light filtration) glasses worn over patients' habitual correction. Photostress recovery time was quantified as the time necessary to regain sight of a grating target after intense light exposure. Glare disability threshold was assessed as the intensity of a white-light annulus necessary to obscure a central target. The order of filter used and test eye were randomized across patients. Photostress recovery time and glare disability thresholds were significantly improved (both P < 0.0001) when patients used blue-light-filtering glasses compared with clear, nonfiltering glasses. Compared with a nonfiltering placebo, adding a clip-on blue-absorbing filter to the glasses of pseudophakic patients implanted with clear IOLs significantly increased their ability to cope with glare and to recover normal viewing after an intensive photostress. This result implies that IOL designs with blue-light-filtering characteristics may be beneficial under intense light conditions.

  16. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    DOE PAGES

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; ...

    2016-06-01

    A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in medical images. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
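
    The three-step decomposition described above can be illustrated with a compact, single-node toy example (it is not the authors' parallel implementation): threshold a frame to mark feature cells, group the cells into blobs with connected-component labelling, and associate blobs in consecutive frames by spatial overlap. The synthetic frames and the threshold value below are assumptions.

      # Toy sketch of the three steps above (not the authors' parallel code).
      import numpy as np
      from scipy.ndimage import label

      def detect_blobs(frame, threshold):
          """Steps 1-2: mark feature cells and group them into labelled blobs."""
          labels, n_blobs = label(frame > threshold)
          return labels, n_blobs

      def track_by_overlap(labels_prev, labels_curr):
          """Step 3: pair blobs in consecutive frames that overlap in space."""
          overlap = (labels_prev > 0) & (labels_curr > 0)
          return set(zip(labels_prev[overlap].tolist(), labels_curr[overlap].tolist()))

      rng = np.random.default_rng(1)
      f0 = rng.random((128, 128)); f0[30:40, 30:40] += 2.0   # a synthetic blob
      f1 = rng.random((128, 128)); f1[33:43, 32:42] += 2.0   # same blob, slightly moved
      l0, _ = detect_blobs(f0, 1.5)
      l1, _ = detect_blobs(f1, 1.5)
      print(track_by_overlap(l0, l1))                        # e.g. {(1, 1)}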

  17. SOL Thermal Instability due to Radial Blob Convection

    NASA Astrophysics Data System (ADS)

    D'Ippolito, D. A.

    2005-10-01

    C-Mod data [M. Greenwald, Plasma Phys. Contr. Fusion 44, R27 (2002)] suggests a density limit when rapid perpendicular convection dominates SOL heat transport. This is supported by a recent analysis [D.A. Russell et al., Phys. Rev. Lett. 93, 265001 (2004)] of BOUT code turbulence simulations, which shows that rapid outwards convection of plasma by turbulent blobs is enhanced when the X-point collisionality is large, resulting in a synergistic effect between blob convection and X-point cooling. This work motivates the present analysis of SOL thermal equilibrium and instability including an RX-regime model [J.R. Myra and D.A. D'Ippolito, Lodestar Report LRC-05-105 (2005)] of blob particle and heat transport. Two-point (midplane, X-point) SOL thermal equilibrium and stability models are considered, including both two-field (T) and four-field (n, T) treatments. The conditions under which loss of thermal equilibrium or thermal instabilities occur are established, and relations to the C-Mod data are described.

  18. Changes to the hydrography and zooplankton in the northern California Current in response to 'the Blob' of 2014-2015

    NASA Astrophysics Data System (ADS)

    Peterson, W. T.

    2016-02-01

    Fortnightly measurements of hydrography and zooplankton species composition have been sustained along the Newport Hydrographic line since 1996. From this 20-year time series we have established that zooplankton abundance and species composition closely track the phase of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation. During the positive (warm) phase of the PDO, a warm-water 'southern' subtropical coastal community is found, whereas during the negative (cold) phase a cold-water 'northern' coastal community dominates. The Blob, however, was a rule-changer. The Blob began to move slowly ashore at Newport on 14 September 2014 with the seasonal relaxation of upwelling, and within 5 h SST increased by 6°C to 19.4°C. On the 25 and 30 September cruises, copepod species richness increased as well, with anomalies of 2 and 9 species, respectively, above the 20-year climatology for September. We continued to monitor the plankton throughout autumn 2014 and the winter, spring and summer of 2015 and found a total of seventeen copepod species that were either new to Oregon or had occurred only rarely in the past. Many of these species are oceanic with sub-tropical or tropical affinities and are thus indicators of tropical waters, suggesting that the Blob water which came ashore in central Oregon had its origins offshore rather than in coastal waters to the south. Some of the copepod species that were new or rarely seen included Subeucalanus crassus, Eucalanus hyalinus, Mecynocera clausi, Calocalanus pavo, Centropages bradyii, Pleuromamma borealis and P. xiphias. Krill biomass was the lowest in our 20-year time series. The southern California Current neritic krill species Nyctiphanes simplex appears off Oregon during major El Niño events (1983, 1998), but none were seen during the Blob event, which again suggests that the origin of the Blob water that appeared off Oregon was far offshore, not in coastal waters to the south. Species richness during the Blob period was greater than that observed during the 1997-98 El Niño and the lesser El Niño events of 2003-2005 and 2009-10.

  19. Water ring-bouncing on repellent singularities.

    PubMed

    Chantelot, Pierre; Mazloomi Moqaddam, Ali; Gauthier, Anaïs; Chikatamarla, Shyam S; Clanet, Christophe; Karlin, Ilya V; Quéré, David

    2018-03-28

    Texturing a flat superhydrophobic substrate with point-like superhydrophobic macrotextures of the same repellency makes impacting water droplets take off as rings, which leads to shorter bouncing times than on a flat substrate. We investigate the contact time reduction on such elementary macrotextures through experiment and simulations. We understand the observations by decomposing the impacting drop reshaped by the defect into sub-units (or blobs) whose size is fixed by the liquid ring width. We test the blob picture by looking at the reduction of contact time for off-centered impacts and for impacts in grooves that produce liquid ribbons where the blob size is fixed by the width of the channel.

  20. Change Detection via Selective Guided Contrasting Filters

    NASA Astrophysics Data System (ADS)

    Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.

    2017-05-01

    A change detection scheme based on guided contrasting was previously proposed. A guided contrasting filter takes two images (test and sample) as input and forms its output as a filtered version of the test image. Such a filter preserves the similar details and smooths the non-similar details of the test image with respect to the sample image. Because of this, the difference between the test image and its filtered version (the difference map) can serve as a basis for robust change detection. Guided contrasting is performed in two steps: in the first step, a smoothing operator (SO) is applied to eliminate the details of the test image; in the second step, all matched details are restored with local contrast proportional to the value of a local similarity coefficient (LSC). The original guided contrasting filter was based on local average smoothing as the SO and local linear correlation as the LSC. In this paper we propose and implement a new set of selective guided contrasting filters based on different combinations of various SOs and thresholded LSCs. Linear average and Gaussian smoothing, nonlinear median filtering, and morphological opening and closing are considered as SOs. The local linear correlation coefficient, morphological correlation coefficient (MCC), mutual information, mean-square MCC and geometrical correlation coefficients are applied as LSCs. Thresholding the LSC allows operating with non-normalized LSCs and enhances the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These selective guided contrasting filters are tested as part of the previously proposed change detection pipeline, which contains the following stages: guided contrasting filtering on an image pyramid, calculation of the difference map, binarization, extraction of change proposals, and testing of change proposals using local MCC. Experiments on real and simulated image bases demonstrate the applicability of all proposed selective guided contrasting filters. All implemented filters are robust to weak geometrical discrepancies between the compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection results.
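
    As a rough illustration of the filtering scheme (not the authors' implementation), the sketch below uses local-mean smoothing as the SO and a windowed correlation coefficient as the LSC, restoring details only where the thresholded LSC indicates a match; the window size, threshold and function names are illustrative assumptions.

      # Rough sketch under simplifying assumptions: uniform-mean smoothing as the SO,
      # windowed Pearson correlation as the LSC, and a fixed threshold.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_corr(a, b, size=7):
          """Windowed correlation coefficient between two images."""
          ma, mb = uniform_filter(a, size), uniform_filter(b, size)
          cov = uniform_filter(a * b, size) - ma * mb
          va = uniform_filter(a * a, size) - ma ** 2
          vb = uniform_filter(b * b, size) - mb ** 2
          return cov / np.sqrt(np.clip(va * vb, 1e-12, None))

      def selective_guided_contrasting(test, sample, size=7, thr=0.5):
          smoothed = uniform_filter(test, size)        # smoothing operator (SO)
          lsc = local_corr(test, sample, size)         # local similarity coefficient (LSC)
          keep = lsc > thr                             # thresholded LSC: all-or-nothing recovery
          return np.where(keep, test, smoothed)        # restore details only where matched

      # The difference map between the test image and its filtered version then
      # highlights changed regions.
      rng = np.random.default_rng(2)
      sample = rng.random((64, 64))
      test = sample + rng.normal(0, 0.02, sample.shape)
      test[20:30, 20:30] = 2.0 * rng.random((10, 10))  # a changed, non-matching region
      diff = np.abs(test - selective_guided_contrasting(test, sample))
      print(float(diff[20:30, 20:30].mean()), float(diff[:10, :10].mean()))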

  1. Gas kinematics of Lyman Alpha Blobs at z=2-3

    NASA Astrophysics Data System (ADS)

    Yang, Yujin

    2015-08-01

    High-redshift Lyman alpha nebulae (Ly-alpha "blobs", LABs) are the site of massive galaxy formation and their early interaction with the intergalactic medium. Research in the past decade has struggled to make progress on the question of what powers these huge Ly-alpha halos and whether the Ly-alpha-emitting gas is outflowing or infalling. First, I will present our optical and NIR spectroscopic observations of the Ly-alpha line and the redshifted nebular emission lines such as [OII], [OIII] and Halpha. Using three independent measures --- the velocity offset between the Ly-alpha line and the nonresonant [O III] or Halpha line, the offset of stacked interstellar metal absorption lines, and the spectrally resolved [O III] line profile --- we study the kinematics of gas along the line of sight to galaxies within each blob center. All these kinematic measures show that there are only weak outflows, therefore excluding gas inflows and extreme hyper/superwinds as a source of the extended Ly-alpha emission. I will also present the first detection of molecular gas from a Ly-alpha blob and our ongoing effort to characterize the physical conditions of its ISM. Large velocity gradient (LVG) modeling using PdBI observations of the CO(3-2), CO(5-4), CO(7-6) and CI(2-1) lines suggests that a two-phase medium is required to explain the blob's CO SEDs and dust continuum.

  2. Reprint of “Non-causal spike filtering improves decoding of movement intention for intracortical BCIs”

    PubMed Central

    Masse, Nicolas Y.; Jarosiewicz, Beata; Simeral, John D.; Bacher, Daniel; Stavisky, Sergey D.; Cash, Sydney S.; Oakley, Erin M.; Berhanu, Etsub; Eskandar, Emad; Friehs, Gerhard; Hochberg, Leigh R.; Donoghue, John P.

    2015-01-01

    Background: Multiple types of neural signals are available for controlling assistive devices through brain–computer interfaces (BCIs). Intracortically recorded spiking neural signals are attractive for BCIs because they can in principle provide greater fidelity of encoded information compared to electrocorticographic (ECoG) signals and electroencephalograms (EEGs). Recent reports show that the information content of these spiking neural signals can be reliably extracted simply by causally band-pass filtering the recorded extracellular voltage signals and then applying a spike detection threshold, without relying on “sorting” action potentials. New method: We show that replacing the causal filter with an equivalent non-causal filter increases the information content extracted from the extracellular spiking signal and improves decoding of intended movement direction. This method can be used for real-time BCI applications by using a 4 ms lag between recording and filtering neural signals. Results: Across 18 sessions from two people with tetraplegia enrolled in the BrainGate2 pilot clinical trial, we found that threshold crossing events extracted using this non-causal filtering method were significantly more informative of each participant’s intended cursor kinematics compared to threshold crossing events derived from causally filtered signals. This new method decreased the mean angular error between the intended and decoded cursor direction by 9.7° for participant S3, who was implanted 5.4 years prior to this study, and by 3.5° for participant T2, who was implanted 3 months prior to this study. PMID:25681017
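
    The causal-versus-non-causal comparison can be sketched as follows. This is an illustrative toy example rather than the study's processing chain: a synthetic extracellular trace is band-pass filtered once causally and once with zero-phase (forward-backward) filtering, and threshold-crossing events are counted with a robust noise-based threshold. The sampling rate, band edges and threshold multiplier are assumptions.

      # Illustrative sketch, not the study's code: causal vs. zero-phase band-pass
      # filtering followed by threshold-crossing detection.
      import numpy as np
      from scipy.signal import butter, sosfilt, sosfiltfilt

      fs = 30000.0                                      # assumed sampling rate (Hz)
      sos = butter(4, [250.0, 7500.0], btype="bandpass", fs=fs, output="sos")

      rng = np.random.default_rng(3)
      voltage = rng.normal(0.0, 5.0, int(fs))           # 1 s of synthetic wideband noise
      voltage[15000:15010] -= 80.0                      # a crude spike-like deflection

      causal = sosfilt(sos, voltage)                    # introduces phase lag
      noncausal = sosfiltfilt(sos, voltage)             # zero-phase (needs a short look-ahead online)

      def threshold_crossings(x, k=4.5):
          thr = -k * np.median(np.abs(x)) / 0.6745      # robust estimate of the noise level
          return np.flatnonzero((x[1:] < thr) & (x[:-1] >= thr))

      print(len(threshold_crossings(causal)), len(threshold_crossings(noncausal)))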

  3. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is based on the assumption that the point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from the point cloud can then be cast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is also utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
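
    The mixture-model idea can be sketched in a few lines. The example below is not the paper's implementation: it fits a two-component Gaussian mixture to a synthetic per-point attribute (a height residual) with EM and labels each point by the component with the larger posterior, so no threshold has to be chosen by hand. The data and component count are assumptions.

      # Minimal sketch of EM-based, threshold-free labelling via a Gaussian mixture.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      ground = rng.normal(0.0, 0.3, 800)        # synthetic ground residuals (m)
      objects = rng.normal(3.0, 1.0, 200)       # synthetic non-ground residuals (m)
      residuals = np.concatenate([ground, objects]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
      posterior = gmm.predict_proba(residuals)
      ground_comp = int(np.argmin(gmm.means_))               # lower-mean component = ground
      is_ground = posterior[:, ground_comp] > 0.5
      print(f"labelled {int(is_ground.sum())} of {len(residuals)} points as ground")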

  4. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

    Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method that provides high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds that limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate the method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. Threshold setting and selection, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps are: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for the different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and the semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.

  5. X-ray emission from the winds of hot stars

    NASA Technical Reports Server (NTRS)

    Lucy, L. B.; White, R. L.

    1980-01-01

    A phenomenological theory is proposed for the structure of the unstable line-driven winds of early-type stars. These winds are conjectured to break up into a population of blobs that are radiatively driven through, and confined by the ram pressure of, an ambient gas that is not itself being radiatively driven. Radiation from the bow shocks preceding the blobs can account for the X-ray luminosity of zeta Puppis. The theory breaks down when used to model the much lower density wind of tau Scorpii, for then the blobs are destroyed by heat conduction from the shocked gas. This effect explains why the profiles of this star's UV resonance lines depart from the classical P Cygni form.

  6. Blob-level active-passive data fusion for Benthic classification

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady

    2012-06-01

    We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.

  7. Structure-driven turbulence in ``No man's Land''

    NASA Astrophysics Data System (ADS)

    Kosuga, Yusuke; Diamond, Patrick

    2012-10-01

    Structures are often observed in many physical systems. In tokamaks, for example, such structures are observed as density blobs and holes. These density blobs and holes are generated at the tokamak edge, where strong gradient perturbations generate an outgoing blob and an incoming hole. Since density holes can propagate from the edge to the core, such structures may play an important role in understanding the phenomenology of the edge-core coupling region, so-called "No Man's Land." In this work, we discuss the dynamics of such structures in real space. In particular, we consider the dynamics of density blobs and holes in the Hasegawa-Wakatani system. Specific questions addressed here include: i) how do these structures extract free energy and enhance transport, and how does the relaxation driven by such structures differ from that driven by linear drift waves? ii) how do these structures interact with shear flows, in particular with a shear layer, which can absorb structures resonantly? iii) how can we calculate the coupled evolution of structures and shear flows? Implications for the edge-core coupling problem are discussed as well.

  8. Vehicle Detection for RCTA/ANS (Autonomous Navigation System)

    NASA Technical Reports Server (NTRS)

    Brennan, Shane; Bajracharya, Max; Matthies, Larry H.; Howard, Andrew B.

    2012-01-01

    Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps: 1. Take as input a left disparity image and left rectified image from JPLV stereo. 2. Project the disparity data onto a two-dimensional Cartesian map. 3. Perform some post-processing of the map built in the previous step in order to clean it up. 4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene. 5. Take these map blobs and reject those that do not meet certain criteria. Build descriptors for the ones that remain, and pass these descriptors on to a classifier, which determines whether the blob is a vehicle or not. The probability of detection is the probability that if a vehicle is present in the image, is visible, and is un-occluded, then it will be detected by the JVD algorithm. In order to estimate this probability, eight sequences from the RCTA (Robotics Collaborative Technology Alliances) program were ground-truthed, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, one is able to find the probability of detection as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best on cars seen from the front, rear, or either side, and to perform poorly on vehicles seen from oblique angles.
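
    Steps 2, 4 and 5 of the pipeline can be illustrated with a small sketch (an independent toy example, not the JVD code): stereo points are accumulated into a 2D overhead count map, dense cells are labelled into map blobs, and blobs are kept only if their footprint is roughly vehicle-sized. The cell size, map extent, count limits and function names are assumptions.

      # Rough sketch of the overhead-map blob idea under stated assumptions.
      import numpy as np
      from scipy.ndimage import label

      def overhead_map(points_xyz, cell=0.25, extent=40.0):
          """Project 3D points (x forward, y left, z up) onto a grid of counts."""
          n = int(2 * extent / cell)
          grid = np.zeros((n, n))
          ix = ((points_xyz[:, 0] + extent) / cell).astype(int)
          iy = ((points_xyz[:, 1] + extent) / cell).astype(int)
          ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
          np.add.at(grid, (ix[ok], iy[ok]), 1)
          return grid

      def vehicle_sized_blobs(grid, min_count=20, min_cells=8, max_cells=400):
          """Label dense regions and keep blobs with a plausible vehicle footprint."""
          blobs, n = label(grid >= min_count)
          sizes = np.bincount(blobs.ravel())[1:]          # cells per blob (skip background)
          return [k + 1 for k, s in enumerate(sizes) if min_cells <= s <= max_cells]

      # Example: a dense cluster of points about 10 m ahead, roughly car-sized.
      rng = np.random.default_rng(5)
      car = rng.normal([10.0, 0.0, 0.8], [1.0, 0.6, 0.3], size=(2000, 3))
      print(vehicle_sized_blobs(overhead_map(car)))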

  9. Equilibrating high-molecular-weight symmetric and miscible polymer blends with hierarchical back-mapping.

    PubMed

    Ohkuma, Takahiro; Kremer, Kurt; Daoulas, Kostas

    2018-05-02

    Understanding properties of polymer alloys with computer simulations frequently requires equilibration of samples comprised of microscopically described long molecules. We present the extension of an efficient hierarchical backmapping strategy, initially developed for homopolymer melts, to equilibrate high-molecular-weight binary blends. These mixtures present significant interest for practical applications and fundamental polymer physics. In our approach, the blend is coarse-grained into models representing polymers as chains of soft blobs. Each blob stands for a subchain with N_b microscopic monomers. A hierarchy of blob-based models with different resolution is obtained by varying N_b. First the model with the largest N_b is used to obtain an equilibrated blend. This configuration is sequentially fine-grained, reinserting at each step the degrees of freedom of the next in the hierarchy blob-based model. Once the blob-based description is sufficiently detailed, the microscopic monomers are reinserted. The hard excluded volume is recovered through a push-off procedure and the sample is re-equilibrated with molecular dynamics (MD), requiring relaxation on the order of the entanglement time. For the initial method development we focus on miscible blends described on microscopic level through a generic bead-spring model, which reproduces hard excluded volume, strong covalent bonds, and realistic liquid density. The blended homopolymers are symmetric with respect to molecular architecture and liquid structure. To parameterize the blob-based models and validate equilibration of backmapped samples, we obtain reference data from independent hybrid simulations combining MD and identity exchange Monte Carlo moves, taking advantage of the symmetry of the blends. The potential of the backmapping strategy is demonstrated by equilibrating blend samples with different degree of miscibility, containing 500 chains with 1000 monomers each. Equilibration is verified by comparing chain conformations and liquid structure in backmapped blends with the reference data. Possible directions for further methodological developments are discussed.

  10. Equilibrating high-molecular-weight symmetric and miscible polymer blends with hierarchical back-mapping

    NASA Astrophysics Data System (ADS)

    Ohkuma, Takahiro; Kremer, Kurt; Daoulas, Kostas

    2018-05-01

    Understanding properties of polymer alloys with computer simulations frequently requires equilibration of samples comprised of microscopically described long molecules. We present the extension of an efficient hierarchical backmapping strategy, initially developed for homopolymer melts, to equilibrate high-molecular-weight binary blends. These mixtures present significant interest for practical applications and fundamental polymer physics. In our approach, the blend is coarse-grained into models representing polymers as chains of soft blobs. Each blob stands for a subchain with N_b microscopic monomers. A hierarchy of blob-based models with different resolution is obtained by varying N_b. First the model with the largest N_b is used to obtain an equilibrated blend. This configuration is sequentially fine-grained, reinserting at each step the degrees of freedom of the next in the hierarchy blob-based model. Once the blob-based description is sufficiently detailed, the microscopic monomers are reinserted. The hard excluded volume is recovered through a push-off procedure and the sample is re-equilibrated with molecular dynamics (MD), requiring relaxation on the order of the entanglement time. For the initial method development we focus on miscible blends described on microscopic level through a generic bead-spring model, which reproduces hard excluded volume, strong covalent bonds, and realistic liquid density. The blended homopolymers are symmetric with respect to molecular architecture and liquid structure. To parameterize the blob-based models and validate equilibration of backmapped samples, we obtain reference data from independent hybrid simulations combining MD and identity exchange Monte Carlo moves, taking advantage of the symmetry of the blends. The potential of the backmapping strategy is demonstrated by equilibrating blend samples with different degree of miscibility, containing 500 chains with 1000 monomers each. Equilibration is verified by comparing chain conformations and liquid structure in backmapped blends with the reference data. Possible directions for further methodological developments are discussed.

  11. Morphological ultrasound types known as 'blob' and 'bagel' signs should be reclassified from suggesting probable to indicating definite tubal ectopic pregnancy.

    PubMed

    Nadim, B; Infante, F; Lu, C; Sathasivam, N; Condous, G

    2018-04-01

    In a recent consensus statement on early pregnancy nomenclature by Barnhart, a definite ectopic pregnancy (EP) was defined morphologically on transvaginal sonography (TVS) as an extrauterine gestational sac with yolk sac and/or embryo, with or without cardiac activity, whilst a probable EP was defined as an inhomogeneous adnexal mass ('blob' sign) or extrauterine sac-like structure ('bagel' sign). This study aims to determine whether these ultrasound markers used to define probable EP can be used to predict a definite tubal EP. This was a retrospective cohort study of women presenting to the Early Pregnancy Unit (EPU) at Nepean Hospital, Sydney, Australia between November 2006 and June 2016. Women classified with a probable EP or a pregnancy of unknown location (PUL), i.e. with no signs of extra- or intrauterine pregnancy (IUP), at their first TVS were included, whilst those with a definite tubal EP, IUP or non-tubal EP were excluded from the final analysis. The gold standard for tubal EP was histological confirmation of chorionic villi in Fallopian tube removed at laparoscopy. The performance of blob or bagel sign on TVS in the prediction of definite tubal EP was evaluated in terms of sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). This was compared with the performance of extrauterine gestational sac with yolk sac and/or embryo on TVS to predict definite tubal EP. During the study period, 7490 consecutive women attended the EPU, of whom 849 were analyzed. At primary TVS, 240/849 were diagnosed with probable EP, of which 174 (72.5%) were classified as blob sign and 66 (27.5%) as bagel sign. The remaining 609/849 were diagnosed with PUL, of which 47 had a final diagnosis of EP (including 24 blob sign, 19 bagel sign and four gestational sac with embryo/yolk sac). 101 of all 198 (51%) blob sign cases and 50 of all 85 (59%) bagel sign cases underwent laparoscopy and salpingectomy; histology proved a tubal EP in 98 (97%) of these blob-sign cases and 48 (96.0%) of the bagel-sign cases. The sensitivity for the blob and bagel signs in the prediction of definite tubal EP was 89.8% and 83.3%, respectively, the specificity was 99.5% and 99.6%, PPV was 96.7% and 95.2% and NPV was 98.3% and 98.6%. This was comparable to the sensitivity of extrauterine gestational sac with yolk sac and/or embryo on TVS in the prediction of definite tubal EP (sensitivity, 84.0%; specificity, 99.9%; PPV, 97.7%; NPV, 99.3% (P = 0.5)). Blob and bagel signs seem to be the most common presentations of a tubal EP on TVS. Although they cannot be considered as a definitive sign of EP, their PPV is very high (> 95%); such women should therefore be considered at very high risk for having a tubal EP and should be treated as such. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd.

  12. ALMA OBSERVATIONS OF Ly α BLOB 1: HALO SUBSTRUCTURE ILLUMINATED FROM WITHIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geach, J. E.; Narayanan, D.; Matsuda, Y.

    2016-11-20

    We present new Atacama Large Millimeter/Submillimeter Array (ALMA) 850 μm continuum observations of the original Ly α Blob (LAB) in the SSA22 field at z = 3.1 (SSA22-LAB01). The ALMA map resolves the previously identified submillimeter source into three components with a total flux density of S_850 = 1.68 ± 0.06 mJy, corresponding to a star-formation rate of ∼150 M⊙ yr⁻¹. The submillimeter sources are associated with several faint (m ≈ 27 mag) rest-frame ultraviolet sources identified in Hubble Space Telescope Imaging Spectrograph (STIS) clear filter imaging (λ ≈ 5850 Å). One of these companions is spectroscopically confirmed with the Keck Multi-Object Spectrometer For Infra-Red Exploration to lie within 20 projected kpc and 250 km s⁻¹ of one of the ALMA components. We postulate that some of these STIS sources represent a population of low-mass star-forming satellites surrounding the central submillimeter sources, potentially contributing to their growth and activity through accretion. Using a high-resolution cosmological zoom simulation of a 10¹³ M⊙ halo at z = 3, including stellar, dust, and Ly α radiative transfer, we can model the ALMA+STIS observations and demonstrate that Ly α photons escaping from the central submillimeter sources are expected to resonantly scatter in neutral hydrogen, the majority of which is predicted to be associated with halo substructure. We show how this process gives rise to extended Ly α emission with similar surface brightness and morphology to observed giant LABs.

  13. Comparison of algorithms for automatic border detection of melanoma in dermoscopy images

    NASA Astrophysics Data System (ADS)

    Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert

    2016-09-01

    Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for the test images was 0.10 using the new algorithm and 0.99 using the SRM method. Comparing the average error values produced by the two algorithms, the average XOR error for our technique is lower than that of the SRM method, implying that the new algorithm detects the borders of melanomas more accurately than the SRM algorithm.
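
    The processing chain described above can be approximated with scikit-image building blocks. The sketch below is a hedged stand-in rather than the authors' tuned implementation; the CLAHE settings, Gaussian sigma, Chan-Vese parameter and structuring-element radii are illustrative assumptions.

      # Hedged sketch of the described pipeline using scikit-image stand-ins.
      import numpy as np
      from skimage import color, exposure, filters, measure, morphology, segmentation

      def detect_border(rgb_image, manual_mask=None):
          # Preprocess: decouple brightness from colour, boost contrast, smooth artifacts.
          luv = color.rgb2luv(rgb_image)
          lightness = exposure.equalize_adapthist(luv[..., 0] / 100.0)   # CLAHE on L channel
          smoothed = filters.gaussian(lightness, sigma=2)

          # Segment with Chan-Vese and clean the lesion mask with a morphological closing.
          # (Depending on contrast polarity the mask may need inverting so the lesion is True.)
          mask = segmentation.chan_vese(smoothed, mu=0.1)
          mask = morphology.binary_closing(mask, morphology.disk(5))

          # Keep the largest blob and dilate it to form the output mask.
          labels = measure.label(mask)
          if labels.max() > 0:
              largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
              mask = morphology.binary_dilation(labels == largest, morphology.disk(3))

          # XOR error against a manually drawn mask, when one is available.
          if manual_mask is None:
              return mask
          xor_error = np.logical_xor(mask, manual_mask).sum() / manual_mask.sum()
          return mask, xor_error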

  14. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    PubMed

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
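
    The multi-scale LoG search can be illustrated directly with skimage.feature.blob_log. The sketch below is not the authors' code: it detects blob candidates around a deliberately off-centre seed point in a synthetic image, prunes them by distance to the seed, and reports the largest candidate's centre and radius estimate; the sigma range, response threshold and pruning radius are assumptions.

      # Illustrative multi-scale LoG blob detection around a seed point.
      import numpy as np
      from skimage.draw import disk
      from skimage.feature import blob_log

      image = np.zeros((128, 128))
      rr, cc = disk((64, 60), 9)
      image[rr, cc] = 1.0                          # synthetic "nodule" of radius 9 px
      seed = (60, 64)                              # deliberately off-centre seed point

      blobs = blob_log(image, min_sigma=2, max_sigma=15, num_sigma=10, threshold=0.05)
      near = [b for b in blobs
              if np.hypot(b[0] - seed[0], b[1] - seed[1]) < 20]   # prune by location
      if near:
          y, x, sigma = max(near, key=lambda b: b[2])             # keep the largest candidate
          print(f"centre=({y:.0f},{x:.0f}), radius~{sigma * np.sqrt(2):.1f} px")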

  15. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE PAGES

    Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...

    2018-04-06

    Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under the Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise that consists of large isolated events with known or unknown distribution, also needs to be explicitly taken into account. However, conventional sparse domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude and non-Gaussian noise, i.e., the erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. The random and erratic noise are distinguished by using a data-adaptive parameter in the presented method, where random noise is described by the mean square, while the erratic noise is downweighted through a damped weight. Different from conventional sparse domain thresholding filters, defining the misfit between noisy data and the recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate the erratic noise without damaging the useful signal when compared with conventional denoising approaches based on the LS criterion.
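
    The Huber-criterion reweighting at the heart of the approach can be sketched with a generic iteratively reweighted least-squares loop. The example below is a minimal illustration, not the authors' seismic filter; the transition point delta, the toy data and the function name are assumptions.

      # Minimal sketch: Huber-criterion fitting via iteratively reweighted LS, so
      # isolated high-amplitude (erratic) residuals receive damped weights.
      import numpy as np

      def huber_irls(A, d, delta=1.0, n_iter=20):
          """Approximately minimise sum(huber(d - A @ m)) over the model m."""
          m = np.linalg.lstsq(A, d, rcond=None)[0]
          for _ in range(n_iter):
              r = d - A @ m
              absr = np.maximum(np.abs(r), 1e-12)
              w = np.where(absr <= delta, 1.0, delta / absr)   # Huber weights
              sw = np.sqrt(w)
              m = np.linalg.lstsq(sw[:, None] * A, sw * d, rcond=None)[0]
          return m

      rng = np.random.default_rng(6)
      A = rng.normal(size=(200, 5))
      m_true = np.arange(1.0, 6.0)
      d = A @ m_true + rng.normal(0, 0.1, 200)
      d[::25] += 50.0                           # isolated erratic events
      print(np.round(huber_irls(A, d), 2))      # close to m_true despite the outliers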

  16. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Qiang; Du, Qizhen; Gong, Xufei

    Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under the Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise that consists of large isolated events with known or unknown distribution, also needs to be explicitly taken into account. However, conventional sparse domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude and non-Gaussian noise, i.e., the erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. The random and erratic noise are distinguished by using a data-adaptive parameter in the presented method, where random noise is described by the mean square, while the erratic noise is downweighted through a damped weight. Different from conventional sparse domain thresholding filters, defining the misfit between noisy data and the recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate the erratic noise without damaging the useful signal when compared with conventional denoising approaches based on the LS criterion.

  17. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    NASA Astrophysics Data System (ADS)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

    An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity but also makes it prone to noise effects such as hot pixels. Similar to earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the temperature effects' uniformity masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot pixels were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
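
    The thresholding and median-filter steps can be sketched directly with the quantities reported above (a 9 DN threshold and a 7 × 7 median filter); the synthetic dark frame and the injected hot pixels in the example below are assumptions made purely for illustration.

      # Hedged sketch: median-filtered reference frame, fixed DN threshold, hot-pixel replacement.
      import numpy as np
      from scipy.ndimage import median_filter

      rng = np.random.default_rng(7)
      dark = rng.poisson(2, (512, 512)).astype(float)          # synthetic dark frame (DN)
      hot_rows = rng.integers(0, 512, 50)
      hot_cols = rng.integers(0, 512, 50)
      dark[hot_rows, hot_cols] += 60.0                         # inject hot pixels

      THRESHOLD_DN = 9                                         # dark-noise threshold (DN)
      reference = median_filter(dark, size=7)                  # 7x7 median as a reference frame
      hot = (dark - reference) > THRESHOLD_DN
      cleaned = np.where(hot, reference, dark)
      print(f"flagged {int(hot.sum())} hot pixels ({100 * hot.mean():.3f}% of the frame)")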

  18. Prediction of load threshold of fibre-reinforced laminated composite panels subjected to low velocity drop-weight impact using efficient data filtering techniques

    NASA Astrophysics Data System (ADS)

    Farooq, Umar; Myler, Peter

    This work is concerned with physical testing of carbon fibrous laminated composite panels under low velocity drop-weight impacts from flat and round nose impactors. Eight-, sixteen-, and twenty-four-ply panels were considered. Non-destructive damage inspections of the tested specimens were conducted to approximate the impact-induced damage. Recorded data were correlated to load-time, load-deflection, and energy-time history plots to interpret the impact-induced damage. Data filtering techniques were also applied to the noisy data that are unavoidably generated due to limitations of the testing and logging systems. Built-in, statistical, and numerical filters effectively predicted load thresholds for the eight- and sixteen-ply laminates. However, flat nose impact of the twenty-four-ply laminates produced clipped data that can only be de-noised using oscillatory algorithms. Filtering and extrapolation of such data have received little attention in the literature and need to be investigated. The present work demonstrates filtering and extrapolation of the clipped data using a Fast Fourier Convolution algorithm to predict load thresholds. Selected results were compared to the damage zones identified with C-scan, and acceptable agreement was observed. Based on the results, it is proposed that applying advanced data filtering and analysis methods to data collected with the available resources effectively enhances data interpretation without resorting to additional resources. The methodology could be useful for efficient and reliable data analysis and impact-induced damage prediction in similar cases.

  19. Examining the interactive effects of oceanographic and anthropogenic influences with the SST anomaly, or Warm Blob, on the bloom response of the toxigenic HAB genus Pseudo-nitzschia in the Santa Barbara Channel.

    NASA Astrophysics Data System (ADS)

    Amiri, S.

    2016-12-01

    Harmful algal blooms (HABs) include a large subset of toxigenic phytoplankton and microbial species responsible for shutting down major fisheries, impairing water quality and threatening public health. Oceanographic and anthropogenic effects on HABs, in concert with climatic stressors, may interact to cause HAB events to persist longer than historically documented. This 3-year time series explores the interactive effects of the SST anomaly known as the Warm Blob across the coastal Pacific on the bloom progression and persistence of the toxigenic Pseudo-nitzschia bloom across the West Coast, ranging from the Gulf of Alaska to the Santa Barbara Channel (SBC). This study also explores direct links of the Warm Blob event to nutrient and oxygen concentrations spatially across the Santa Barbara Channel, where the highest domoic acid concentrations from the coast-wide mega bloom were recorded. MODIS and SeaWiFS satellite imagery of chlorophyll and monthly averaged SST values for the SBC were used to better understand the regional influence of the Warm Blob on phytoplankton community structure. These images were ground-truthed with monthly samples from 7 transects across the SBC from the Plumes and Blooms time series, LTER sites and local pier sites across Santa Barbara County. Preliminary data suggest an interesting correlation, with Pseudo-nitzschia species outcompeting other phytoplankton species within the SBC during the 3-degree average increase in SST associated with the Warm Blob event. Data are still being processed and results should be analyzed before October 2016.

  20. A catalogue of the small transients observed in STEREO HI-A and their associated in-situ measurements

    NASA Astrophysics Data System (ADS)

    Sanchez-Diaz, Eduardo; Rouillard, Alexis P.; Davies, Jackie A.; Kilpua, Emilia; Plotnikov, Illya

    2017-04-01

    The systematic monitoring of the solar wind in high-cadence and high-resolution heliospheric images taken by the Solar-Terrestrial Relations Observatory (STEREO) spacecraft permits the study of the spatial and temporal evolution of variable solar wind flows from the Sun out to 1 AU, and beyond. As part of the EU Framework 7 (FP7) Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) project, Plotnikov et al. (2016) created a catalogue of 190 Stream Interaction Regions (SIRs) well observed in images taken by the Heliospheric Imager (HI) instruments onboard STEREO-A (ST-A). This catalogue has been made available online on the official HELCATS website (https://www.helcats-fp7.eu/catalogues/wp5_cat.html) and included in the propagation tool (http://propagationtool.cdpp.eu). Several transients, known as blobs, are observed entrained in each SIR. We complete this catalogue with the trajectories of individual blobs and with the latitudinal extent of each SIR. For every SIR we report whether the trajectory of any of the entrained blobs impacts a spacecraft in the heliosphere. For the cases where a blob is predicted to impact one or more spacecraft, we include in the catalogue the predicted arrival time and the date and time of the visually recognized blob that is closest to the predicted arrival time. This new catalogue has also been made available online on the HELCATS project website. This work was funded by the HELCATS project under FP7 EU contract number 606692.

  1. In-Network Processing of an Iceberg Join Query in Wireless Sensor Networks Based on 2-Way Fragment Semijoins

    PubMed Central

    Kang, Hyunchul

    2015-01-01

    We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
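
    The order of operations, cardinality check first and value matching second, can be sketched with a toy Bloom-filter synopsis. The example below is a simplified single-machine illustration, not the paper's in-network WSN protocol; the filter size, hash count, toy relations and function names are assumptions.

      # Simplified sketch of the idea: iceberg cardinality check first, then a Bloom
      # filter synopsis of the surviving join values to prune non-joinable tuples.
      import hashlib
      from collections import Counter

      class BloomFilter:
          def __init__(self, n_bits=256, n_hashes=3):
              self.n_bits, self.n_hashes, self.bits = n_bits, n_hashes, 0

          def _positions(self, value):
              for i in range(self.n_hashes):
                  digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
                  yield int.from_bytes(digest[:4], "big") % self.n_bits

          def add(self, value):
              for p in self._positions(value):
                  self.bits |= 1 << p

          def might_contain(self, value):
              return all(self.bits & (1 << p) for p in self._positions(value))

      def iceberg_candidates(tuples_r, tuples_s, key, threshold):
          # Cardinality first: only values frequent enough in R can pass the iceberg test.
          counts = Counter(t[key] for t in tuples_r)
          synopsis = BloomFilter()
          for value, c in counts.items():
              if c >= threshold:
                  synopsis.add(value)
          # Value matching next: keep only S tuples that might join (false positives possible).
          return [t for t in tuples_s if synopsis.might_contain(t[key])]

      R = [{"sensor": s} for s in ["a"] * 5 + ["b"] * 2]
      S = [{"sensor": s} for s in ["a", "b", "c"]]
      print(iceberg_candidates(R, S, "sensor", threshold=3))   # only the "a" tuple survives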

  2. Denoising solar radiation data using coiflet wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or observational data collection. Collected data are usually a mixture of true data and some error or noise. This noise might come from the measuring or data collection apparatus, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate over time, there exist unwanted oscillations, namely noise, which must be filtered out before the data are used for developing a mathematical model. In order to apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. From the numerical results it can be seen clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.
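
    The thresholded wavelet denoising workflow can be sketched with PyWavelets. The example below uses the coif2 wavelet with a standard universal soft threshold rather than the authors' proposed thresholding rule; the synthetic signal, noise level and decomposition depth are assumptions.

      # Minimal wavelet-thresholding sketch with PyWavelets (standard universal threshold).
      import numpy as np
      import pywt

      rng = np.random.default_rng(8)
      t = np.linspace(0.0, 1.0, 1024)
      clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
      noisy = clean + rng.normal(0.0, 0.2, t.size)

      coeffs = pywt.wavedec(noisy, "coif2", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # noise estimate (finest level)
      thr = sigma * np.sqrt(2 * np.log(noisy.size))               # universal threshold
      denoised = pywt.waverec(
          [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]], "coif2")

      rmse = lambda x: float(np.sqrt(np.mean((x[:clean.size] - clean) ** 2)))
      print("RMSE before:", rmse(noisy), "after:", rmse(denoised))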

  3. Indicator Expansion with Analysis Pipeline

    DTIC Science & Technology

    2015-01-13

    Excerpt from the Analysis Pipeline slides: an INTERNAL FILTER (trackInfectedHosts), fed by a badTraffic FILTER, builds an infectedHosts list of source IPs (SIP) with a 1 DAY timeout. Step 3 watches where infected hosts go with a FILTER (nonWhiteListPostInfected) that selects SIP IN LIST infectedHosts and DIP NOT IN LIST safePopularIPs.set. Steps 4 and 5 count hosts per IP and alert through an EVALUATION whose CHECK THRESHOLD fires when DISTINCT SIP > 50 within a TIME WINDOW of 36 HOURS. Step 6 reports the expanded indicators via a LIST CONFIGURATION named secondLevelIPs.

  4. Chlorine residuals and haloacetic acid reduction in rapid sand filtration.

    PubMed

    Chuang, Yi-Hsueh; Wang, Gen-Shuch; Tung, Hsin-hsin

    2011-11-01

    It is quite rare to find biodegradation in rapid sand filtration for drinking water treatment. This might be due to frequent backwashes and low substrate levels. High chlorine concentrations may inhibit biofilm development, especially for plants with pre-chlorination. However, in tropical or subtropical regions, bioactivity on the sand surface may be quite significant due to high biofilm development--a result of year-round high temperature. The objective of this study is to explore the correlation between biodegradation and chlorine concentration in rapid sand filters, especially for water treatment plants that practise pre-chlorination. In this study, haloacetic acid (HAA) biodegradation was found in conventional rapid sand filters practising pre-chlorination. Laboratory column studies and field investigations were conducted to explore the association between the biodegradation of HAAs and chlorine concentrations. The results showed that chlorine residual was an important factor that alters bioactivity development. A model based on filter influent and effluent chlorine was developed for determining the threshold chlorine for biodegradation. From the model, a temperature-independent chlorine concentration threshold (Cl(threshold)) for biodegradation was estimated at 0.46-0.5 mg L(-1). The results imply that conventional filters with adequate control could be conducive to bioactivity, resulting in lower HAA concentrations. Optimizing biodegradable disinfection by-product removal in conventional rapid sand filters could be achieved with minor variation and a lower-than-Cl(threshold) influent chlorine concentration. Bacteria isolation was also carried out, successfully identifying several HAA degraders. These degraders are very commonly found in drinking water systems and are speculated to be the main contributors to HAA loss. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Decoding synchronized oscillations within the brain: phase-delayed inhibition provides a robust mechanism for creating a sharp synchrony filter.

    PubMed

    Patel, Mainak; Joshi, Badal

    2013-10-07

    The widespread presence of synchronized neuronal oscillations within the brain suggests that a mechanism must exist that is capable of decoding such activity. Two realistic designs for such a decoder include: (1) a read-out neuron with a high spike threshold, or (2) a phase-delayed inhibition network motif. Despite requiring a more elaborate network architecture, phase-delayed inhibition has been observed in multiple systems, suggesting that it may provide inherent advantages over simply imposing a high spike threshold. In this work, we use a computational and mathematical approach to investigate the efficacy of the phase-delayed inhibition motif in detecting synchronized oscillations. We show that phase-delayed inhibition is capable of creating a synchrony detector with sharp synchrony filtering properties that depend critically on the time course of inputs. Additionally, we show that phase-delayed inhibition creates a synchrony filter that is far more robust than that created by a high spike threshold. Copyright © 2013 Elsevier Ltd. All rights reserved.
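
    The intuition can be reproduced with a toy calculation: a read-out that receives an excitatory drive plus a delayed, inverted copy of the same drive responds strongly when the input arrives as a tight (synchronous) packet, but is cancelled when the same total input is spread out in time. The numpy sketch below only illustrates this principle; it is not the authors' network model, and all parameter values are arbitrary.

```python
# Toy illustration (not the paper's model) of phase-delayed inhibition as a
# synchrony filter: a read-out sums an excitatory drive and a delayed, inverted
# copy of the same drive, then accumulates only the supra-threshold remainder.
import numpy as np

def readout_response(drive, delay_steps=15, inh_weight=1.0, threshold=4.0):
    inhibition = np.zeros_like(drive)
    inhibition[delay_steps:] = drive[:-delay_steps]        # delayed copy of the input
    net = drive - inh_weight * inhibition                  # excitation minus delayed inhibition
    return np.clip(net - threshold, 0.0, None).sum()       # total supra-threshold drive

n_steps, total_input = 1000, 500.0
sync_drive = np.zeros(n_steps)
sync_drive[100:110] = total_input / 10.0                   # same total input, tightly packed
async_drive = np.full(n_steps, total_input / n_steps)      # same total input, spread out

print("synchronous input :", readout_response(sync_drive))
print("asynchronous input:", readout_response(async_drive))
```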

  6. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ensslin, Torsten A.; Frommert, Mona

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
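
    For orientation, the Wiener-filter operation that all five approaches reduce to can be written in the standard information-field-theory form for linear data d = Rs + n with signal covariance S and noise covariance N (textbook notation, not the paper's specific expressions):

```latex
% Standard Wiener-filter reconstruction for data d = R s + n with signal
% covariance S and noise covariance N (generic notation, not the paper's):
m = \left( S^{-1} + R^{\dagger} N^{-1} R \right)^{-1} R^{\dagger} N^{-1} d
```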

  7. Impulsive penetration: a viable mechanism for plasma entry across the magnetopause?

    NASA Astrophysics Data System (ADS)

    De Keyser, Johan; Echim, Marius; Darrouzet, Fabien; Gunell, Herbert

    Density inhomogeneities in the solar wind may cross the bow shock, and retain an excess earthward momentum in the magnetosheath upon approaching the magnetopause. Also, the bow shock dynamics as well as the behaviour of the magnetopause itself may introduce spatial inhomogeneities in the magnetosheath density and/or flow. Plasma entities with excess momentum may penetrate across the magnetopause, by the impulsive penetration mechanism. This plasma entry mechanism requires the existence of a polarization electric field in the moving blob, that is sustained by charge separation layers in the interfaces at the flanks of the blob. Both direct observation and simulation of plasma entry across the magnetopause following the impulsive penetration mechanism are hard. It is difficult to prove that observed plasma entry is really due to the impulsive penetration mechanism since the required charge separation layers or the resulting polarization electric field are hard to measure directly. Simply assessing the geometry is not easy, although multi-spacecraft missions like Cluster have resolved many of the ambiguities inherent in single-spacecraft measurements. Impulsive penetration is difficult to simulate as it operates on the fluid, the ion, and the electron scales simultaneously. It requires not only a high spatial resolution, but also a high precision to properly represent the charge imbalance in the flank interfaces. We have modelled impulsive penetration with a kinetic model, by simplifying the problem. The fully kinetic model is 3-dimensional in velocity space, but we consider spatial structure only along a single spatial dimension, namely the coordinate transverse to the blob’s direction of motion. We thereby assume that the blob is elongated both along the magnetic field and in the direction of motion. The model is semi-analytic and is able to represent the charge imbalance in the blob edges very well. In a second modelling step, we consider a slow, quasi-static change of this structure as the blob penetrates deeper into the magnetosphere, resulting in a description of the evolution of the penetrating plasma blob as a consequence of both adiabatic and non-adiabatic deceleration. Although the simulation considers a simplified geometry, it sheds light on some fundamental aspects of this plasma entry mechanism.

  8. THE OFF-CENTERED SEYFERT-LIKE COMPACT EMISSION IN THE NUCLEAR REGION OF NGC 3621

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, R. B.; Steiner, J. E.; Silva, Patricia da, E-mail: robertobm@astro.iag.usp.br

    2016-02-01

    We analyze an optical data cube of the nuclear region of NGC 3621, taken with the integral field unit of the Gemini Multi-object Spectrograph. We found that the previously detected central line emission in this galaxy actually comes from a blob, located at a projected distance of 2.″14 ± 0.″08 (70.1 ± 2.6 pc) from the stellar nucleus. Only diffuse emission was detected in the rest of the field of view, with a deficit of emission at the position of the stellar nucleus. Diagnostic diagram analysis reveals that the off-centered emitting blob has a Seyfert 2 spectrum. We propose that the line-emitting blob may be a “fossil” emission-line region or a light “echo” from an active galactic nucleus (AGN), which was significantly brighter in the past. Our estimates indicate that the bolometric luminosity of the AGN must have decreased by a factor of ∼13–500 during the past ∼230 yr. A second scenario to explain the morphology of the line-emitting areas in the nuclear region of NGC 3621 involves no decrease of the AGN bolometric luminosity and establishes that the AGN is highly obscured toward the observer but not toward the line-emitting blob. The third scenario proposed here assumes that the off-centered line-emitting blob is a recoiling supermassive black hole, after the coalescence of two black holes. Finally, an additional hypothesis is that the central X-ray source is not an AGN, but an X-ray binary. This idea is consistent with all the scenarios we proposed.

  9. Formation and Evolution of a Multi-Threaded Prominence

    NASA Technical Reports Server (NTRS)

    Luna, M.; Karpen, J. T.; DeVore, C. R.

    2012-01-01

    We investigate the process of formation and subsequent evolution of prominence plasma in a filament channel and its overlying arcade. We construct a three-dimensional time-dependent model of a filament-channel prominence suitable to be compared with observations. We combine this magnetic field structure with one-dimensional independent simulations of many flux tubes. The magnetic structure is a three-dimensional sheared double arcade, and the thermal non-equilibrium process governs the plasma evolution. We have found that the condensations in the corona can be divided into two populations: threads and blobs. Threads are massive condensations that linger in the field line dips. Blobs are ubiquitous small condensations that are produced throughout the filament and overlying arcade magnetic structure, and rapidly fall to the chromosphere. The total prominence mass is in agreement with observations. The threads are the principal contributors to the total mass, whereas the blob contribution is small. The motion of the threads is basically horizontal, while blobs move in all directions along the field. The peak velocities for both populations are comparable, but there is a weak tendency for the velocity to increase with the inclination, and the blobs with near-vertical motion have the largest velocities. We have generated synthetic images of the whole structure in an Hα proxy and in two EUV channels of the AIA instrument aboard SDO. These images show the plasma at cool, warm and hot temperatures. The theoretical differential emission measure of our system agrees very well with observations in the temperature range log T = 4.6-5.7. We conclude that the sheared-arcade magnetic structure and plasma dynamics fit the abundant observational evidence well.

  10. Pre-Capture Privacy for Small Vision Sensors.

    PubMed

    Pittaluga, Francesco; Koppal, Sanjeev Jagannatha

    2017-11-01

    The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.

  11. Fast variations in the ultraviolet resonance lines of Alpha Camelopardalis (O9.5 Ia) - Evidence for blobs in the wind

    NASA Technical Reports Server (NTRS)

    Lamers, Henry J. G. L. M.; Snow, Theodore P.; De Jager, Cornelis; Langerwerf, A.

    1988-01-01

    The 72 IUE spectra of Alpha Cam and 19 IUE spectra of Kappa Cas, obtained during 72 hours of continuous IUE time in September 1978 were searched for variations in the profiles of the resonance lines of Si IV, C IV, and N V, and the results are discussed. The UV resonance lines in the spectra of Alpha Cam showed variations at the 2 percent level near -1800, -700, and +700 km/s. The first two variations can be explained by absorption components of outward-accelerated blobs or shells with an average acceleration of 1.5 cm/sq s. The characteristics of the blobs and shells are discussed, including the column densities and masses. No variations were found in the spectra of Kappa Cas.

  12. Internal absorption of gamma-rays in relativistic blobs of active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Sitarek, Julian; Bednarek, Wlodek

    2007-06-01

    We investigate the production of gamma-rays in the inverse Compton (IC) scattering process by leptons accelerated inside relativistic blobs in jets of active galactic nuclei. Leptons are injected homogeneously inside the spherical blob and initiate an IC e± pair cascade in the synchrotron radiation (produced by the same population of leptons, SSC model), provided that the optical depth for gamma-rays is larger than unity. It is shown that for likely parameters the internal absorption of gamma-rays must be important. We suggest that a new type of blazar might be discovered by future simultaneous X-ray and γ-ray observations, showing peak emissions in the hard X-rays and in the GeV γ-rays. Moreover, the considered scenario might also be responsible for the orphan X-ray flares recently reported from BL Lac type active galaxies.

  13. Theory for the alignment of cortical feature maps during development.

    PubMed

    Bressloff, Paul C; Oster, Andrew M

    2010-08-01

    We present a developmental model of ocular dominance column formation that takes into account the existence of an array of intrinsically specified cytochrome oxidase blobs. We assume that there is some molecular substrate for the blobs early in development, which generates a spatially periodic modulation of experience-dependent plasticity. We determine the effects of such a modulation on a competitive Hebbian mechanism for the modification of the feedforward afferents from the left and right eyes. We show how alternating left and right eye dominated columns can develop, in which the blobs are aligned with the centers of the ocular dominance columns and receive a greater density of feedforward connections, thus becoming defined extrinsically. More generally, our results suggest that the presence of periodically distributed anatomical markers early in development could provide a mechanism for the alignment of cortical feature maps.

  14. Discontinuous pore fluid distribution under microgravity--KC-135 flight investigations

    NASA Technical Reports Server (NTRS)

    Reddi, Lakshmi N.; Xiao, Ming; Steinberg, Susan L.

    2005-01-01

    Designing a reliable plant growth system for crop production in space requires the understanding of pore fluid distribution in porous media under microgravity. The objective of this experimental investigation, which was conducted aboard NASA KC-135 reduced gravity flight, is to study possible particle separation and the distribution of discontinuous wetting fluid in porous media under microgravity. KC-135 aircraft provided gravity conditions of 1, 1.8, and 10(-2) g. Glass beads of a known size distribution were used as porous media; and Hexadecane, a petroleum compound immiscible with and lighter than water, was used as wetting fluid at residual saturation. Nitrogen freezer was used to solidify the discontinuous Hexadecane ganglia in glass beads to preserve the ganglia size changes during different gravity conditions, so that the blob-size distributions (BSDs) could be measured after flight. It was concluded from this study that microgravity has little effect on the size distribution of pore fluid blobs corresponding to residual saturation of wetting fluids in porous media. The blobs showed no noticeable breakup or coalescence during microgravity. However, based on the increase in bulk volume of samples due to particle separation under microgravity, groups of particles, within which pore fluid blobs were encapsulated, appeared to have rearranged themselves under microgravity.

  15. Central powering of the largest Lyman-α nebula is revealed by polarized radiation.

    PubMed

    Hayes, Matthew; Scarlata, Claudia; Siana, Brian

    2011-08-17

    High-redshift Lyman-α (Lyα) blobs are extended, luminous but rare structures that seem to be associated with the highest peaks in the matter density of the Universe. Their energy output and morphology are similar to those of powerful radio galaxies, but the source of the luminosity is unclear. Some blobs are associated with ultraviolet or infrared bright galaxies, suggesting an extreme starburst event or accretion onto a central black hole. Another possibility is gas that is shock-excited by supernovae. But not all blobs are associated with galaxies, and these ones may instead be heated by gas falling into a dark-matter halo. The polarization of the Lyα emission can in principle distinguish between these options, but a previous attempt to detect this signature returned a null detection. Here we report observations of polarized Lyα from the blob LAB1 (ref. 2). Although the central region shows no measurable polarization, the polarized fraction (P) increases to ∼20 per cent at a radius of 45 kiloparsecs, forming an almost complete polarized ring. The detection of polarized radiation is inconsistent with the in situ production of Lyα photons, and we conclude that they must have been produced in the galaxies hosted within the nebula, and re-scattered by neutral hydrogen.

  16. A systematic coarse-graining strategy for semi-dilute copolymer solutions: from monomers to micelles.

    PubMed

    Capone, Barbara; Coluzza, Ivan; Hansen, Jean-Pierre

    2011-05-18

    A systematic coarse-graining procedure is proposed for the description and simulation of AB diblock copolymers in selective solvents. Each block is represented by a small number, n(A) or n(B), of effective segments or blobs, containing a large number of microscopic monomers. n(A) and n(B) are unequivocally determined by imposing that blobs do not, on average, overlap, even if complete copolymer coils interpenetrate (semi-dilute regime). Ultra-soft effective interactions between blobs are determined by a rigorous inversion procedure in the low concentration limit. The methodology is applied to an athermal copolymer model where A blocks are ideal (theta solvent), B blocks self-avoiding (good solvent), while A and B blocks are mutually avoiding. The model leads to aggregation into polydisperse spherical micelles beyond a critical micellar concentration determined by Monte Carlo simulations for several size ratios f of the two blocks. The simulations also provide accurate estimates of the osmotic pressure and of the free energy of the copolymer solutions over a wide range of concentrations. The mean micellar aggregation numbers are found to be significantly lower than those predicted by an earlier, minimal two-blob representation (Capone et al 2009 J. Phys. Chem. B 113 3629).

  17. 3-D shape estimation of DNA molecules from stereo cryo-electron micro-graphs using a projection-steerable snake.

    PubMed

    Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael

    2006-01-01

    We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.
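
    Schematically, the functional minimized by the conjugate gradients algorithm combines the image-matching term from the projection comparison, the mean curvature-magnitude penalty, and the fixed-length constraint; in generic notation (mine, not the paper's exact formulation) it reads:

```latex
% Schematic snake energy (notation mine): image-matching term, mean
% curvature-magnitude penalty, and a soft constraint on the known DNA length.
E(\mathbf{c}) = E_{\mathrm{image}}(\mathbf{c})
  + \lambda\,\frac{1}{L}\int_{0}^{L} \lvert \kappa(s) \rvert\, ds
  + \mu \left( L - L_{\mathrm{DNA}} \right)^{2}
```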

  18. XMM-Newton studies of the supernova remnant G350.0-2.0

    NASA Astrophysics Data System (ADS)

    Karpova, A.; Shternin, P.; Zyuzin, D.; Danilenko, A.; Shibanov, Yu.

    2016-11-01

    We report the results of XMM-Newton observations of the Galactic mixed-morphology supernova remnant G350.0-2.0. Diffuse thermal X-ray emission fills the north-western part of the remnant surrounded by radio shell-like structures. We did not detect any X-ray counterpart of the latter structures, but found several bright blobs within the diffuse emission. The X-ray spectrum of most of the remnant can be described by a collisionally ionized plasma model VAPEC with solar abundances and a temperature of ≈0.8 keV. The solar abundances of the plasma indicate that the X-ray emission comes from the shocked interstellar material. An overabundance of Fe was found in some of the bright blobs. We also analysed the brightest point-like X-ray source 1RXS J172653.4-382157 projected on the extended emission. Its spectrum is well described by the two-temperature optically thin thermal plasma model MEKAL typical for cataclysmic variable stars. The cataclysmic variable nature of the source is supported by the presence of a faint (g ≈ 21) optical source with a non-stellar spectral energy distribution at the X-ray position of 1RXS J172653.4-382157. It was detected with the XMM-Newton optical/UV monitor in the U filter and was also found in the archival Hα and optical/near-infrared broad-band sky survey images. On the other hand, the X-ray spectrum is also described by the power law plus thermal component model typical for a rotation-powered pulsar. Therefore, the pulsar interpretation of the source cannot be excluded. For this source, we derived an upper limit for the pulsed fraction of 27 per cent.

  19. The quasi-periodic oscillations and very low frequency noise of Scorpius X-1 as transient chaos - A dripping handrail?

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Steiman-Cameron, Thomas; Young, Karl; Donoho, David L.; Crutchfield, James P.; Imamura, James

    1993-01-01

    We present evidence that the quasi-periodic oscillations (QPO) and very low frequency noise (VLFN) characteristic of many accretion sources are different aspects of the same physical process. We analyzed a long, high time resolution EXOSAT observation of the low-mass X-ray binary (LMXB) Sco X-1. The X-ray luminosity varies stochastically on time scales from milliseconds to hours. The nature of this variability - as quantified with both power spectrum analysis and a new wavelet technique, the scalegram - agrees well with the dripping handrail accretion model, a simple dynamical system which exhibits transient chaos. In this model both the QPO and VLFN are produced by radiation from blobs with a wide size distribution, resulting from accretion and subsequent diffusion of hot gas, the density of which is limited by an unspecified instability to lie below a threshold.

  20. Enhanced thermomechanical stability on laser-induced damage by functionally graded layers in quasi-rugate filters

    NASA Astrophysics Data System (ADS)

    Pu, Yunti; Ma, Ping; Lv, Liang; Zhang, Mingxiao; Lu, Zhongwen; Qiao, Zhao; Qiu, Fuming

    2018-05-01

    Ta2O5-SiO2 quasi-rugate filters with a reasonable optimization of the rugate notch filter design were prepared by ion-beam sputtering. The optical properties and laser-induced damage thresholds are studied. Compared with the spectrum of HL stacks, the spectrum of the quasi-rugate filters has weaker second harmonic peaks and narrower stopbands. Owing to the effect of functionally graded layers (FGLs), the 1-on-1 and S-on-1 laser-induced damage thresholds (LIDTs) of the quasi-rugate filters are about 22% and 50% higher than those of the HL stacks, respectively. Analysis of the damage morphologies shows that laser-induced damage of the films under nanosecond multi-pulse irradiation is dominated by a combination of thermal shock stress and thermomechanical instability due to nodules. Compared with catastrophic damage, the damage sites of the quasi-rugate filters develop in a moderate way. The damage growth behavior of defect-induced damage sites has been effectively restrained by the structure of the FGLs. Generally, FGLs reduce thermal stress through the similar thermal-expansion coefficients of neighboring layers and solve problems such as instability and cracking caused by the interface discontinuity at nodular boundaries.

  1. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
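
    A hedged sketch of this two-domain idea is given below: band-by-band 2-D wavelet shrinkage with a BayesShrink-style threshold (plain soft thresholding rather than the paper's modified threshold function), followed by a cubic Savitzky-Golay filter along the spectral axis. Function names and parameter values are illustrative assumptions, and the random cube merely stands in for Hyperion data.

```python
# Hedged sketch of the spatial-spectral idea: band-wise 2-D wavelet shrinkage
# with a BayesShrink-style threshold (plain soft thresholding, not the paper's
# modified function), then a cubic Savitzky-Golay filter along the spectrum.
import numpy as np
import pywt
from scipy.signal import savgol_filter

def denoise_band(band, wavelet="sym4", level=2):
    coeffs = pywt.wavedec2(band, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # MAD noise estimate
    out = [coeffs[0]]
    for detail_level in coeffs[1:]:
        shrunk = []
        for c in detail_level:
            sigma_x = np.sqrt(max(np.mean(c ** 2) - sigma ** 2, 1e-12))
            thr = sigma ** 2 / sigma_x                     # BayesShrink subband threshold
            shrunk.append(pywt.threshold(c, thr, mode="soft"))
        out.append(tuple(shrunk))
    rec = pywt.waverec2(out, wavelet)
    return rec[: band.shape[0], : band.shape[1]]

def denoise_cube(cube, window=7, polyorder=3):
    """cube: (rows, cols, bands) hyperspectral array."""
    spatial = np.stack([denoise_band(cube[:, :, b]) for b in range(cube.shape[2])], axis=2)
    # Spectral smoothing of every pixel spectrum with a cubic Savitzky-Golay filter.
    return savgol_filter(spatial, window_length=window, polyorder=polyorder, axis=2)

cube = np.random.rand(64, 64, 32)                          # stand-in for Hyperion data
print(denoise_cube(cube).shape)
```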

  2. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    PubMed

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
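
    For context, notched-noise thresholds are conventionally fitted with a rounded-exponential (roex) filter weighting; the classic symmetric form and its equivalent rectangular bandwidth are given below (standard formulas, not taken from this paper):

```latex
% Classic symmetric rounded-exponential (roex) auditory-filter weighting fitted
% to notched-noise thresholds, with g the normalised deviation |f - f_c| / f_c:
W(g) = (1 + p\,g)\, e^{-p\,g}, \qquad \mathrm{ERB} = \frac{4 f_{c}}{p}
```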

  3. Isotope effect on blob-statistics in gyrofluid simulations of scrape-off layer turbulence

    NASA Astrophysics Data System (ADS)

    Meyer, O. H. H.; Kendl, A.

    2017-12-01

    In this contribution we apply a recently established stochastic model for scrape-off layer fluctuations to long time series obtained from gyrofluid simulations of fusion edge plasma turbulence. Characteristic parameters are estimated for different fusion relevant isotopic compositions (protium, deuterium, tritium and singly charged helium) by means of conditional averaging. It is shown that large amplitude fluctuations associated with radially propagating filaments in the scrape-off layer feature double-exponential wave-forms. We find increased pulse duration and longer waiting times between peaks for heavier ions, while the amplitudes are similar. The associated radial blob velocity is shown to be reduced for heavier ions. A parabolic relation between skewness and kurtosis of density fluctuations seems to be present. Improved particle confinement in terms of reduced mean value close to the outermost radial boundary and blob characteristics for heavier plasmas is presented.
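
    The double-exponential wave-form referred to above can be written generically as a two-sided exponential pulse (notation mine, not the paper's fitted parameters):

```latex
% Generic two-sided exponential (double-exponential) pulse shape for the
% conditionally averaged blob wave-form (notation mine, not the paper's fit):
\phi(t) = A
\begin{cases}
  \exp\!\left( t / \tau_{\mathrm{rise}} \right),  & t < 0, \\
  \exp\!\left( -t / \tau_{\mathrm{fall}} \right), & t \ge 0.
\end{cases}
```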

  4. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.

  5. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
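
    A minimal sketch of such a pipeline is shown below, using the excess-green index, a global Otsu threshold as a stand-in for the statistically computed threshold described above, and a median filter on the resulting mask; the file name and kernel size are illustrative assumptions.

```python
# Hedged sketch of the described pipeline: excess-green conversion, a global
# Otsu threshold as a stand-in for the statistically computed threshold, and
# median filtering of the vegetation mask. File name and kernel size are made up.
import cv2
import numpy as np

def segment_vegetation(bgr_image, median_kernel=5):
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    total = b + g + r + 1e-6
    exg = 2.0 * (g / total) - (r / total) - (b / total)        # excess green index
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.medianBlur(mask, median_kernel)                 # remove salt-and-pepper noise

img = cv2.imread("field.jpg")                                  # hypothetical input image
if img is not None:
    cv2.imwrite("vegetation_mask.png", segment_vegetation(img))
```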

  6. Transverse mode control in proton-implanted and oxide-confined VCSELs via patterned dielectric anti-phase filters

    NASA Astrophysics Data System (ADS)

    Kesler, Benjamin; O'Brien, Thomas; Dallesasse, John M.

    2017-02-01

    A novel method for controlling the transverse lasing modes in both proton implanted and oxide-confined vertical-cavity surface-emitting lasers (VCSELs) with a multi-layer, patterned, dielectric anti-phase (DAP) filter is presented. Using a simple photolithographic liftoff process, dielectric layers are deposited and patterned on individual VCSELs to modify (increase or decrease) the mirror reflectivity across the emission aperture via anti-phase reflections, creating spatially-dependent threshold material gain. The shape of the dielectric pattern can be tailored to overlap with specific transverse VCSEL modes or subsets of transverse modes to either facilitate or inhibit lasing by decreasing or increasing, respectively, the threshold modal gain. A silicon dioxide (SiO2) and titanium dioxide (TiO2) anti-phase filter is used to achieve a single-fundamental-mode, continuous-wave output power greater than 4.0 mW in an oxide-confined VCSEL at a lasing wavelength of 850 nm. A filter consisting of SiO2 and TiO2 is used to facilitate injection-current-insensitive fundamental mode and lower order mode lasing in proton implanted VCSELs at a lasing wavelength of 850 nm. Higher refractive index dielectric materials such as amorphous silicon (a-Si) can be used to increase the effectiveness of the anti-phase filter on proton implanted devices by reducing the threshold modal gain of any spatially overlapping modes. This additive, non-destructive method allows for mode selection at any lasing wavelength and for any VCSEL layer structure without the need for semiconductor etching or epitaxial regrowth. It also offers the capability of designing a filter based upon available optical coating materials.

  7. Eta Carinae: Orientation of The Orbital Plane

    NASA Technical Reports Server (NTRS)

    Gull, T. R.; Nielsen, K. E.; Ivarsson, S.; Corcoran, M. F.; Verner, E.; Hillier, J. D.

    2006-01-01

    Evidence continues to build that Eta Carinae is a massive binary system with a hidden hot companion in a highly elliptical orbit. We present imaging and spectroscopic evidence that provides clues to the orientation of the orbital plane. The circumstellar ejecta, known as the Homunculus and Little Homunculus, are hourglass-shaped structures, one encapsulated within the other, tilted at about 45 degrees from the sky plane. A disk region lies between the bipolar lobes. Based upon their velocities and proper motions, Weigelt blobs B, C and D, very bright emission clumps 0.1 to 0.3" Northwest from Eta Carinae, lie in the disk. UV flux from the hot companion, Eta Car B, photoexcites the Weigelt blobs. Other clumps form a complete chain around the star, but are not significantly photoexcited. The strontium filament, a 'neutral' emission structure, lies in the same general direction as the Weigelt blobs and exhibits peculiar properties indicative that much mid-UV, but no hydrogen-ionizing radiation impinges on this structure. It is shielded by singly-ionized iron. P Cygni absorptions in Fe II lines, seen directly in line of sight from Eta Carinae, are absent in the stellar light scattered by the Weigelt blobs. Rather than a strong absorption extending to -600 km/s, a low velocity absorption feature extends from -40 to -150 km/s. No absorbing Fe II exists between Eta Carinae and Weigelt D, but the outer reaches of the wind are intercepted in line of sight from Weigelt D to the observer. This indicates that the UV radiation is constrained by the dominating wind of Eta Car A to a small cavity carved out by the weaker wind of Eta Car B. Since the high excitation nebular lines are seen in the Weigelt blobs at most phases, the cavity, and hence the major axis of the highly elliptical orbit, must lie in the general direction of the Weigelt blobs. The evidence is compelling that the orbital major axis of Eta Carinae is projected at -45 degrees position angle on the sky. Moreover the milliarcsecond-scale extended structure of Eta Carinae, recently detected by VLTI, may be evidence of the binary companion in the disk plane, not necessarily of a single star as a prolate spheroid extending along the ejecta polar axis.

  8. Experimental and theoretical investigations concerning a frequency filter behavior of the human retina regarding electric pulse currents. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Meier-Koll, A.

    1979-01-01

    Investigations involving patients with injuries to the visual nervous system are discussed. These led to the identification of the epithelial ganglion of the retina as a frequency filter. Threshold curves of the injured visual organs were compared with threshold curves obtained from a control group as a basis for identification. A model that considers the epithelial ganglion as a homogeneous cell layer in which adjacent neurons interact is discussed. It is shown that the behavior of the cells in response to alternating exciting currents can be explained.

  9. Real-Time flare detection using guided filter

    NASA Astrophysics Data System (ADS)

    Lin, Jiaben; Deng, Yuanyong; Yuan, Fei; Guo, Juan

    2017-04-01

    A procedure is introduced for the automatic detection of solar flares using full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. We then adopt the guided filter, introduced here for the first time into astronomical image detection, to enhance the edges of flares and suppress the solar limb darkening. Flares are then detected by a modified Otsu algorithm and a further threshold processing step. Compared with other automatic detection procedures, the new procedure has advantages such as real-time operation and reliability, as well as no need for image division or local thresholds. It also greatly reduces the amount of computation, a benefit of the efficient guided filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the flare detection results show that the number of flares detected by our procedure is consistent with the manual count.
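
    A hedged sketch of this detection chain is given below, using cv2.medianBlur, the guided filter from the opencv-contrib ximgproc module, and plain Otsu thresholding in place of the modified Otsu step; the file name, filter radius, eps and minimum-area values are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the detection chain: median filter, guided filter (from the
# opencv-contrib ximgproc module), then plain Otsu thresholding in place of the
# modified Otsu step; radius, eps and the minimum blob area are made-up values.
import cv2

def detect_bright_features(full_disk_gray, min_area=50):
    denoised = cv2.medianBlur(full_disk_gray, 5)
    # Edge-preserving smoothing; guide and source are the same image here.
    guided = cv2.ximgproc.guidedFilter(denoised, denoised, 8, 100.0)
    _, mask = cv2.threshold(guided, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Keep connected bright regions larger than the minimum area (in pixels).
    return [tuple(stats[i]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > min_area]

img = cv2.imread("hsos_full_disk.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
if img is not None:
    print(detect_bright_features(img))
```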

  10. Automatic detection of solar features in HSOS full-disk solar images using guided filter

    NASA Astrophysics Data System (ADS)

    Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang

    2018-02-01

    A procedure is introduced for the automatic detection of solar features using full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. The guided filter, introduced here for the first time into astronomical target detection, is adopted to enhance the edges of solar features and suppress the solar limb darkening. Specific features are then detected by the Otsu algorithm and a further threshold processing technique. Compared with other automatic detection procedures, our procedure has advantages such as real-time operation and reliability, as well as no need for local thresholds. It also greatly reduces the amount of computation, a benefit of the efficient guided filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the results show that the number of features detected by our procedure is consistent with the manual count.

  11. Automated railroad reconstruction from remote sensing image based on texture filter

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Lu, Kaixia

    2018-03-01

    Remote sensing techniques have improved considerably in recent years, and very accurate results and high-resolution images can now be acquired. Such data offer possible ways to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. Firstly, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using the Gabor filter. Secondly, two response images with filtering orientations perpendicular to each other are fused to suppress noise and acquire a long, smooth stripe region of railroads. Thirdly, a set of smooth regions is extracted by first computing a global threshold for the fused image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and was found to deliver very accurate results in a quick and highly automated manner.
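
    The three steps can be sketched as follows with scikit-image, where the orientation, frequency, and the mean-based fusion of the two orthogonal Gabor responses are illustrative choices rather than the paper's exact settings:

```python
# Hedged sketch of the three steps with scikit-image: Gabor responses at two
# orthogonal orientations, fusion of the response magnitudes (here a simple
# mean), and a global Otsu threshold; frequency and orientation are made up.
import numpy as np
from skimage import filters

def railroad_mask(gray_image, frequency=0.1, theta=0.0):
    # Step 1: Gabor filtering along an assumed track orientation and its normal.
    real_a, imag_a = filters.gabor(gray_image, frequency=frequency, theta=theta)
    real_b, imag_b = filters.gabor(gray_image, frequency=frequency, theta=theta + np.pi / 2)
    # Step 2: fuse the two response magnitudes to suppress noise.
    fused = 0.5 * (np.hypot(real_a, imag_a) + np.hypot(real_b, imag_b))
    # Step 3: global Otsu threshold and conversion to a binary mask.
    return fused > filters.threshold_otsu(fused)

tile = np.random.rand(256, 256)                # stand-in for a remote sensing tile
print(railroad_mask(tile).sum(), "candidate pixels")
```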

  12. Octave-Band Thresholds for Modeled Reverberant Fields

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)

    1998-01-01

    Auditory thresholds for 10 subjects were obtained for speech stimuli with reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed, 0.25-2.0 kHz Fc) and reverberation time (0.2-1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).

  13. Brazil’s Technology Sector

    DTIC Science & Technology

    2006-10-01

    and Innovation in Brazil" (Rio de Janeiro: Agencia da Ciencia e Tecnologia , Ministerio da Cienca e Tecnologia , 2006) <agenciact.mct.gov.br/upd_blob...laboratories, and 27 Ministério da Ciência e Tecnologia , “Relatório Nanotecnologia Investimentos, Resultados e Demandas” (Brasília: 2006), 12. <http...www.mct.gov.br/upd_blob/8075.pdf> (accessed on October 20, 2006). 28 Ministério da Ciência e Tecnologia , “Relatório Nanotecnologia Investimentos

  14. Mean flows and blob velocities in scrape-off layer (SOLT) simulations of an L-mode discharge on Alcator C-Mod

    DOE PAGES

    Russell, D. A.; Myra, J. R.; D'Ippolito, D. A.; ...

    2016-06-10

    Two-dimensional scrape-off layer turbulence (SOLT) code simulations are compared with an L-mode discharge on the Alcator C-Mod tokamak [M. Greenwald, et al., Phys. Plasmas 21, 110501 (2014)]. Density and temperature profiles for the simulations were obtained by smoothly fitting Thomson scattering and mirror Langmuir probe (MLP) data from the shot. Simulations differing in turbulence intensity were obtained by varying a dissipation parameter. Mean flow profiles and density fluctuation amplitudes are consistent with those measured by MLP in the experiment and with a Fourier space diagnostic designed to measure poloidal phase velocity. Blob velocities in the simulations were determined from the correlation function for density fluctuations, as in the analysis of gas-puff-imaging (GPI) blobs in the experiment. In the simulations, it was found that larger blobs moved poloidally with the E×B flow velocity, v_E, in the near-SOL, while smaller fluctuations moved with the group velocity of the dominant linear (interchange) mode, v_E + (1/2)v_di, where v_di is the ion diamagnetic drift velocity. Comparisons are made with the measured GPI correlation velocity for the discharge. The saturation mechanisms operative in the simulation of the discharge are also discussed. In conclusion, it is found that neither sheared flow nor pressure gradient modification can be excluded as saturation mechanisms.

  15. BLOBS IN SPACE: THE LEGACY OF A NOVA

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The prolific number of eruptions by the recurrent nova T Pyxidis has attracted the attention of many telescopes. The image on the left, taken by a ground-based telescope, shows shells of gas around the star that were blown off during several eruptions. Closer inspection by the Hubble Space Telescope (right-hand image), however, reveals that the shells are not smooth at all. In fact, this high-resolution image shows that the shells are actually more than 2,000 gaseous blobs packed into an area that is 1 light-year across. Resembling shrapnel from a shotgun blast, the blobs may have been produced by the nova explosion, the subsequent expansion of gaseous debris, or collisions between fast-moving and slow-moving gas from several eruptions. False color has been applied to this image to enhance details in the blobs. The ground-based image was taken Jan. 19, 1995 by the European Southern Observatory's New Technology Telescope in La Silla, Chile. The Hubble telescope picture is a compilation of data taken on Feb. 26, 1994, and June 16, Oct. 7, and Nov. 10, 1995, by the Wide Field and Planetary Camera 2. T Pyxidis is 6,000 light-years away in the dim southern constellation Pyxis, the Mariner's Compass. Credits: Mike Shara, Bob Williams, and David Zurek (Space Telescope Science Institute); Roberto Gilmozzi (European Southern Observatory); Dina Prialnik (Tel Aviv University); and NASA.

  16. Photon Beaming in External Compton models

    NASA Astrophysics Data System (ADS)

    Hutter, Anne; Spanier, Felix

    In an attempt to model blazar emission spectra, External Compton models have been employed to fit the observed data. In these models photons from the accretion disk or the CMB are upscattered via the Compton effect by the electrons and contribute to the emission. In previous works the resulting scattered photon angular distribution has been calculated for ultrarelativistic electrons. This work aims to extend the result to the case of mildly relativistic electrons. Hence, the beaming pattern produced by a relativistically moving blob consisting of isotropically distributed electrons, which scatter photons of an isotropic external radiation field, is calculated numerically. The isotropic photon density distribution in the blob frame is Lorentz-transformed into the rest frame of the electron and results in an anisotropic distribution with a preferred direction, where it is upscattered by the electrons. The photon density distribution is determined and transformed back into the blob frame. As the photons in the rest frame of the electrons are distributed anisotropically, the scattering does not reproduce this anisotropic distribution. When transforming back into the blob frame, the resulting photon distribution will not be isotropic. Approximations have shown that the resulting photon distribution is boosted more strongly than a distribution assumed to be isotropic in the rest frame of the electrons. Hence, in order to obtain the beaming caused by external Compton scattering, it is of particular interest to derive a more exact approximation of the resulting photon angular distribution.

  17. Application of optical broadband monitoring to quasi-rugate filters by ion-beam sputtering

    NASA Astrophysics Data System (ADS)

    Lappschies, Marc; Görtz, Björn; Ristau, Detlev

    2006-03-01

    Methods for the manufacture of rugate filters by the ion-beam-sputtering process are presented. The first approach gives an example of a digitized version of a continuous-layer notch filter. This method allows the comparison of the basic theory of interference coatings containing thin layers with practical results. For the other methods, a movable zone target is employed to fabricate graded and gradual rugate filters. The examples demonstrate the potential of broadband optical monitoring in conjunction with the ion-beam-sputtering process. First characterization results indicate that these types of filter may exhibit higher laser-induced damage-threshold values than those of classical filters.

  18. Automated Threshold Selection for Template-Based Sonar Target Detection

    DTIC Science & Technology

    2017-08-01

    test based on the distribution of the matched filter correlations. From the matched filter output we evaluate target sized areas and surrounding...synthetic aperture sonar data that were part of the evaluation . Figure 3 shows a nearly uniform seafloor. Figure 4 is more complex, with

  19. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    PubMed

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered pedestal and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, the elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.
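
    The two-stage binocular contrast-gain control model referred to above belongs to a family of divisive gain-control models; a schematic form (generic, not the paper's fitted equations) is:

```latex
% Schematic two-stage divisive gain control (generic form, not the paper's
% fitted model): monocular/interocular suppression, binocular summation,
% then a second divisive stage.
R_{L} = \frac{C_{L}^{\,m}}{S_{1} + C_{L} + w\,C_{R}}, \qquad
R_{\mathrm{bin}} = \frac{\left( R_{L} + R_{R} \right)^{p}}{S_{2} + \left( R_{L} + R_{R} \right)^{q}}
```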

  20. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
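
    The bending energy used by the ASF to quantify local terrain smoothness is, for a thin-plate-spline surface f(x, y), the standard expression (textbook form, not copied from the paper):

```latex
% Standard thin-plate-spline bending energy used to quantify terrain smoothness
% (textbook form, not copied from the paper):
E_{\mathrm{bend}}(f) = \iint_{\mathbb{R}^{2}}
  \left( f_{xx}^{2} + 2 f_{xy}^{2} + f_{yy}^{2} \right)\, dx\, dy
```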

  1. Wavelet-Based Blind Superresolution from Video Sequence and in MRI

    DTIC Science & Technology

    2005-12-31

    in Fig. 4(e) and (f), respectively. The PSNR- based optimal threshold gives better noise filtering but poor deblurring [see Fig. 4(c) and (e)] while...that ultimately produces the deblurred , noise filtered, superresolved image. Finite support linear shift invariant blurs are reasonable to assume... Deblurred and Noise Filtered HR Image Cameras with different PSFs Figure 1: Multichannel Blind Superresolution Model condition [11] on the zeros of the

  2. Underwater Intruder Detection Sonar for Harbour Protection: State of the Art Review and Implications

    DTIC Science & Technology

    2006-10-01

    intruder would appear as a small moving “ blob ” of energetic echo in the echograph, and the operator could judge whether the contact is a threat that calls...visually then as a small fluctuating “ blob ” against a fluctuating background of sound clutter and reverberation, making it difficult to visually...4. Non-random false alarms caused by genuine underwater contacts that happened not to be intruders—by large fish , or schools of fish , or marine

  3. Towards Understanding the Role of Colour Information in Scene Perception using Night Vision Device

    DTIC Science & Technology

    2009-06-01

    possessing a visual system much simplified from that of living birds, reptiles, and teleost (bony) fish , which are generally tetrachromatic (Bowmaker...Levkowitz and Herman (1992) speculated that the results might be limited to “ blob ” detection. A possible mediating factor may have been the size and...sharpness of the “ blobs ” used in their task. Mullen (1985) showed that the visual system is much more sensitive to the 7 DSTO-RR-0345 high spatial

  4. Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information

    NASA Astrophysics Data System (ADS)

    Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.

    2017-09-01

    Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones among them are detected using waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes when the window size exceeds a threshold. The waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.

  5. Electronic device increases threshold sensitivity and removes noise from FM communications receiver

    NASA Technical Reports Server (NTRS)

    Conrad, W. M.; Loch, F. J.

    1971-01-01

    Threshold extension device connected between demodulator output and filter output minimizes clicking noise. Device consists of click-eliminating signal transfer channel with follow-and-hold circuit and detector for sensing click impulses. Final output consists of signal plus low level noise without high amplitude impulses.

  6. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble size, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases whose levels of heterogeneity/nonlinearity differ. Besides the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
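
    For reference, the three closed-form thresholding rules named above act element-wise on the forecast covariance (or on the Kalman gain). A minimal NumPy sketch; the threshold level lam and the SCAD parameter a = 3.7 are illustrative choices, not the adaptively determined values of the paper.

        import numpy as np

        def hard_threshold(x, lam):
            """Keep entries whose magnitude exceeds lam, zero the rest."""
            return np.where(np.abs(x) > lam, x, 0.0)

        def soft_threshold(x, lam):
            """Shrink all entries toward zero by lam."""
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def scad_threshold(x, lam, a=3.7):
            """Smoothly Clipped Absolute Deviation thresholding (Fan-Li form)."""
            out = soft_threshold(x, lam)                        # |x| <= 2*lam
            mid = (np.abs(x) > 2 * lam) & (np.abs(x) <= a * lam)
            out = np.where(mid, ((a - 1) * x - np.sign(x) * a * lam) / (a - 2), out)
            return np.where(np.abs(x) > a * lam, x, out)        # large entries untouched

        # illustrative use on a small-ensemble forecast covariance
        rng = np.random.default_rng(1)
        ensemble = rng.normal(size=(50, 20))                    # 50 members, 20 state variables
        anomalies = ensemble - ensemble.mean(axis=0)
        P_f = anomalies.T @ anomalies / (ensemble.shape[0] - 1)
        P_thresholded = scad_threshold(P_f, lam=0.1)
        print(np.count_nonzero(P_thresholded == 0), "covariance entries set to zero")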

  7. Estimate of the neutron fields in ATLAS based on ATLAS-MPX detectors data

    NASA Astrophysics Data System (ADS)

    Bouchami, J.; Dallaire, F.; Gutiérrez, A.; Idarraga, J.; Král, V.; Leroy, C.; Picard, S.; Pospíšil, S.; Scallon, O.; Solc, J.; Suk, M.; Turecek, D.; Vykydal, Z.; Žemlièka, J.

    2011-01-01

    The ATLAS-MPX detectors are based on Medipix2 silicon devices designed by CERN for the detection of different types of radiation. These detectors are covered with converting layers of 6LiF and polyethylene (PE) to increase their sensitivity to thermal and fast neutrons, respectively. These devices allow the measurement of the composition and spectroscopic characteristics of the radiation field in ATLAS, particularly of neutrons. The detectors can operate in low or high preset energy threshold mode. The signatures of particles interacting in an ATLAS-MPX detector at low threshold are clusters of adjacent pixels whose size and shape depend on particle type, energy and incidence angle. The classification of particles into different categories can be done using the geometrical parameters of these clusters. The Medipix analysis framework (MAFalda), based on the ROOT application, allows the recognition of particle tracks left in ATLAS-MPX devices located at various positions in the ATLAS detector and cavern. The pattern recognition obtained from the application of MAFalda was configured to distinguish the response of neutrons from other radiation. The neutron response at low threshold is characterized by clusters of adjoining pixels (heavy tracks and heavy blobs) left by protons and heavy ions resulting from neutron interactions in the converting layers of the ATLAS-MPX devices. The neutron detection efficiency of ATLAS-MPX devices has been determined by exposing two reference detectors to radionuclide neutron sources (252Cf and 241AmBe). With these results, an estimate of the neutron fields produced at the device locations during ATLAS operation was made.

  8. 2D scrape-off layer turbulence measurement using Deuterium beam emission spectroscopy on KSTAR

    NASA Astrophysics Data System (ADS)

    Lampert, M.; Zoletnik, S.; Bak, J. G.; Nam, Y. U.; Kstar Team

    2018-04-01

    Intermittent events in the scrape-off layer (SOL) of magnetically confined plasmas, often called blobs and holes, contribute significantly to the particle and heat loss across the magnetic field lines. In this article, the results of the scrape-off layer and edge turbulence measurements are presented with the two-dimensional Deuterium Beam Emission Spectroscopy system (DBES) at KSTAR (Korea Superconducting Tokamak Advanced Research). The properties of blobs and holes are determined in an L-mode and an H-mode shot with statistical tools and conditional averaging. These results show the capabilities and limitations of the SOL turbulence measurement of a 2D BES system. The results from the BES study were compared with the analysis of probe measurements. It was found that while probes offer a better signal-to-noise ratio and can measure blobs down to 3 mm size, BES can monitor the two-dimensional dynamics of larger events continuously during full discharges, and the measurement is not limited to the SOL on KSTAR.

  9. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue. This is due to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyzing fundus images.
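
    The candidate-detection stage can be prototyped with an off-the-shelf multi-scale Laplacian-of-Gaussian blob detector; a hedged sketch with scikit-image, where the scale range, the fixed response threshold and the green-channel inversion are illustrative assumptions rather than the paper's scale-adapted scheme.

        import numpy as np
        from skimage.feature import blob_log

        def detect_candidate_blobs(green_channel, min_sigma=1.0, max_sigma=4.0, thr=0.02):
            """Multi-scale LoG blob detection on the inverted green channel;
            microaneurysms appear as small dark blobs, so inversion makes them bright."""
            inverted = 1.0 - green_channel / green_channel.max()
            blobs = blob_log(inverted, min_sigma=min_sigma, max_sigma=max_sigma,
                             num_sigma=8, threshold=thr)
            blobs[:, 2] *= np.sqrt(2)   # convert sigma to an approximate blob radius
            return blobs                # rows of (row, col, radius)

        # synthetic test: one small dark spot on a bright background
        img = np.ones((64, 64))
        img[30:34, 40:44] = 0.2
        print(detect_candidate_blobs(img))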

  10. A New Approach to X-ray Analysis of SNRs

    NASA Astrophysics Data System (ADS)

    Frank, Kari A.; Burrows, David; Dwarkadas, Vikram

    2016-06-01

    We present preliminary results of applying a novel analysis method, Smoothed Particle Inference (SPI), to XMM-Newton observations of SNR RCW 103 and Tycho. SPI is a Bayesian modeling process that fits a population of gas blobs ("smoothed particles") such that their superposed emission reproduces the observed spatial and spectral distribution of photons. Emission-weighted distributions of plasma properties, such as abundances and temperatures, are then extracted from the properties of the individual blobs. This technique has important advantages over analysis techniques which implicitly assume that remnants are two-dimensional objects in which each line of sight encompasses a single plasma. By contrast, SPI allows superposition of as many blobs of plasma as are needed to match the spectrum observed in each direction, without the need to bin the data spatially. The analyses of RCW 103 and Tycho are part of a pilot study for the larger SPIES (Smoothed Particle Inference Exploration of SNRs) project, in which SPI will be applied to a sample of 12 bright SNRs.

  11. Smoothed Particle Inference Analysis of SNR RCW 103

    NASA Astrophysics Data System (ADS)

    Frank, Kari A.; Burrows, David N.; Dwarkadas, Vikram

    2016-04-01

    We present preliminary results of applying a novel analysis method, Smoothed Particle Inference (SPI), to an XMM-Newton observation of SNR RCW 103. SPI is a Bayesian modeling process that fits a population of gas blobs ("smoothed particles") such that their superposed emission reproduces the observed spatial and spectral distribution of photons. Emission-weighted distributions of plasma properties, such as abundances and temperatures, are then extracted from the properties of the individual blobs. This technique has important advantages over analysis techniques which implicitly assume that remnants are two-dimensional objects in which each line of sight encompasses a single plasma. By contrast, SPI allows superposition of as many blobs of plasma as are needed to match the spectrum observed in each direction, without the need to bin the data spatially. This RCW 103 analysis is part of a pilot study for the larger SPIES (Smoothed Particle Inference Exploration of SNRs) project, in which SPI will be applied to a sample of 12 bright SNRs.

  12. Electrostatic effects on hyaluronic acid configuration

    NASA Astrophysics Data System (ADS)

    Berezney, John; Saleh, Omar

    2015-03-01

    In systems of polyelectrolytes, such as solutions of charged biopolymers, the electrostatic repulsion between charged monomers plays a dominant role in determining the molecular conformation. Altering the ionic strength of the solvent thus affects the structure of such a polymer. Capturing this electrostatically-driven structural dependence is important for understanding many biological systems. Here, we use single molecule manipulation experiments to collect force-extension behavior on hyaluronic acid (HA), a polyanion which is a major component of the extracellular matrix in all vertebrates. By measuring HA elasticity in a variety of salt conditions, we are able to directly assess the contribution of electrostatics to the chain's self-avoidance and local stiffness. Similar to recent results from our group on single-stranded nucleic acids, our data indicate that HA behaves as a swollen chain of electrostatic blobs, with blob size proportional to the solution Debye length. Our data indicate that the chain structure within the blob is not worm-like, likely due to long-range electrostatic interactions. We discuss potential models of this effect.

  13. Semiautomated landscape feature extraction and modeling

    NASA Astrophysics Data System (ADS)

    Wasilewski, Anthony A.; Faust, Nickolas L.; Ribarsky, William

    2001-08-01

    We have developed a semi-automated procedure for generating correctly located 3D tree objects from overhead imagery. Cross-platform software partitions arbitrarily large, geocorrected and geolocated imagery into manageable sub-images. The user manually selects tree areas from one or more of these sub-images. Tree group blobs are then narrowed to lines using a special thinning algorithm which retains the topology of the blobs and also stores the thickness of the parent blob. Maxima along these thinned tree groups are found and used as individual tree locations within the tree group. Magnitudes of the local maxima are used to scale the radii of the tree objects. Grossly overlapping trees are culled based on a comparison of tree-tree distance to combined radii. Tree color is randomly selected based on the distribution of sample tree pixels, and height is estimated from tree radius. The final tree objects are then inserted into a terrain database which can be navigated by VGIS, a high-resolution global terrain visualization system developed at Georgia Tech.
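
    A simplified stand-in for the thinning-and-maxima step, assuming a binary tree mask as input: the Euclidean distance transform replaces the special thinning algorithm, its local maxima give tree locations, their values scale the radii, and grossly overlapping trees are culled. The 0.5 overlap factor, the minimum peak separation and the toy mask are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from skimage.feature import peak_local_max

        def extract_trees(tree_mask, min_separation=3):
            """Approximate individual tree locations and radii from a tree-group mask."""
            dist = distance_transform_edt(tree_mask)
            peaks = peak_local_max(dist, min_distance=min_separation,
                                   labels=tree_mask.astype(int))
            trees = [(r, c, dist[r, c]) for r, c in peaks]   # (row, col, radius in pixels)

            # cull grossly overlapping trees: drop the smaller one when the
            # centre-to-centre distance is well below the combined radii
            kept = []
            for r, c, rad in sorted(trees, key=lambda t: -t[2]):
                if all(np.hypot(r - kr, c - kc) > 0.5 * (rad + krad)
                       for kr, kc, krad in kept):
                    kept.append((r, c, rad))
            return kept

        # toy mask with two touching circular "tree" blobs
        yy, xx = np.mgrid[0:60, 0:60]
        mask = ((xx - 20) ** 2 + (yy - 30) ** 2 < 100) | ((xx - 38) ** 2 + (yy - 30) ** 2 < 64)
        print(extract_trees(mask))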

  14. Modeling of blob-hole correlations in GPI edge turbulence data

    NASA Astrophysics Data System (ADS)

    Myra, J. R.; Russell, D. A.; Zweben, S. J.

    2017-10-01

    Gas-puff imaging (GPI) observations made on NSTX have revealed two-point spatial correlation patterns in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this work, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in many respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored. The possibility of using the model to ascertain new information about edge turbulence is discussed. Work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG02-02ER54678.

  15. A preliminary evaluation of a failure detection filter for detecting and identifying control element failures in a transport aircraft

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1985-01-01

    The application of the failure detection filter to the detection and identification of aircraft control element failures was evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that, with a simple correlator and threshold detector used to process the filter residuals, the failure detection performance is seriously degraded by the effects of turbulence.
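
    A bare-bones sketch of the correlator-plus-threshold processing of the filter residuals mentioned above; the assumed failure-direction signature, window length, threshold and synthetic residual sequence are all illustrative.

        import numpy as np

        def detect_failure(residuals, signature, window=20, threshold=3.0):
            """Project each residual onto a known failure direction, average over a
            sliding window, and flag samples where the statistic exceeds the threshold."""
            signature = signature / np.linalg.norm(signature)
            proj = residuals @ signature
            stat = np.convolve(proj, np.ones(window) / window, mode="same")
            return np.abs(stat) > threshold, stat

        # toy example: 2-component residuals with a bias appearing halfway through
        rng = np.random.default_rng(2)
        res = rng.normal(0.0, 1.0, (400, 2))
        res[200:] += np.array([4.0, 0.0])        # simulated control-element failure
        flags, stat = detect_failure(res, signature=np.array([1.0, 0.0]))
        print("first detection at sample", int(np.argmax(flags)))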

  16. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    DTIC Science & Technology

    2018-01-01

    present in the RF spectrum. SUBJECT TERMS: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical...[report contents: Median; Rank Order Filter (ROF); Crest Factor (CF); Statistical Summary; Algorithm; Conclusion; References]...energy detection algorithm based on morphological filter processing with a semi-disk structure. Adelphi (MD): Army Research Laboratory (US); 2018 Jan

  17. Blob-hole correlation model for edge turbulence and comparisons with NSTX gas puff imaging data

    NASA Astrophysics Data System (ADS)

    Myra, J. R.; Zweben, S. J.; Russell, D. A.

    2018-07-01

    Gas puff imaging (GPI) observations made in NSTX (Zweben et al 2017 Phys. Plasmas 24 102509) have revealed two-point spatial correlations of edge and scrape-off layer (SOL) turbulence in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this paper, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in several respects. Effects of the reference location (confined surfaces or SOL), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored and discussed with respect to experimental observations. Additional analysis of the experimental GPI dataset is performed to further test this blob-hole correlation model. A time delay two-point spatial correlation study did not reveal inward propagation of the negative correlation structures that were postulated to correspond to holes in the data nor did it suggest that the negative correlation structures are due to neutral shadowing. However, tracking of the highest and lowest values (extrema) of the normalized GPI fluctuations shows strong evidence for mean inward propagation of minima and outward propagation of maxima, in qualitative agreement with theoretical expectations. Other properties of the experimentally observed extrema are discussed.

  18. On the relationship between kinetic and fluid formalisms for convection in the inner magnetosphere

    NASA Astrophysics Data System (ADS)

    Song, Yang; Sazykin, Stanislav; Wolf, Richard A.

    2008-08-01

    In the inner magnetosphere, plasma flows are mostly slow compared to thermal or Alfvén speeds, but the convection is far from the ideal magnetohydrodynamic regime since the gradient/curvature drifts become significant. Both kinetic (Wolf, 1983) and two-fluid (Peymirat and Fontaine, 1994; Heinemann, 1999) formalisms have been used to describe the plasma dynamics, but it is not fully understood how they relate to each other. We explore the relations among kinetic, fluid, and the recently developed "average" (Liu, 2006) models in an attempt to find the simplest yet realistic way to describe the convection. First, we prove analytically that the model of Liu (2006), when closed with the assumption of a Maxwellian distribution, is equivalent to the fluid model of Heinemann (1999). Second, we analyze the transport of both one-dimensional and two-dimensional Gaussian-shaped blobs of hot plasma. For the kinetic case, it is known that such a blob gradually spreads in time. For the fluid case, Heinemann and Wolf (2001a, 2001b) showed that in an idealized one-dimensional case, the blob separates into two blobs drifting at different speeds. We present a fully nonlinear solution of this case, confirming this behavior but demonstrating what appears to be a shocklike steepening of the faster-drifting secondary blob. A new, more realistic two-dimensional example using dipole geometry with a uniform electric field confirms the one-dimensional solutions. Implications for numerical simulations of magnetospheric dynamics are discussed.

  19. A generalized adaptive mathematical morphological filter for LIDAR data

    NASA Astrophysics Data System (ADS)

    Cui, Zheng

    Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset, using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrains in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
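
    For orientation, the baseline progressive morphological idea that the GAPM filter generalizes can be written compactly for a gridded minimum-elevation raster: open the surface with growing windows and flag cells whose elevation drops by more than a window-dependent threshold. The constant slope parameter, window sequence and toy raster below are assumptions; the adaptive cluster and trend analysis of the study is not reproduced.

        import numpy as np
        from scipy.ndimage import grey_opening

        def progressive_morphological_filter(z, cell=1.0, slope=0.3, dh0=0.2, dh_max=2.5,
                                             windows=(3, 5, 9, 17)):
            """Classify cells of a minimum-elevation raster z as ground (True) or not.
            The elevation-difference threshold grows with window size via a constant
            slope; the GAPM filter would instead adapt it per neighbourhood."""
            ground = np.ones(z.shape, dtype=bool)
            surface = z.copy()
            for w in windows:
                opened = grey_opening(surface, size=(w, w))
                dh = min(dh0 + slope * (w - 1) * cell, dh_max)
                ground &= (surface - opened) <= dh
                surface = opened
            return ground

        # toy raster: a smooth ramp plus a 3x3 "building" 5 m tall
        z = np.fromfunction(lambda i, j: 0.1 * j, (30, 30))
        z[10:13, 10:13] += 5.0
        print("non-ground cells:", (~progressive_morphological_filter(z)).sum())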

  20. 56Fe capture cross section experiments at the RPI LINAC Center

    NASA Astrophysics Data System (ADS)

    McDermott, Brian; Blain, Ezekiel; Thompson, Nicholas; Weltz, Adam; Youmans, Amanda; Danon, Yaron; Barry, Devin; Block, Robert; Daskalakis, Adam; Epping, Brian; Leinweber, Gregory; Rapp, Michael

    2017-09-01

    A new array of C6D6 detectors installed at the RPI LINAC Center has enabled the capability to measure neutron capture cross sections above the 847 keV inelastic scattering threshold of 56Fe through the use of digital post-processing filters and pulse-integral discriminators, without sacrificing the statistical quality of the data at lower incident neutron energies where such filtering is unnecessary. The C6D6 detectors were used to perform time-of-flight capture cross section measurements on a sample of 99.87% enriched iron-56. The total-energy method, combined with the pulse height weighting technique, was then applied to the raw data to determine the energy-dependent capture yield. Above the inelastic threshold, the data were analyzed with a pulse-integral filter to reveal the capture signal, extending the full data set to 2 MeV.

  1. Establishing the Response of Low Frequency Auditory Filters

    NASA Technical Reports Server (NTRS)

    Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James

    2017-01-01

    The response of auditory filters is central to the frequency selectivity of sound by the human auditory system. This is especially true for the realistic complex sounds that are often encountered in many applications such as modeling the audibility of sound, voice recognition, noise cancelation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low-frequency (below 100 Hz) auditory filters. Two experiments were designed and executed; the first measured subjects' hearing thresholds for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second measured the Psychophysical Tuning Curves (PTCs) at two signal frequencies (Fs = 40 and 63 Hz). Experiment 1 involved 36 subjects while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase test procedure using either a variable-level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in the form of PTCs, the response of auditory filters, their distribution, and a comparison with similar recently published data.
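
    The 3-down 1-up, three-alternative forced-choice track mentioned above converges near the 79.4% correct point of the psychometric function. A minimal sketch with a simulated listener; the step size, number of reversals and the simulated psychometric function are illustrative assumptions, not the study's protocol details.

        import random

        def staircase_3down_1up(respond, start_level, step, n_reversals=8):
            """Lower the level after 3 consecutive correct responses, raise it after
            any incorrect one; `respond(level)` returns True/False for one 3AFC trial."""
            level, run, reversals, last_dir = start_level, 0, [], None
            while len(reversals) < n_reversals:
                if respond(level):
                    run += 1
                    if run == 3:
                        if last_dir == "up":
                            reversals.append(level)
                        run, last_dir, level = 0, "down", level - step
                else:
                    if last_dir == "down":
                        reversals.append(level)
                    run, last_dir, level = 0, "up", level + step
            return sum(reversals[-6:]) / len(reversals[-6:])   # mean of the last reversals

        # simulated listener with a true threshold of 40 dB and a 1/3 guessing rate (3AFC)
        def listener(level, true_threshold=40.0):
            p = 1 / 3 + (2 / 3) * (1.0 if level > true_threshold else 0.2)
            return random.random() < p

        random.seed(0)
        print("estimated threshold:", staircase_3down_1up(listener, 60.0, 2.0))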

  2. Detection and segmentation of multiple touching product inspection items

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David

    1996-12-01

    X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation; we employ new rotation-invariant filters to locate each item independent of its orientation), produce separate image files for each item [a new blob coloring algorithm provides this for isolated (non-touching) input items], segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this), and apply morphological processing to remove the shell and produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
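
    The blob coloring and watershed steps can be prototyped with standard components; a minimal marker-controlled watershed sketch, where the distance-transform markers, the min_distance value and the toy image of two overlapping discs are illustrative, and the rotation-invariant detection filters and X-ray specifics are not reproduced.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_touching_items(binary):
            """Separate touching objects with a marker-controlled watershed on the
            distance transform; each returned label could be written to its own file."""
            distance = ndi.distance_transform_edt(binary)
            peaks = peak_local_max(distance, min_distance=5, labels=binary.astype(int))
            markers = np.zeros(binary.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=binary)

        # toy image: two overlapping discs standing in for touching nuts
        yy, xx = np.mgrid[0:80, 0:80]
        img = ((xx - 30) ** 2 + (yy - 40) ** 2 < 15 ** 2) | ((xx - 52) ** 2 + (yy - 40) ** 2 < 15 ** 2)
        print("items found:", split_touching_items(img).max())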

  3. Performance enhancement of various real-time image processing techniques via speculative execution

    NASA Astrophysics Data System (ADS)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.

  4. Automatic abdominal lymph node detection method based on local intensity structure analysis from 3D x-ray CT images

    NASA Astrophysics Data System (ADS)

    Nakamura, Yoshihiko; Nimura, Yukitaka; Kitasaka, Takayuki; Mizuno, Shinji; Furukawa, Kazuhiro; Goto, Hidemi; Fujiwara, Michitaka; Misawa, Kazunari; Ito, Masaaki; Nawano, Shigeru; Mori, Kensaku

    2013-03-01

    This paper presents an automated method of abdominal lymph node detection to aid the preoperative diagnosis of abdominal cancer surgery. In abdominal cancer surgery, surgeons must resect not only tumors and metastases but also lymph nodes that might harbor a metastasis. This procedure is called lymphadenectomy or lymph node dissection. Insufficient lymphadenectomy carries a high risk of relapse. However, excessive resection decreases a patient's quality of life. Therefore, it is important to identify the location and the structure of lymph nodes to make a suitable surgical plan. The proposed method consists of candidate lymph node detection and false positive reduction. Candidate lymph nodes are detected using a multi-scale blob-like enhancement filter based on local intensity structure analysis. To reduce false positives, the proposed method uses a support vector machine classifier with texture and shape information. The experimental results reveal that it detects 70.5% of the lymph nodes with 13.0 false positives per case.
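
    A single-slice, 2D sketch of a Hessian-based blob-like enhancement filter of the kind referred to above, using Gaussian second derivatives: the response is large where both Hessian eigenvalues are strongly negative (a bright blob). The scales, the toy image and the simple eigenvalue-product measure are illustrative assumptions; the 3D CT version, the exact blobness measure and the SVM false-positive reduction are not reproduced.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blobness(image, sigma):
            """Blob-likeness from the scale-normalised 2D Hessian at one scale."""
            Hrr = sigma ** 2 * gaussian_filter(image, sigma, order=(2, 0))
            Hcc = sigma ** 2 * gaussian_filter(image, sigma, order=(0, 2))
            Hrc = sigma ** 2 * gaussian_filter(image, sigma, order=(1, 1))
            tmp = np.sqrt(((Hrr - Hcc) / 2) ** 2 + Hrc ** 2)
            l1 = (Hrr + Hcc) / 2 + tmp
            l2 = (Hrr + Hcc) / 2 - tmp
            return np.where((l1 < 0) & (l2 < 0), l1 * l2, 0.0)   # bright-blob response

        def multiscale_blobness(image, sigmas=(1.0, 2.0, 4.0)):
            """Maximum blobness response over a set of scales."""
            return np.max([blobness(image, s) for s in sigmas], axis=0)

        # toy slice with one bright Gaussian spot standing in for a lymph node
        yy, xx = np.mgrid[0:64, 0:64]
        img = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 3.0 ** 2))
        resp = multiscale_blobness(img)
        print("peak response at", np.unravel_index(resp.argmax(), resp.shape))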

  5. Coloured Filters Improve Exclusion of Perceptual Noise in Visually Symptomatic Dyslexics

    ERIC Educational Resources Information Center

    Northway, Nadia; Manahilov, Velitchko; Simpson, William

    2010-01-01

    Previous studies of visually symptomatic dyslexics have found that their contrast thresholds for pattern discrimination are the same as non-dyslexics. However, when noise is added to the stimuli, contrast thresholds rise markedly in dyslexics compared with non-dyslexics. This result could be due to impaired noise exclusion in dyslexics. Some…

  6. Use of change-point detection for friction-velocity threshold evaluation in eddy-covariance studies

    Treesearch

    A.G. Barr; A.D. Richardson; D.Y. Hollinger; D. Papale; M.A. Arain; T.A. Black; G. Bohrer; D. Dragoni; M.L. Fischer; L. Gu; B.E. Law; H.A. Margolis; J.H. McCaughey; J.W. Munger; W. Oechel; K. Schaeffer

    2013-01-01

    The eddy-covariance method often underestimates fluxes under stable, low-wind conditions at night when turbulence is not well developed. The most common approach to resolve the problem of nighttime flux underestimation is to identify and remove the deficit periods using friction-velocity (u∗) threshold filters (u∗

  7. Frequency modulation television analysis: Threshold impulse analysis. [with computer program]

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.

    1973-01-01

    A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.

  8. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infra-red laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested specifically for an open-path spectrometer gas detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection against a complex background environment. Test results are presented showing that the proposed algorithm has been successfully applied at various target distances and under various environmental conditions.
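
    OpenCV's built-in blob detector is a convenient way to prototype this kind of alignment loop. A minimal sketch; the detector parameters, the synthetic frame and the simple centre-offset "steering" output are illustrative assumptions, not the authors' implementation.

        import cv2
        import numpy as np

        # parameters tuned for a bright, roughly circular laser return (illustrative values)
        params = cv2.SimpleBlobDetector_Params()
        params.filterByColor = True
        params.blobColor = 255            # look for bright blobs
        params.filterByArea = True
        params.minArea = 20
        params.maxArea = 5000
        params.filterByCircularity = True
        params.minCircularity = 0.6
        detector = cv2.SimpleBlobDetector_create(params)

        # synthetic 8-bit frame with a bright spot standing in for the retro-reflector return
        frame = np.zeros((240, 320), dtype=np.uint8)
        cv2.circle(frame, (200, 120), 8, 255, -1)

        for kp in detector.detect(frame):
            dx = kp.pt[0] - frame.shape[1] / 2    # horizontal offset from image centre
            dy = kp.pt[1] - frame.shape[0] / 2    # vertical offset from image centre
            print(f"blob at {kp.pt}, pointing correction ({dx:+.1f}, {dy:+.1f}) px")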

  9. A Topological Criterion for Filtering Information in Complex Brain Networks

    PubMed Central

    Latora, Vito; Chavez, Mario

    2017-01-01

    In many biological systems, the network of interactions between the elements can only be inferred from experimental measurements. In neuroscience, non-invasive imaging tools are extensively used to derive either structural or functional brain networks in-vivo. As a result of the inference process, we obtain a matrix of values corresponding to a fully connected and weighted network. To turn this into a useful sparse network, thresholding is typically adopted to cancel a percentage of the weakest connections. The structural properties of the resulting network depend on how much of the inferred connectivity is eventually retained. However, how to objectively fix this threshold is still an open issue. We introduce a criterion, the efficiency cost optimization (ECO), to select a threshold based on the optimization of the trade-off between the efficiency of a network and its wiring cost. We prove analytically and we confirm through numerical simulations that the connection density maximizing this trade-off emphasizes the intrinsic properties of a given network, while preserving its sparsity. Moreover, this density threshold can be determined a-priori, since the number of connections to filter only depends on the network size according to a power-law. We validate this result on several brain networks, from micro- to macro-scales, obtained with different imaging modalities. Finally, we test the potential of ECO in discriminating brain states with respect to alternative filtering methods. ECO advances our ability to analyze and compare biological networks, inferred from experimental data, in a fast and principled way. PMID:28076353
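
    Operationally, the ECO criterion amounts to keeping only the strongest connections down to a fixed, size-dependent density; the sketch below sparsifies a weighted undirected network to a chosen average node degree. Treat avg_degree = 3 and the random connectivity matrix as illustrative assumptions rather than the paper's prescription.

        import numpy as np

        def density_threshold(weights, avg_degree=3):
            """Keep the strongest connections of a weighted, undirected network so
            that the average node degree equals avg_degree (ECO-style sparsification)."""
            n = weights.shape[0]
            n_keep = int(round(avg_degree * n / 2))
            iu = np.triu_indices(n, k=1)
            w = weights[iu]
            keep = np.zeros(w.shape, dtype=bool)
            keep[np.argsort(w)[::-1][:n_keep]] = True     # strongest edges first
            adj = np.zeros(weights.shape, dtype=bool)
            adj[iu] = keep
            return adj | adj.T                            # symmetric binary adjacency

        # toy fully connected "functional connectivity" matrix
        rng = np.random.default_rng(3)
        C = rng.random((20, 20))
        C = (C + C.T) / 2
        np.fill_diagonal(C, 0)
        A = density_threshold(C)
        print("kept edges:", A.sum() // 2, "average degree:", A.sum(axis=1).mean())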

  10. Chaos-based wireless communication resisting multipath effects.

    PubMed

    Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso

    2017-09-01

    In additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieve high bit transmission rate and low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.

  11. Chaos-based wireless communication resisting multipath effects

    NASA Astrophysics Data System (ADS)

    Yao, Jun-Liang; Li, Chen; Ren, Hai-Peng; Grebogi, Celso

    2017-09-01

    In additive white Gaussian noise channel, chaos has been shown to be the optimal coherent communication waveform in the sense of using a very simple matched filter to maximize the signal-to-noise ratio. Recently, Lyapunov exponent spectrum of the chaotic signals after being transmitted through a wireless channel has been shown to be unaltered, paving the way for wireless communication using chaos. In wireless communication systems, inter-symbol interference caused by multipath propagation is one of the main obstacles to achieve high bit transmission rate and low bit-error rate (BER). How to resist the multipath effect is a fundamental problem in a chaos-based wireless communication system (CWCS). In this paper, a CWCS is built to transmit chaotic signals generated by a hybrid dynamical system and then to filter the received signals by using the corresponding matched filter to decrease the noise effect and to detect the binary information. We find that the multipath effect can be effectively resisted by regrouping the return map of the received signal and by setting the corresponding threshold based on the available information. We show that the optimal threshold is a function of the channel parameters and of the information symbols. Practically, the channel parameters are time-variant, and the future information symbols are unavailable. In this case, a suboptimal threshold is proposed, and the BER using the suboptimal threshold is derived analytically. Simulation results show that the CWCS achieves a remarkable competitive performance even under inaccurate channel parameters.

  12. Towards a bias-free filter routine to determine precipitation and evapotranspiration from high precision lysimeter measurements

    NASA Astrophysics Data System (ADS)

    Peters, Ande; Durner, Wolfgang; Schrader, Frederik; Groh, Jannis; Pütz, Thomas

    2017-04-01

    Weighing lysimeters are known to be the best means for a precise and unbiased measurement of water fluxes at the interface between the soil-plant system and the atmosphere. The measured data need to be filtered to separate evapotranspiration (ET) and precipitation (P) from noise. Such filter routines typically apply two steps: (i) a low-pass filter, like a moving average, which is used to smooth noisy data, and (ii) a threshold filter to separate significant from insignificant mass changes. Recent developments of these filters have revealed and solved many problems regarding bias in the data processing. A remaining problem is that each change in flow direction is accompanied by a systematic flow underestimation due to the threshold scheme. In this contribution we show and analyze this systematic effect and propose a heuristic solution by introducing a so-called snap routine. The routine is calibrated and tested with synthetic flux data and applied to real data from a precision lysimeter for a 10-month period. We show that the absolute systematic effect is independent of the magnitude of a given flux event. Thus, for small events, like dew or rime formation, the relative error is highest and can be of the same order of magnitude as the flux itself. The heuristic snap routine effectively overcomes these problems and yields an almost unbiased representation of the real signal.
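
    A bare-bones version of the smooth-then-threshold scheme described above is useful mainly to see where the bias comes from: every sub-threshold step and every change of flow direction loses a little mass, which is exactly the systematic underestimation the proposed snap routine corrects. The window, the threshold and the synthetic mass record are illustrative assumptions.

        import numpy as np

        def separate_fluxes(mass, window=5, delta=0.001):
            """Split a lysimeter mass record (kg) into precipitation (gains) and
            evapotranspiration (losses): smooth with a moving average, then sum only
            the changes whose magnitude exceeds the noise threshold delta."""
            smooth = np.convolve(mass, np.ones(window) / window, mode="valid")
            dm = np.diff(smooth)
            precipitation = dm[dm > delta].sum()
            evapotranspiration = -dm[dm < -delta].sum()
            return precipitation, evapotranspiration

        # synthetic record: steady ET loss, one 1 kg rain event, weighing noise
        rng = np.random.default_rng(4)
        t = np.arange(600.0)
        rain = np.clip(0.05 * (t - 300), 0.0, 1.0)       # rain falls between t=300 and t=320
        mass = 100.0 - 0.003 * t + rain + rng.normal(0, 0.0005, t.size)
        P, ET = separate_fluxes(mass)
        print(f"P = {P:.2f} kg (true 1.00), ET = {ET:.2f} kg (true {0.003 * 599:.2f})")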

  13. Accuracy of iodine removal using dual-energy CT with or without a tin filter: an experimental phantom study.

    PubMed

    Kawai, Tatsuya; Takeuchi, Mitsuru; Hara, Masaki; Ohashi, Kazuya; Suzuki, Hirochika; Yamada, Kiyotaka; Sugimura, Yuya; Shibamoto, Yuta

    2013-10-01

    The effects of a tin filter on virtual non-enhanced (VNE) images created by dual-energy CT have not been well evaluated. The purpose was to compare the accuracy of VNE images obtained with and without a tin filter. Two different types of columnar phantoms made of agarose gel were evaluated. Phantom A contained various concentrations of iodine (4.5-1590 HU at 120 kVp). Phantom B consisted of a central component (0, 10, 25, and 40 mgI/cm(3)) and a surrounding component (0, 50, 100, and 200 mgI/cm(3)) with variable iodine concentration. They were scanned by dual-source CT in conventional single-energy mode and in dual-energy mode with and without a tin filter. CT values of each gel at the corresponding points were measured and the accuracy of iodine removal was evaluated. On VNE images, the CT number of the Phantom A gel remained within the range of -15 to +15 HU for attenuations below 626 and 881 HU at single-energy 120 kVp with and without a tin filter, respectively. For attenuations above these thresholds, the iodine concentration of the gels was underestimated with the tin filter but overestimated without it. For Phantom B, the mean CT numbers on VNE images in the central gel component surrounded by gel with iodine concentrations of 0, 50, 100, and 200 mgI/cm(3) were in the range of -19 to +6 HU with the tin filter and 21 to 100 HU without it. Both with and without a tin filter, iodine removal was accurate below a threshold iodine concentration. Although a surrounding structure with higher attenuation decreased the accuracy, a tin filter improved the margin of error.

  14. A simple filter circuit for denoising biomechanical impact signals.

    PubMed

    Subramaniam, Suba R; Georgakis, Apostolos

    2009-01-01

    We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.

  15. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a complex algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames with weak tracks of space objects, which can be discrete, is recorded. The algorithm includes preprocessing that is classical for astronomy, matched (consistent) filtering of each frame and its threshold processing, a shear transformation, median filtering of the transformed series of frames, repeated threshold processing, and the detection decision. Weak tracks of space objects were modeled on real frames of the night starry sky obtained in the regime of a stationary telescope. It is shown that the penetrating power (limiting magnitude) of the optoelectronic device is increased by almost 2 magnitudes.

  16. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    PubMed

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines, producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and in degraded image quality, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulated data and design a filter using the selected optimum wavelet parameters (a three-level decomposition, the level-1 subbands zeroed out, a subband-dependent threshold, soft thresholding, and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing on real cryo-ET experimental data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
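
    A rough PyWavelets approximation of the modified wavelet shrinkage recipe quoted above: three-level decomposition, the finest (level-1) subbands zeroed out, and soft thresholding of the remaining detail subbands with a level-dependent threshold. It uses the ordinary decimated 2D DWT with a biorthogonal spline wavelet rather than the paper's spline-based discrete dyadic (undecimated) transform, and the universal-threshold rule and toy image are assumptions.

        import numpy as np
        import pywt

        def wavelet_shrink(image, wavelet="bior2.2", level=3):
            """Three-level 2D DWT, zero the finest subbands, soft-threshold the rest,
            and reconstruct."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            # robust noise estimate from the finest diagonal subband (MAD / 0.6745)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            new_coeffs = [coeffs[0]]                               # keep the approximation
            for k, details in enumerate(coeffs[1:], start=1):      # coarse -> fine
                if k == level:                                     # finest level: zero out
                    new_coeffs.append(tuple(np.zeros_like(d) for d in details))
                else:
                    thr = sigma * np.sqrt(2 * np.log(details[0].size))   # level-dependent threshold
                    new_coeffs.append(tuple(pywt.threshold(d, thr, mode="soft") for d in details))
            return pywt.waverec2(new_coeffs, wavelet)

        # toy test: a noisy disc standing in for a low-contrast cryo-ET feature
        yy, xx = np.mgrid[0:128, 0:128]
        clean = ((xx - 64) ** 2 + (yy - 64) ** 2 < 20 ** 2).astype(float)
        noisy = clean + np.random.default_rng(5).normal(0, 0.8, clean.shape)
        denoised = wavelet_shrink(noisy)
        print("residual RMS:", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 3))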

  17. Measuring Waves and Erosion in Underwater Oil Blobs and Monitoring Other Arbitrary Surfaces with a Kinect v2 Time-of-Flight Camera

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.

    2014-12-01

    We developed free software that enables researchers to utilize Microsoft's new Kinect for Windows v2 sensor for a range of coastal and ocean mapping applications, as well as for monitoring and measuring experimental scenes. While the original Kinect device used structured light and had very poor resolution, many geophysical researchers found uses for it in their experiments. The new generation of this sensor uses time-of-flight technology and can produce higher-resolution depth measurements with an order of magnitude more accuracy. It is also capable of measurement through and under water. An analysis tool in our application lets users quickly select any arbitrary surface in the sensor's view. The tool automatically scans the surface, then calibrates and aligns a measurement volume to it. Depth readings from the sensor are converted into 3D point clouds, and points falling within this volume are projected into surface coordinates. Raster images can be output which consist of height fields aligned to the surface, generated from these projected measurements and interpolations between them. Images have a simple 1 pixel = 1 mm resolution and intensity values representing height in mm above the base plane, which enables easy measurement and calculation on the images in other analysis packages. Single snapshots can be taken manually on demand, or the software can monitor the surface automatically, capturing frames at preset intervals. This produces time-lapse animations of dynamically changing surfaces. We apply this analysis tool to an experiment studying the behavior of underwater oil in response to flowing water of different speeds and temperatures. Blobs of viscous oils are placed in a flume apparatus, which circulates water past them. Over the course of a couple of hours, the oil blobs spread out, waves slowly ripple across their surfaces, and erosion occurs as smaller blobs break off from the main blob. All of this can be captured in 3D, with mm accuracy, through the water using the Kinect for Windows v2 sensor and our K2MapKit software.

  18. A Three-Dimensional Pore-Scale Model for Non-Wetting Phase Mobilization with Ferrofluid

    NASA Astrophysics Data System (ADS)

    Wang, N.; Prodanovic, M.

    2017-12-01

    Ferrofluid, a stable dispersion of paramagnetic nanoparticles in water, can generate a distributed pressure difference across the phase interface in an immiscible two-phase flow under an external magnetic field. In water-wet porous media, this non-uniform pressure difference may be used to mobilize the non-wetting phase, e.g. oil, trapped in the pores. Previous numerical work by Soares et al. on a two-dimensional single-pore model showed enhanced non-wetting phase recovery with water-based ferrofluid under certain magnetic field directions and decreased recovery under others. However, the magnetic field selectively concentrates in the high-magnetic-permeability ferrofluid which fills the small corners between the non-wetting phase and the solid wall. The magnetic-field-induced pressure is proportional to the square of the local magnetic field strength and its normal component, and it makes a significant impact on the non-wetting phase deformation. The two-dimensional model omitted the effect of most of these corners and is not sufficient to compute the magnetic-field-induced pressure difference or to predict the non-wetting blob deformation. Further, it is not clear that 3D effects on the magnetic field in an irregular geometry can be approximated in 2D. We present a three-dimensional immiscible two-phase flow model to simulate the deformation of a non-wetting liquid blob in a single pore filled with a ferrofluid under a uniform external magnetic field. The ferrofluid is modeled as a uniform single phase because the nanoparticles are 10^4 times smaller than the pore. The open-source CFD solver library OpenFOAM is used for the simulations, based on the volume of fluid method. Simulations are performed in a converging-diverging channel model for different magnetic field directions, initial oil saturations, and pore shapes. Results indicate that the external magnetic field always stretches the non-wetting blob away from the solid channel wall. A magnetic field transverse to the channel direction is likely to provide the best elongation along the channel direction for the non-wetting blob. The pore-throat size ratio has an impact on the deformation of the non-wetting blob.

  19. Hydrodynamically Coupled Brownian Dynamics: A coarse-grain particle-based Brownian dynamics technique with hydrodynamic interactions for modeling self-developing flow of polymer solutions

    NASA Astrophysics Data System (ADS)

    Ahuja, V. R.; van der Gucht, J.; Briels, W. J.

    2018-01-01

    We present a novel coarse-grain particle-based simulation technique for modeling self-developing flow of dilute and semi-dilute polymer solutions. The central idea in this paper is the two-way coupling between a mesoscopic polymer model and a phenomenological fluid model. As our polymer model, we choose Responsive Particle Dynamics (RaPiD), a Brownian dynamics method, which formulates the so-called "conservative" and "transient" pair-potentials through which the polymers interact besides experiencing random forces in accordance with the fluctuation dissipation theorem. In addition to these interactions, our polymer blobs are also influenced by the background solvent velocity field, which we calculate by solving the Navier-Stokes equation discretized on a moving grid of fluid blobs using the Smoothed Particle Hydrodynamics (SPH) technique. While the polymers experience this frictional force opposing their motion relative to the background flow field, our fluid blobs also in turn are influenced by the motion of the polymers through an interaction term. This makes our technique a two-way coupling algorithm. We have constructed this interaction term in such a way that momentum is conserved locally, thereby preserving long range hydrodynamics. Furthermore, we have derived pairwise fluctuation terms for the velocities of the fluid blobs using the Fokker-Planck equation, which have been alternatively derived using the General Equation for the Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) approach in Smoothed Dissipative Particle Dynamics (SDPD) literature. These velocity fluctuations for the fluid may be incorporated into the velocity updates for our fluid blobs to obtain a thermodynamically consistent distribution of velocities. In cases where these fluctuations are insignificant, however, these additional terms may well be dropped out as they are in a standard SPH simulation. We have applied our technique to study the rheology of two different concentrations of our model linear polymer solutions. The results show that the polymers and the fluid are coupled very well with each other, showing no lag between their velocities. Furthermore, our results show non-Newtonian shear thinning and the characteristic flattening of the Poiseuille flow profile typically observed for polymer solutions.

  20. Hydrodynamically Coupled Brownian Dynamics: A coarse-grain particle-based Brownian dynamics technique with hydrodynamic interactions for modeling self-developing flow of polymer solutions.

    PubMed

    Ahuja, V R; van der Gucht, J; Briels, W J

    2018-01-21

    We present a novel coarse-grain particle-based simulation technique for modeling self-developing flow of dilute and semi-dilute polymer solutions. The central idea in this paper is the two-way coupling between a mesoscopic polymer model and a phenomenological fluid model. As our polymer model, we choose Responsive Particle Dynamics (RaPiD), a Brownian dynamics method, which formulates the so-called "conservative" and "transient" pair-potentials through which the polymers interact besides experiencing random forces in accordance with the fluctuation dissipation theorem. In addition to these interactions, our polymer blobs are also influenced by the background solvent velocity field, which we calculate by solving the Navier-Stokes equation discretized on a moving grid of fluid blobs using the Smoothed Particle Hydrodynamics (SPH) technique. While the polymers experience this frictional force opposing their motion relative to the background flow field, our fluid blobs also in turn are influenced by the motion of the polymers through an interaction term. This makes our technique a two-way coupling algorithm. We have constructed this interaction term in such a way that momentum is conserved locally, thereby preserving long range hydrodynamics. Furthermore, we have derived pairwise fluctuation terms for the velocities of the fluid blobs using the Fokker-Planck equation, which have been alternatively derived using the General Equation for the Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) approach in Smoothed Dissipative Particle Dynamics (SDPD) literature. These velocity fluctuations for the fluid may be incorporated into the velocity updates for our fluid blobs to obtain a thermodynamically consistent distribution of velocities. In cases where these fluctuations are insignificant, however, these additional terms may well be dropped out as they are in a standard SPH simulation. We have applied our technique to study the rheology of two different concentrations of our model linear polymer solutions. The results show that the polymers and the fluid are coupled very well with each other, showing no lag between their velocities. Furthermore, our results show non-Newtonian shear thinning and the characteristic flattening of the Poiseuille flow profile typically observed for polymer solutions.

  1. Pore-scale Investigation of Surfactant Induced Mobilization for the Remediation of LNAPL

    NASA Astrophysics Data System (ADS)

    Ghosh, J.; Tick, G. R.

    2011-12-01

    The presence of nonaqueous phase liquids within the subsurface can significantly limit the effectiveness of groundwater remediation. Specifically, light nonaqueous phase liquids (LNAPLs) present unique challenges as they can become "smeared" within zones above and below the water table. The aim of this research is to understand the pore-scale interfacial phenomena influencing the residual saturation and distribution of LNAPL as a function of media heterogeneity and remediation processes in various aquifer systems. A series of columns were packed with three types of unconsolidated sand of increasing heterogeneity in grain size distribution and were established with residual saturations of light and heavy crude oil fractions, respectively. These columns were then subjected to flooding with a 0.1% anionic surfactant solution in various episodes to initiate mobilization and enhanced recovery of the NAPL phase contamination. Synchrotron X-ray microtomography (SXM) imaging was used to study the three-dimensional (3-D) distributions of crude-oil blobs before and after sequential surfactant flooding events. Results showed that the LNAPL blob distributions became more heterogeneous after each subsequent surfactant flooding episode for all porous-media systems. NAPL recovery was most effective from the homogeneous porous medium, where 100% recovery resulted after 5 pore volumes (PVs) of flushing. LNAPL within the mildly heterogeneous porous medium produced a limited but consistent reduction in saturation after each surfactant flooding episode (23% and 43% recovery for light and heavy fractions after the 5-PV flood). The highly heterogeneous porous medium showed greater NAPL recovery potential (42% and 16% for light and heavy) only after multiple pore volumes of flushing, at which point the NAPL blobs became fragmented into smaller pieces in response to the reduced interfacial tension. The heterogeneity of the porous media (i.e. grain-size distribution) was a dominant control on the NAPL blob-size distribution trapped as residual saturation. The mobility of the NAPL blobs, as a result of surfactant flooding, was primarily controlled by the relative permeability of the medium and the reduction of interfacial tension between the wetting phase (water) and the NAPL phase.

  2. Numerical studies of the Kelvin-Helmholtz instability in a coronal jet

    NASA Astrophysics Data System (ADS)

    Zhao, Tian-Le; Ni, Lei; Lin, Jun; Ziegler, Udo

    2018-04-01

    Kelvin-Helmholtz (K-H) instability in a coronal EUV jet is studied via 2.5D MHD numerical simulations. The jet results from magnetic reconnection due to the interaction of the newly emerging magnetic field with the pre-existing magnetic field in the corona. Our results show that the Alfvén Mach number along the jet is about 5–14 just before the instability occurs, and it is even higher than 14 in some local areas. During the K-H instability process, several vortex-like plasma blobs with high temperature and high density appear along the jet, the magnetic field is also rolled up, and a magnetic configuration including anti-parallel magnetic fields forms, which leads to magnetic reconnection at many X-points and current sheet fragments inside the vortex-like blob. After magnetic islands appear inside the main current sheet, the total kinetic energy of the reconnection outflows decreases and can no longer support the formation of the vortex-like blob along the jet, and the K-H instability eventually disappears. We also present results on how the guide field and the flux emergence speed affect the K-H instability. We find that a strong guide field inhibits shock formation in the reconnecting upward outflow regions but helps secondary magnetic islands appear earlier in the main current sheet, and then apparently suppresses the K-H instability. As the speed of the emerging magnetic field decreases, the K-H instability appears later, the highest temperature inside the vortex blob gets lower and the vortex structure gets smaller.

  3. Blob-hole correlation model for edge turbulence and comparisons with NSTX gas puff imaging data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myra, J. R.; Zweben, S. J.; Russell, D. A.

    We report that gas puff imaging (GPI) observations made in NSTX [Zweben S J, et al., 2017 Phys. Plasmas 24 102509] have revealed two-point spatial correlations of edge and scrape-off layer turbulence in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this paper, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in several respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored and discussed with respect to experimental observations. Additional analysis of the experimental GPI dataset is performed to further test this blob-hole correlation model. A time delay two-point spatial correlation study did not reveal inward propagation of the negative correlation structures that were postulated to correspond to holes in the data nor did it suggest that the negative correlation structures are due to neutral shadowing. However, tracking of the highest and lowest values (extrema) of the normalized GPI fluctuations shows strong evidence for mean inward propagation of minima and outward propagation of maxima, in qualitative agreement with theoretical expectations. Finally, other properties of the experimentally observed extrema are discussed.
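
    A two-point spatial correlation of the kind discussed above can be estimated as the zero-time-lag correlation between a reference pixel and every other pixel over a stack of fluctuation frames. The sketch below is a generic illustration of that statistic only (array and index names are hypothetical), not the blob-hole model calculation of the paper.

        # Minimal sketch: two-point spatial correlation map from a stack of 2-D
        # fluctuation frames `frames` with shape (n_time, ny, nx).
        import numpy as np

        def two_point_correlation(frames, ref_iy, ref_ix):
            fluct = frames - frames.mean(axis=0)           # remove the time mean per pixel
            ref = fluct[:, ref_iy, ref_ix]                 # reference-point time series
            cov = np.tensordot(ref, fluct, axes=([0], [0])) / frames.shape[0]
            norm = ref.std() * fluct.std(axis=0)
            return cov / norm                              # dipole patterns appear as +/- lobes

        # corr_map = two_point_correlation(gpi_frames, ref_iy=32, ref_ix=48)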

  4. Blob-hole correlation model for edge turbulence and comparisons with NSTX gas puff imaging data

    DOE PAGES

    Myra, J. R.; Zweben, S. J.; Russell, D. A.

    2018-05-15

    We report that gas puff imaging (GPI) observations made in NSTX [Zweben S J, et al., 2017 Phys. Plasmas 24 102509] have revealed two-point spatial correlations of edge and scrape-off layer turbulence in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this paper, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in several respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored and discussed with respect to experimental observations. Additional analysis of the experimental GPI dataset is performed to further test this blob-hole correlation model. A time delay two-point spatial correlation study did not reveal inward propagation of the negative correlation structures that were postulated to correspond to holes in the data nor did it suggest that the negative correlation structures are due to neutral shadowing. However, tracking of the highest and lowest values (extrema) of the normalized GPI fluctuations shows strong evidence for mean inward propagation of minima and outward propagation of maxima, in qualitative agreement with theoretical expectations. Finally, other properties of the experimentally observed extrema are discussed.

  5. Lymph node detection in IASLC-defined zones on PET/CT images

    NASA Astrophysics Data System (ADS)

    Song, Yihua; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2016-03-01

    Lymph node detection is challenging due to the low contrast between lymph nodes and surrounding soft tissues and to the variation in nodal size and shape. In this paper, we propose several novel ideas which are combined into a system that operates on positron emission tomography/computed tomography (PET/CT) images to detect abnormal thoracic nodes. First, our previous Automatic Anatomy Recognition (AAR) approach is modified so that lymph node zones predominantly following International Association for the Study of Lung Cancer (IASLC) specifications are modeled as objects arranged in a hierarchy along with key anatomic anchor objects. This fuzzy anatomy model, built from diagnostic CT images, is then deployed on PET/CT images for automatically recognizing the zones. A novel globular filter (g-filter) to detect blob-like objects over a specified range of sizes is designed to detect the most likely locations and sizes of diseased nodes. Abnormal nodes within each automatically localized zone are subsequently detected via combined use of different items of information at various scales: lymph node zone model poses found at recognition, indicating the geographic layout of node clusters at the global level; the g-filter response, which homes in on and carefully selects node-like globular objects at the node level; and CT and PET gray values, but only within the most plausible nodal regions, for node presence at the voxel level. The models are built from 25 diagnostic CT scans and refined for an object hierarchy based on a separate set of 20 diagnostic CT scans. Node detection is tested on an additional set of 20 PET/CT scans. Our preliminary results indicate node detection sensitivity and specificity of around 90% and 85%, respectively.
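
    The g-filter itself is the authors' custom design; as a rough stand-in, a generic multi-scale Laplacian-of-Gaussian blob detector illustrates the underlying idea of responding to blob-like objects over a specified size range. In the sketch below the image name and radius range are hypothetical.

        # Minimal sketch: multi-scale blob detection over a specified size range,
        # using a generic Laplacian-of-Gaussian detector as an illustration only.
        # `image_slice` is a hypothetical 2-D slice normalized to [0, 1].
        import numpy as np
        from skimage.feature import blob_log

        def detect_globular_candidates(image_slice, min_radius_px=3, max_radius_px=12):
            min_sigma = min_radius_px / np.sqrt(2)
            max_sigma = max_radius_px / np.sqrt(2)
            blobs = blob_log(image_slice, min_sigma=min_sigma, max_sigma=max_sigma,
                             num_sigma=10, threshold=0.1)
            # Each row is (row, col, sigma); for 2-D images radius ~ sigma * sqrt(2).
            centers, radii = blobs[:, :2], blobs[:, 2] * np.sqrt(2)
            return centers, radii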

  6. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in the responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
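
    The adaptive-threshold mechanism described above can be written as a first-order relaxation of the threshold toward a value set by the instantaneous membrane potential. The sketch below is a generic discretization of such a model; the parameter names and values are illustrative, not those fitted in the study.

        # Minimal sketch: a spike threshold that adapts toward the membrane potential
        # on a short timescale.  `v` is a membrane-potential trace (mV) sampled at dt (ms).
        import numpy as np

        def adaptive_threshold(v, dt=0.1, tau_ms=5.0, theta0=-50.0, alpha=0.3):
            # Steady-state threshold rises with the (mean-subtracted) membrane potential.
            theta_inf = theta0 + alpha * (v - v.mean())
            theta = np.empty_like(v)
            theta[0] = theta0
            for i in range(1, len(v)):
                # First-order relaxation: slow voltage fluctuations are tracked by the
                # threshold (and hence filtered out); fast depolarizations are not.
                theta[i] = theta[i - 1] + dt / tau_ms * (theta_inf[i] - theta[i - 1])
            spikes = np.flatnonzero((v[1:] >= theta[1:]) & (v[:-1] < theta[:-1])) + 1
            return theta, spikes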

  7. Three-dimensional Diffusive Strip Method

    NASA Astrophysics Data System (ADS)

    Martinez-Ruiz, Daniel; Meunier, Patrice; Duchemin, Laurent; Villermaux, Emmanuel

    2016-11-01

    The Diffusive Strip Method (DSM) is a near-exact numerical method developed for mixing computations at large Péclet number in two dimensions. The method, which consists in following stretched material lines to compute the resulting scalar field a posteriori, is extended here to three-dimensional flows by following surfaces. We describe its 3D peculiarities and show how it applies to a simple Taylor-Couette configuration with non-rotating boundary conditions at the top, bottom, and outer cylinder. This configuration produces an elaborate, although controlled, steady 3D flow relying on the Ekman pumping that arises from the rotation of the inner cylinder; it is both studied experimentally and modeled numerically. A recurrent two-cell structure appears, formed by stream tubes shaped as nested tori. A scalar blob in the flow experiences Lagrangian oscillating dynamics with stretchings and compressions, driving the mixing process and yielding both rapidly mixed and nearly pure-diffusive regions. A triangulated-surface method is developed to calculate the blob elongation and scalar concentration PDFs through a single variable computed along the advected blob surface, capturing the rich evolution observed in the experiments.

  8. Solute transport along preferential flow paths in unsaturated fractures

    USGS Publications Warehouse

    Su, Grace W.; Geller, Jil T.; Pruess, Karsten; Hunt, James R.

    2001-01-01

    Laboratory experiments were conducted to study solute transport along preferential flow paths in unsaturated, inclined fractures. Qualitative aspects of solute transport were identified in a miscible dye tracer experiment conducted in a transparent replica of a natural granite fracture. Additional experiments were conducted to measure the breakthrough curves of a conservative tracer introduced into an established preferential flow path in two different fracture replicas and a rock‐replica combination. The influence of gravity was investigated by varying fracture inclination. The relationship between the travel times of the solute and the relative influence of gravity was substantially affected by two modes of intermittent flow that occurred: the snapping rivulet and the pulsating blob modes. The measured travel times of the solute were evaluated with three transfer function models: the axial dispersion, the reactors‐in‐series, and the lognormal models. The three models described the solute travel times nearly equally well. A mechanistic model was also formulated to describe transport when the pulsating blob mode occurred which assumed blobs of water containing solute mixed with residual pools of water along the flow path.

  9. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    PubMed

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) in digital fundus images remains an open issue, due to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to the analysis of fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
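
    The semi-supervised step can be sketched with a generic self-training wrapper around a base classifier. This is only an illustration of learning from a few labeled and many unlabeled blob descriptors, not the authors' scheme; the feature and label arrays are hypothetical, with unlabeled rows carrying the label -1.

        # Minimal sketch: self-training from few labeled and many unlabeled blob
        # descriptors.  `features` is (n_samples, n_features); `labels` uses -1 for
        # unlabeled candidates and 0/1 for annotated non-MA/MA examples.
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.semi_supervised import SelfTrainingClassifier

        def train_ma_classifier(features, labels):
            base = RandomForestClassifier(n_estimators=200, random_state=0)
            # Unlabeled samples are pseudo-labeled when the predicted probability is high.
            model = SelfTrainingClassifier(base, threshold=0.9)
            model.fit(features, labels)
            return model

        # ma_probability = train_ma_classifier(X, y).predict_proba(X_new)[:, 1]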

  10. SU-F-J-93: Automated Segmentation of High-Resolution 3D WholeBrain Spectroscopic MRI for Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Shu, H; Cordova, J

    Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid-, and post-treatment, providing a broad range of enhancement patterns. Whereas heterogeneity in tumor appearance and shape posed a greater challenge to the algorithm in classical imaging, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in the standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to the standard CE MRI alone. Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.

  11. Discrimination of curvature from motion during smooth pursuit eye movements and fixation.

    PubMed

    Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2017-09-01

    Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.

  12. Wet particle source identification and reduction using a new filter cleaning process

    NASA Astrophysics Data System (ADS)

    Umeda, Toru; Morita, Akihiko; Shimizu, Hideki; Tsuzuki, Shuichi

    2014-03-01

    Wet particle reduction during filter installation and start-up aligns closely with initiatives to reduce both chemical consumption and preventative maintenance time. The present study focuses on the effect of filter material cleanliness on wet particle defectivity through evaluation of filters that have been treated with a new enhanced cleaning process focused on reducing organic compounds. Little difference in filter performance is observed between the two filter types at a size detection threshold of 60 nm, while clear differences are observed at 26 nm. This suggests that organic compounds are a potential source of wet particles. Pall recommends filters that have been treated with the special cleaning process for applications with a critical defect size of less than 60 nm. Standard filter products are capable of satisfying wet particle defect performance criteria in less critical lithography applications.

  13. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle this issue, an upper bound on the estimation error variance is established for each node via stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound with respect to the triggering threshold. Finally, a simulation example is presented to show the effectiveness of the established filter scheme.

  14. Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation

    PubMed Central

    Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed

    2012-01-01

    This paper presents a novel, real-time defect detection system, based on best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been implemented, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186
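
    The best-fit polynomial idea amounts to fitting a low-order polynomial to a scanned surface profile and flagging points whose residuals exceed a tolerance. The sketch below is a generic 1-D illustration (profile name, polynomial order, and tolerance are hypothetical), not the published system.

        # Minimal sketch: flag blob/curvature defects as large residuals from a
        # best-fit polynomial of a surface profile `z` sampled at positions `x`.
        import numpy as np

        def defect_mask(x, z, order=3, tol=0.5):
            coeffs = np.polyfit(x, z, order)          # least-squares best-fit polynomial
            residuals = z - np.polyval(coeffs, x)     # deviation from the ideal surface
            return np.abs(residuals) > tol            # True where a defect is suspected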

  15. Twisting Blob of Plasma

    NASA Image and Video Library

    2017-12-08

    A twisted blob of solar material – a hot, charged gas called plasma – can be seen erupting off the side of the sun on Sept. 26, 2014. The image is from NASA's Solar Dynamics Observatory, focusing in on ionized helium at 60,000 degrees C. Credit: NASA/SDO

  16. [Effect of transparent yellow and orange colored contact lenses on color discrimination in the yellow color range].

    PubMed

    Schürer, M; Walter, A; Brünner, H; Langenbucher, A

    2015-08-01

    Colored transparent filters cause a change in color perception and have an impact on the perceptible amount of different colors and especially on the ability to discriminate between them. Yellow or orange tinted contact lenses worn to enhance contrast vision by reducing or blocking short wavelengths also have an effect on color perception. The impact of the yellow and orange tinted contact lenses Wöhlk SPORT CONTRAST on color discrimination was investigated with the Erlangen colour measurement system in a study with 14 and 16 subjects, respectively. In relation to a yellow reference color located at u' = 0.2487/v' = 0.5433, measurements of color discrimination thresholds were taken in up to 6 different color coordinate axes. Based on these thresholds, color discrimination ellipses were calculated. These results are given in the Derrington, Krauskopf and Lennie (DKL) color system. Both contact lenses caused a shift of the reference color towards higher saturated colors. Color discrimination ability with the yellow and orange colored lenses was significantly enhanced along the blue-yellow axis in comparison to the reference measurements without a tinted filter. Along the red-green axis only the orange lens caused a significant reduction of color discrimination threshold distance to the reference color. Yellow and orange tinted contact lenses enhance the ability of color discrimination. If the transmission spectra and the induced changes are taken into account, these results can also be applied to other filter media, such as blue filter intraocular lenses.

  17. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is a necessary flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the entire 3D measurement volume, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time instants quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
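
    The threshold-selection logic described above can be sketched as: for each candidate threshold, reconstruct the particle field, reproject it onto the cameras, correlate with the captured images, fit a cubic to correlation versus threshold, and take the threshold at the fitted maximum. In the sketch below the `reconstruct` and `reproject` functions are hypothetical placeholders for the SAPIV reconstruction and projection steps.

        # Minimal sketch of adaptive threshold selection by maximizing a cubic fit
        # of the correlation coefficient versus threshold.
        import numpy as np

        def select_threshold(refocused_volume, camera_images, candidate_thresholds,
                             reconstruct, reproject):
            corr = []
            for t in candidate_thresholds:
                particles = reconstruct(refocused_volume, threshold=t)
                projections = reproject(particles)
                # Mean correlation between projected and captured images.
                corr.append(np.mean([np.corrcoef(p.ravel(), c.ravel())[0, 1]
                                     for p, c in zip(projections, camera_images)]))
            coeffs = np.polyfit(candidate_thresholds, corr, 3)   # cubic curve fit
            fine = np.linspace(min(candidate_thresholds), max(candidate_thresholds), 1000)
            return fine[np.argmax(np.polyval(coeffs, fine))]     # threshold at the maximum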

  18. Towards an unbiased filter routine to determine precipitation and evapotranspiration from high precision lysimeter measurements

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Groh, Jannis; Schrader, Frederik; Durner, Wolfgang; Vereecken, Harry; Pütz, Thomas

    2017-06-01

    Weighing lysimeters are considered to be the best means for a precise measurement of water fluxes at the interface between the soil-plant system and the atmosphere. Any decrease of the net mass of the lysimeter can be interpreted as evapotranspiration (ET), any increase as precipitation (P). However, the measured raw data need to be filtered to separate real mass changes from noise. Such filter routines typically apply two steps: (i) a low-pass filter, like a moving average, which smooths noisy data, and (ii) a threshold filter that separates significant from insignificant mass changes. Recent developments of these filters have identified and solved some problems regarding bias in the data processing. A remaining problem is that each change in flow direction is accompanied by a systematic flow underestimation due to the threshold scheme. In this contribution, we analyze this systematic effect and show that the absolute underestimation is independent of the magnitude of a flux event. Thus, for small events, like dew or rime formation, the relative error is high and can reach the same magnitude as the flux itself. We develop a heuristic solution to the problem by introducing a so-called "snap routine". The routine is calibrated and tested with synthetic flux data and applied to real measurements obtained with a precision lysimeter over a 10-month period. The heuristic snap routine effectively overcomes these problems and yields an almost unbiased representation of the real signal.
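
    The two-step scheme being discussed (smoothing followed by a threshold on mass changes) can be sketched as below. This is a generic illustration of the baseline filter whose underestimation at flow reversals is analyzed above, not the snap routine itself; the window length and threshold are hypothetical.

        # Minimal sketch of the two-step lysimeter filter: moving-average smoothing,
        # then a threshold that passes only significant mass changes.  `mass` is the
        # lysimeter net-mass series (kg); increases are attributed to P, decreases to ET.
        import numpy as np

        def split_p_et(mass, window=5, threshold_kg=0.01):
            kernel = np.ones(window) / window
            smooth = np.convolve(mass, kernel, mode="same")   # low-pass step
            dm = np.diff(smooth)
            dm[np.abs(dm) < threshold_kg] = 0.0               # threshold step
            precipitation = np.where(dm > 0, dm, 0.0)
            evapotranspiration = np.where(dm < 0, -dm, 0.0)
            return precipitation, evapotranspiration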

  19. Stimulus and recording variables and their effects on mammalian vestibular evoked potentials

    NASA Technical Reports Server (NTRS)

    Jones, Sherri M.; Subramanian, Geetha; Avniel, Wilma; Guo, Yuqing; Burkard, Robert F.; Jones, Timothy A.

    2002-01-01

    Linear vestibular evoked potentials (VsEPs) measure the collective neural activity of the gravity receptor organs in the inner ear that respond to linear acceleration transients. The present study examined the effects of electrode placement, analog filtering, stimulus polarity and stimulus rate on linear VsEP thresholds, latencies and amplitudes recorded from mice. Two electrode-recording montages were evaluated, rostral (forebrain) to 'mastoid' and caudal (cerebellum) to 'mastoid'. VsEP thresholds and peak latencies were identical between the two recording sites; however, peak amplitudes were larger for the caudal recording montage. VsEPs were also affected by filtering. Results suggest optimum high pass filter cutoff at 100-300 Hz, and low pass filter cutoff at 10,000 Hz. To evaluate stimulus rate, linear jerk pulses were presented at 9.2, 16, 25, 40 and 80 Hz. At 80 Hz, mean latencies were longer (0.350-0.450 ms) and mean amplitudes reduced (0.8-1.8 microV) for all response peaks. In 50% of animals, late peaks (P3, N3) disappeared at 80 Hz. The results offer options for VsEP recording protocols. Copyright 2002 Elsevier Science B.V.

  20. Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems

    DOE PAGES

    Najm, Habib N.; Valorani, Mauro

    2014-04-12

    We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on-the-fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.

  1. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; the function is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal threshold estimate for each decomposition level of the SWT is obtained. Experimental results for a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise from the chaotic signal well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
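
    As a rough stand-in for the SWT-based scheme (which would require a synchrosqueezing implementation), the sketch below applies level-wise soft thresholding to an ordinary discrete wavelet decomposition with PyWavelets. The per-level thresholds follow the common noise-scaled universal rule, not the optimal values derived in the paper.

        # Minimal sketch: hierarchical (per-level) soft thresholding of a noisy 1-D
        # signal, using an ordinary DWT in place of the synchrosqueezed transform.
        import numpy as np
        import pywt

        def hierarchical_threshold_denoise(signal, wavelet="db4", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            denoised = [coeffs[0]]                     # keep the approximation coefficients
            for detail in coeffs[1:]:
                # Level-dependent universal threshold from a robust noise estimate.
                sigma = np.median(np.abs(detail)) / 0.6745
                thr = sigma * np.sqrt(2.0 * np.log(len(detail)))
                denoised.append(pywt.threshold(detail, thr, mode="soft"))
            return pywt.waverec(denoised, wavelet)[:len(signal)]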

  2. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second-order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3 sec, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3 sec, 0.2) process, in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g., level-crossing counters, were also examined and were found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.

  3. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    PubMed Central

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
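
    The send-on-delta rule itself is simple: a sample is transmitted only when it differs from the last transmitted value by more than a given threshold. The sketch below illustrates that scheduling rule only (names and the threshold value are illustrative); the modified fault isolation filter built on top of it is not reproduced here.

        # Minimal sketch of send-on-delta (event-triggered) sampling: a sensor value
        # is sent to the filter only when it has moved by more than `delta` since the
        # last transmission.
        def send_on_delta(samples, delta):
            transmitted = []                  # (time index, value) pairs actually sent
            last_sent = None
            for k, y in enumerate(samples):
                if last_sent is None or abs(y - last_sent) > delta:
                    transmitted.append((k, y))
                    last_sent = y
            return transmitted

        # events = send_on_delta(sensor_trace, delta=0.05)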

  4. [Investigation of fast filter of ECG signals with lifting wavelet and smooth filter].

    PubMed

    Li, Xuefei; Mao, Yuxing; He, Wei; Yang, Fan; Zhou, Liang

    2008-02-01

    The lifting wavelet is used to decompose the original ECG signals into low-frequency approximation signals and high-frequency detail signals according to their frequency characteristics. Part of the detail signals is discarded on the basis of these frequency characteristics. To avoid distortion of the QRS complexes, the approximation signals are filtered by an adaptive smoothing filter with a proper threshold value. Through the inverse lifting wavelet transform, the retained approximation signals are reconstructed, and the three primary kinds of noise are effectively suppressed. In addition, the method is fast, with no time delay between input and output.

  5. Laser designator protection filter for see-spot thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe

    2012-06-01

    In some cases the FLIR has an open window in the 1.06 micrometer wavelength range; this capability is called 'see spot' and allows seeing a laser designator spot using the FLIR. A problem arises when the returned laser energy is too high for the camera sensitivity and can therefore damage the sensor. We propose a non-linear, solid-state dynamic filter solution that protects against damage passively. Our filter blocks transmission only if the power exceeds a certain threshold, as opposed to spectral filters that block a certain wavelength permanently. In this paper we introduce the Wideband Laser Protection Filter (WPF) solution for thermal imaging systems possessing the ability to see the laser spot.

  6. Dynamic Scaling Theory of the Forced Translocation of a Semi-flexible Polymer Through a Nanopore

    NASA Astrophysics Data System (ADS)

    Lam, Pui-Man; Zhen, Yi

    2015-10-01

    We present a theoretical description of the dynamics of a semi-flexible polymer being pulled through a nanopore by an external force acting at the pore. Our theory is based on the tensile blob picture of Pincus, in which the front of the tensile force propagates through the backbone of the polymer, as suggested by Sakaue and recently applied by Dubbeldam et al. to a completely flexible polymer with self-avoidance. For a semi-flexible polymer with a persistence length P, the statistics of a very long chain is self-avoiding. As the local force increases, the blob size starts to decrease. At a characteristic blob size involving the monomer size a, the statistics becomes that of an ideal chain. As the blob size further decreases to below the persistence length P, the statistics is that of a rigid rod. We argue that the translocation of a semi-flexible polymer should therefore involve three regions, a self-avoiding region, an ideal-chain region, and a rigid-rod region, under uneven tension propagation, instead of the uniform scaling picture that holds for a completely flexible polymer. In the regimes of weak, intermediate, and strong driving forces we derive equations from which the translocation time of the polymer can be calculated. The translocation exponent is expressed in terms of an effective exponent for the end-to-end distance of the semi-flexible polymer, whose value lies between 1/2 and 3/5 depending on the total contour length of the polymer. Our results are of relevance for the forced translocation of biological polymers such as DNA through a nanopore.

  7. An hourglass model for the flare of HST-1 in M87

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wen-Po; Zhao, Guang-Yao; Chen, Yong Jun

    To explain the multi-wavelength light curves (from radio to X-ray) of HST-1 in the M87 jet, we propose an hourglass model that is a modified two-zone system of Tavecchio and Ghisellini (hereafter TG08): a slow hourglass-shaped or Laval-nozzle-shaped layer connected by two revolving exponential surfaces surrounding a fast spine through which plasma blobs flow. Based on the conservation of magnetic flux, the magnetic field changes along the axis of the hourglass. We adopt the result of TG08—the high-energy emission from GeV to TeV can be produced through inverse Compton by the two-zone system, and the photons from radio to X-ray are mainly radiated by the fast inner zone system. Here, we only discuss the light curves of the fast inner blob from radio to X-ray. When a compressible blob travels down the axis of the first bulb in the hourglass, because of magnetic flux conservation, its cross section experiences an adiabatic compression process, which results in particle acceleration and the brightening of HST-1. When the blob moves into the second bulb of the hourglass, because of magnetic flux conservation, the dimming of the knot occurs along with an adiabatic expansion of its cross section. A similar broken exponential function could fit the TeV peaks in M87, which may imply a correlation between the TeV flares of M87 and the light curves from radio to X-ray in HST-1. The Very Large Array (VLA) 22 GHz radio light curve of HST-1 verifies our prediction based on the model fit to the main peak of the VLA 15 GHz radio one.

  8. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well in a realistic example based on an upcoming highly elliptical orbit formation flying mission.
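
    A sequential probability ratio test accumulates the log-likelihood ratio of the two hypotheses and compares it with thresholds set by the allowed false-alarm and missed-detection risks. The sketch below is the textbook Wald test for two Gaussian hypotheses, shown only to illustrate the decision logic; it is not the constrained filter-bank formulation of the paper, and all names and parameters are illustrative.

        # Minimal sketch: Wald sequential probability ratio test between two Gaussian
        # hypotheses for a stream of scalar statistics (e.g., an estimated miss-distance
        # metric).  alpha = allowed false-alarm risk, beta = allowed missed-detection risk.
        import numpy as np

        def sprt(measurements, mu0, mu1, sigma, alpha=0.01, beta=0.01):
            upper = np.log((1.0 - beta) / alpha)   # cross it: accept H1 (e.g., maneuver)
            lower = np.log(beta / (1.0 - alpha))   # cross it: accept H0 (no maneuver)
            llr = 0.0
            for k, z in enumerate(measurements):
                # Log-likelihood ratio increment for N(mu1, sigma) vs. N(mu0, sigma).
                llr += ((z - mu0) ** 2 - (z - mu1) ** 2) / (2.0 * sigma ** 2)
                if llr >= upper:
                    return "H1", k
                if llr <= lower:
                    return "H0", k
            return "undecided", len(measurements)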

  9. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
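
    The difference between the three interpolation schemes can be illustrated directly with SciPy: a step ("previous-value"), a linear, and a cubic-spline interpolant through the significant mass-change points retained by the threshold filter. The array names below are hypothetical, and the heuristic that assigns strong precipitation events to the step scheme is omitted.

        # Minimal sketch: step, linear, and spline interpolation between significant
        # lysimeter mass changes.  `t_sig`, `m_sig` are times and masses of the points
        # kept by the threshold filter; `t_out` is the desired output time grid.
        from scipy.interpolate import interp1d, CubicSpline

        def interpolate_mass(t_sig, m_sig, t_out, scheme="spline"):
            if scheme == "step":
                f = interp1d(t_sig, m_sig, kind="previous",
                             bounds_error=False, fill_value=(m_sig[0], m_sig[-1]))
            elif scheme == "linear":
                f = interp1d(t_sig, m_sig, kind="linear",
                             bounds_error=False, fill_value=(m_sig[0], m_sig[-1]))
            else:
                f = CubicSpline(t_sig, m_sig)   # continuously differentiable mass curve
            return f(t_out)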

  10. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.

    PubMed

    Alkhateeb, Abedalrhman; Rueda, Luis

    2017-08-01

    Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts. Preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq quantifies the complexity of a sequence by counting the number of unique k-mers it contains as its score, and also takes into account other factors such as ambiguous nucleotides or a high GC-content percentage in k-mers. Based on a z-score threshold, Zseq then sweeps through the sequences again and filters out those with a z-score less than the user-defined threshold. The Zseq algorithm provides a better mapping rate and reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better ability to discriminate cancer from normal samples in comparison with another state-of-the-art method. Moreover, de novo assembled transcripts from the reads filtered by Zseq have longer genomic sequences than those of other tested methods. A method for estimating the cutoff threshold using labeling rules is also introduced, with promising results.
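
    The core scoring step described above can be sketched as: score each read by its number of unique k-mers, standardize the scores to z-scores, and keep reads above a user-defined cutoff. The value of k and the cutoff below are illustrative, and the additional penalties for ambiguous bases and GC-rich k-mers are omitted.

        # Minimal sketch of complexity-based read filtering: unique k-mer count per
        # read, z-score standardization, and a user-defined cutoff.
        import numpy as np

        def unique_kmer_count(read, k=8):
            return len({read[i:i + k] for i in range(len(read) - k + 1)})

        def filter_reads(reads, k=8, z_cutoff=-1.0):
            scores = np.array([unique_kmer_count(r, k) for r in reads], dtype=float)
            z = (scores - scores.mean()) / scores.std()
            return [r for r, zi in zip(reads, z) if zi >= z_cutoff]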

  11. An improved method to set significance thresholds for β diversity testing in microbial community comparisons.

    PubMed

    Gülay, Arda; Smets, Barth F

    2015-09-01

    Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.

  12. Galactic Pile-Up (Artist Concept)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This artist's concept illustrates one of the largest smash-ups of galaxies ever observed. NASA's Spitzer Space Telescope spotted the four galaxies shown here (yellow blobs) in the process of tangling and ultimately merging into a single gargantuan galaxy. Though the galaxies appear to be fairly intact, gravitational disturbances have caused them to distort and twist, flinging stars (white dots) everywhere like sand. Other nearby galaxies can be seen as small, bluish blobs.

    The so-called 'quadruple merger' is the largest known merger between galaxies of a similar size. While three of the galaxies are about the size of our Milky Way, the fourth (center of image) is three times as big. All four of the galaxies are blob-shaped ellipticals instead of spirals like the Milky Way.

    The plume shown emanating from the biggest galaxy contains billions of stray stars -- almost three times as many as are in the Milky Way -- kicked out during the merger. About half of the stars in the plume will fall back and join the new galaxy, making it one of the biggest galaxies in the universe.

    The quadruple merger is part of a giant galaxy cluster, called CL0958+4702, located nearly five billion light-years away.

  13. Neurochemical responses to chromatic and achromatic stimuli in the human visual cortex.

    PubMed

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; Eberly, Lynn E; Deelchand, Dinesh K; Barreto, Felipe R; Mangia, Silvia

    2018-02-01

    In the present study, we aimed at determining the metabolic responses of the human visual cortex during the presentation of chromatic and achromatic stimuli, known to preferentially activate two separate clusters of neuronal populations (called "blobs" and "interblobs") with distinct sensitivity to color or luminance features. Since blobs and interblobs have different cytochrome-oxidase (COX) content and micro-vascularization levels (i.e., different capacities for glucose oxidation), different functional metabolic responses during chromatic vs. achromatic stimuli may be expected. The stimuli were optimized to evoke a similar load of neuronal activation as measured by the blood oxygenation level dependent (BOLD) contrast. Metabolic responses were assessed using functional 1H MRS at 7 T in 12 subjects. During both chromatic and achromatic stimuli, we observed the typical increases in glutamate and lactate concentration, and decreases in aspartate and glucose concentration, that are indicative of increased glucose oxidation. However, within the detection sensitivity limits, we did not observe any difference between the metabolic responses elicited by chromatic and achromatic stimuli. We conclude that the higher energy demands of activated blobs and interblobs are supported by similar increases in oxidative metabolism despite the different capacities of these neuronal populations.

  14. Discovery of Ram-pressure Stripped Gas around an Elliptical Galaxy in Abell 2670

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheen, Yun-Kyeong; Kim, Minjin; Smith, Rory

    Studies of cluster galaxies are increasingly finding galaxies with spectacular one-sided tails of gas and young stars, suggestive of intense ram-pressure stripping. These so-called “jellyfish” galaxies typically have late-type morphology. In this paper, we present Multi Unit Spectroscopic Explorer (MUSE) observations of an elliptical galaxy in Abell 2670 with long tails of material visible in the optical spectra, as well as blobs with tadpole-like morphology. The spectra in the central part of the galaxy reveal a stellar component as well as ionized gas. The stellar component does not have significant rotation, while the ionized gas defines a clear star-forming gas disk. We argue, based on deep optical images of the galaxy, that the gas was most likely acquired during a past wet merger. It is possible that the star-forming blobs are also remnants of the merger. In addition, the direction and kinematics of the one-sided ionized tails, combined with the tadpole morphology of the star-forming blobs, strongly suggests that the system is undergoing ram pressure from the intracluster medium. In summary, this paper presents the discovery of a post-merger elliptical galaxy undergoing ram-pressure stripping.

  15. Characteristics of low-latitude ionospheric depletions and enhancements during solar minimum

    NASA Astrophysics Data System (ADS)

    Haaser, R. A.; Earle, G. D.; Heelis, R. A.; Klenzing, J.; Stoneback, R.; Coley, W. R.; Burrell, A. G.

    2012-10-01

    Under the waning solar minimum conditions during 2009 and 2010, the Ion Velocity Meter, part of the Coupled Ion Neutral Dynamics Investigation aboard the Communication/Navigation Outage Forecasting System satellite, is used to measure in situ nighttime ion densities and drifts at altitudes between 400 and 550 km during the hours 21:00-03:00 solar local time. A new approach to detecting and classifying well-formed ionospheric plasma depletions and enhancements (bubbles and blobs) with scale sizes between 50 and 500 km is used to develop geophysical statistics for the summer, winter, and equinox seasons during the quiet solar conditions. Some diurnal and seasonal geomagnetic distribution characteristics confirm previous work on equatorial irregularities and scintillations, while other elements reveal new behaviors that will require further investigation before they may be fully understood. Events identified in the study reveal very different and often opposite behaviors of bubbles and blobs during solar minimum. In particular, more bubbles demonstrating deeper density fluctuations and faster perturbation plasma drifts typically occur earlier near the magnetic equator, while blobs of similar magnitude occur more often far away from the geomagnetic equator closer to midnight.

  16. Density Limit due to SOL Convection

    NASA Astrophysics Data System (ADS)

    D'Ippolito, D. A.; Myra, J. R.; Russell, D. A.

    2004-11-01

    Recent measurements on C-Mod [M. Greenwald, Plasma Phys. Contr. Fusion 44, R27 (2002)] suggest there is a density limit due to rapid convection in the SOL: this region starts in the far SOL but expands inward to the separatrix as the density approaches the Greenwald limit. This idea is supported by a recent analysis [D. A. Russell et al., Lodestar Report LRC-04-99 (2004)] of a 3D BOUT code turbulence simulation [X. Q. Xu et al., Bull. APS 48, 184 (2003), paper KP1-20] with neutral fueling of the X-point region. Our work suggests that rapid outward convection of plasma by turbulent coherent structures ("blobs") occurs when the X-point collisionality is sufficiently large. Here, we calculate a density limit due to loss of thermal equilibrium in the edge plasma caused by rapid radial convective heat transport. We expect a synergistic effect between blob convection and X-point cooling. The cooling increases the parallel resistivity at the X-point, "disconnects" the blobs electrically from the sheaths, and increases their radial velocity [D. A. D'Ippolito et al., 2004 Sherwood Meeting, paper 1C43], which in turn further cools the X-points. Progress on a theoretical model will be reported.

  17. Discovery of Ram-pressure Stripped Gas around an Elliptical Galaxy in Abell 2670

    NASA Astrophysics Data System (ADS)

    Sheen, Yun-Kyeong; Smith, Rory; Jaffé, Yara; Kim, Minjin; Yi, Sukyoung K.; Duc, Pierre-Alain; Nantais, Julie; Candlish, Graeme; Demarco, Ricardo; Treister, Ezequiel

    2017-05-01

    Studies of cluster galaxies are increasingly finding galaxies with spectacular one-sided tails of gas and young stars, suggestive of intense ram-pressure stripping. These so-called “jellyfish” galaxies typically have late-type morphology. In this paper, we present Multi Unit Spectroscopic Explorer (MUSE) observations of an elliptical galaxy in Abell 2670 with long tails of material visible in the optical spectra, as well as blobs with tadpole-like morphology. The spectra in the central part of the galaxy reveal a stellar component as well as ionized gas. The stellar component does not have significant rotation, while the ionized gas defines a clear star-forming gas disk. We argue, based on deep optical images of the galaxy, that the gas was most likely acquired during a past wet merger. It is possible that the star-forming blobs are also remnants of the merger. In addition, the direction and kinematics of the one-sided ionized tails, combined with the tadpole morphology of the star-forming blobs, strongly suggests that the system is undergoing ram pressure from the intracluster medium. In summary, this paper presents the discovery of a post-merger elliptical galaxy undergoing ram-pressure stripping.

  18. Modifications to intermittent turbulent structures by sheared flow in LAPD

    NASA Astrophysics Data System (ADS)

    Rossi, Giovanni; Schaffner, David; Carter, Troy; Guice, Danny; Bengtson, Roger

    2012-10-01

    Turbulence in the edge of the Large Plasma Device is generally observed to be intermittent, with the production of filamentary structures. Density-enhancement events (called "blobs") are localized to the region radially outside the edge of the cathode source, while density-depletion events (called "holes") are localized to the region radially inward. A flow-shear layer is also observed to be localized to this same spatial region. Control over the edge flow and shear in LAPD is now possible using a biasable limiter. Edge intermittency is observed to be strongly affected by variations in the edge flow, with intermittency (as measured by the skewness of the fluctuation amplitude PDF) increasing with edge flow (in either direction) and reaching a minimum when spontaneous edge flow is zeroed out using biasing. This trend is counter to the observed changes in turbulent particle flux, which peaks at low flow/shear. Two-dimensional cross-conditional averaging confirms the blobs to be detached filamentary structures with a clear dipolar potential structure and a geometry also dependent on the magnitude of the sheared flow. More detailed measurements are made to connect the occurrence of these blobs to observed flow-driven coherent modes and their contribution to radial particle flux.

  19. Cold blobs of protons in Jupiter's outer magnetosphere as observed by Juno's JADE

    NASA Astrophysics Data System (ADS)

    Wilson, R. J.; Bagenal, F.; Valek, P. W.; Allegrini, F.; Angold, N. G.; Chae, K.; Ebert, R. W.; Kim, T. K. H.; Loeffler, C.; Louarn, P.; McComas, D. J.; Pollock, C. J.; Ranquist, D. A.; Reno, C.; Szalay, J. R.; Thomsen, M. F.; Weidner, S.; Bolton, S. J.; Levin, S.

    2017-12-01

    Juno's 53-day polar orbits cut through the equatorial plane inbound to perijove. The JADE instrument has been observing thermal ions (0.01-50 keV/q) and electrons (0.1-100 keV/q) in these regions since Orbit 05. Even at distances greater than 70 RJ, magnetodisk crossings are clear, with high count rates measured before returning to rarefied plasma conditions outside the disk. However, JADE's detectors observe regions of slightly greater ion counts that last for about an hour. The ion counts are too low to analyze at the typical 30 s or 60 s low-rate instrument cadence, but by summing to 10-minute resolution the features become analyzable. We find these regions are populated with protons at higher density than typically observed outside the magnetodisk, and that they are colder than the ambient plasma. Reanalysis of Voyager data (DOI: 10.1002/2017JA024053) also showed cold dense blobs of plasma in the inner to middle magnetosphere; however, these were of heavier ion species, short lived (several minutes), and within 40 RJ of Jupiter. This presentation will investigate the JADE-identified cold blobs observed to date and compare them with those observed by Voyager.

  20. Hemispheric Asymmetry in Transition from Equatorial Plasma Bubble to Blob as Deduced from 630.0 nm Airglow Observations at Low Latitudes

    NASA Technical Reports Server (NTRS)

    Park, Jaeheung; Martinis, Carlos R.; Luehr, Hermann; Pfaff, Robert F.; Kwak, Young-Sil

    2016-01-01

    Transitions from depletions to enhancements of 630.0 nm nighttime airglow have been observed at Arecibo. Numerical simulations by Krall et al. (2009) predicted that they should occur only in one hemisphere, which has not yet been confirmed observationally. In this study we investigate the hemispheric conjugacy of the depletion-to-enhancement transition using multiple instruments. We focus on one event observed in the American longitude sector on 22 December 2014: 630.0 nm airglow depletions evolved into enhancements in the Northern Hemisphere while the evolution did not occur in the conjugate location in the Southern Hemisphere. Concurrent plasma density measured by low Earth orbit (LEO) satellites and 777.4 nm airglow images support that the depletions and enhancements of 630.0 nm night time airglow reflect plasma density decreases and increases (blobs), respectively. Characteristics of the airglow depletions, in the context of the LEO satellite data, further suggest that the plasma density depletion deduced from the airglow data represents equatorial plasma bubbles (EPBs) rather than medium-scale traveling ionospheric disturbances from midlatitudes. Hence, the event in this study can be interpreted as EPB-to-blob transition.

  1. Sound frequency affects speech emotion perception: results from congenital amusia

    PubMed Central

    Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718

  2. Laser damage threshold measurements of microstructure-based high reflectors

    NASA Astrophysics Data System (ADS)

    Hobbs, Douglas S.

    2008-10-01

    In 2007, the pulsed laser induced damage threshold (LIDT) of anti-reflecting (AR) microstructures built in fused silica and glass was shown to be up to three times greater than the LIDT of single-layer thin-film AR coatings, and at least five times greater than that of multiple-layer thin-film AR coatings. This result suggested that microstructure-based wavelength-selective mirrors might also exhibit a high LIDT. Efficient light reflection over a narrow spectral range can be produced by an array of sub-wavelength sized surface relief microstructures built in a waveguide configuration. Such surface structure resonant (SSR) filters typically achieve a reflectivity exceeding 99% over a 1-10nm range about the filter center wavelength, making SSR filters useful as laser high reflectors (HR). SSR laser mirrors consist of microstructures that are first etched in the surface of fused silica and borosilicate glass windows and subsequently coated with a thin layer of a non-absorbing high refractive index dielectric material such as tantalum pentoxide or zinc sulfide. Results of an initial investigation into the LIDT of single-layer SSR laser mirrors operating at 532nm, 1064nm and 1573nm are described, along with data from SEM analysis of the microstructures and spectral reflection measurements. None of the twelve samples tested exhibited damage thresholds above 3 J/cm2 when illuminated at the resonant wavelength, indicating that the simple single-layer, first-order design will need further development to be suitable for high power laser applications. Samples of SSR high reflectors entered in the Thin Film Damage Competition also exhibited low damage thresholds of less than 1 J/cm2 for the ZnS-coated SSR, and just over 4 J/cm2 for the Ta2O5-coated SSR.

  3. Perceptual precision of passive body tilt is consistent with statistically optimal cue integration

    PubMed Central

    Karmali, Faisal; Nicoucar, Keyvan; Merfeld, Daniel M.

    2017-01-01

    When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model. PMID:28179477
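
    As a concrete reference point for the maximum-likelihood baseline discussed above, the short Python sketch below computes the ML-predicted combined threshold from two single-cue thresholds under the standard inverse-variance rule. The numerical values are purely illustrative assumptions, not thresholds from the study.

```python
import numpy as np

def ml_combined_threshold(sigma_a, sigma_b):
    """Maximum-likelihood (inverse-variance) combination of two independent
    noisy cues: 1/sigma_c**2 = 1/sigma_a**2 + 1/sigma_b**2."""
    return np.sqrt((sigma_a**2 * sigma_b**2) / (sigma_a**2 + sigma_b**2))

# Hypothetical single-cue thresholds (deg) at one frequency -- illustrative
# numbers only, not values from the paper.
sigma_tilt, sigma_rotation = 2.0, 3.0
print(ml_combined_threshold(sigma_tilt, sigma_rotation))  # lower than either single-cue threshold
```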

  4. Morphological filtering and multiresolution fusion for mammographic microcalcification detection

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

    1997-04-01

    Mammographic images are often of relatively low contrast and poor sharpness with non-stationary background or clutter and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form called the local operating transformation for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered as non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transforms to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region-growing techniques are used at each resolution level. The parent-child relations between resolution levels are adopted to make the final detection decision. FROC curves are computed from tests on the Nijmegen database.

  5. An ultra-low-power filtering technique for biomedical applications.

    PubMed

    Zhang, Tan-Tan; Mak, Pui-In; Vai, Mang-I; Mak, Peng-Un; Wan, Feng; Martins, R P

    2011-01-01

    This paper describes an ultra-low-power filtering technique for biomedical applications, designated for T-wave sensing in heart-activity detection systems. The topology is based on a source-follower-based Biquad operating in the sub-threshold region. With the intrinsic advantages of simplicity and high linearity of the source-follower, ultra-low-cutoff filtering can be achieved simultaneously with ultra-low power consumption and good linearity. An 8th-order 2.4-Hz lowpass filter design example, optimized in a 0.35-μm CMOS process, achieves over 85-dB dynamic range and 74-dB stopband attenuation while consuming only 0.36 nW from a 3-V supply.

  6. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option in estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  7. A median-Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image.

    PubMed

    Wei, Zhouping; Wang, Jian; Nichol, Helen; Wiebe, Sheldon; Chapman, Dean

    2012-02-01

    Moiré pattern noise in Scanning Transmission X-ray Microscopy (STXM) imaging introduces significant errors in qualitative and quantitative image analysis. Due to the complex origin of the noise, it is difficult to avoid Moiré pattern noise during the image data acquisition stage. In this paper, we introduce a post-processing method for filtering Moiré pattern noise from STXM images. This method includes a semi-automatic detection of the spectral peaks in the Fourier amplitude spectrum using a local median filter, and elimination of the spectral noise peaks using a Gaussian notch filter. The proposed median-Gaussian filtering framework shows good results for STXM images whose dimensions are powers of two, provided that parameters such as the threshold, the sizes of the median and Gaussian filters, and the size of the low-frequency window have been properly selected. Copyright © 2011 Elsevier Ltd. All rights reserved.
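
    The following Python sketch illustrates the general median-detection plus Gaussian-notch idea described above, assuming a simple peak criterion (amplitude exceeding a multiple of the local median) and a square protected low-frequency window; the function name and all parameter defaults are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_periodic_noise(image, median_size=15, peak_factor=3.0,
                          notch_sigma=2.0, dc_window=20):
    """Sketch of median-detection + Gaussian-notch filtering of periodic noise."""
    F = np.fft.fftshift(np.fft.fft2(image))
    amp = np.abs(F)
    background = median_filter(amp, size=median_size)
    # Candidate noise peaks: amplitude well above the local median estimate.
    peaks = amp > peak_factor * background
    # Protect the low-frequency window around DC (genuine image content).
    cy, cx = np.array(amp.shape) // 2
    peaks[cy - dc_window:cy + dc_window, cx - dc_window:cx + dc_window] = False
    # Multiplicative mask with a Gaussian notch at every detected peak
    # (simple loop; fine for the modest number of peaks expected here).
    mask = np.ones_like(amp)
    yy, xx = np.indices(amp.shape)
    for py, px in zip(*np.nonzero(peaks)):
        mask *= 1.0 - np.exp(-((yy - py)**2 + (xx - px)**2) / (2 * notch_sigma**2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```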

  8. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and the other filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well in a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
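
    For readers unfamiliar with sequential probability ratio tests, the sketch below shows a plain Wald SPRT in Python, illustrating only how the false-alarm and missed-detection risks set the two decision thresholds; it is not the constrained epoch-state Kalman filter bank formulation described above.

```python
import math

def sprt_decision(log_likelihood_ratios, p_false_alarm=0.01, p_missed=0.01):
    """Generic Wald SPRT: accumulate log-likelihood ratios log[p(x|H1)/p(x|H0)]
    and stop once the sum leaves the band set by the two risk levels."""
    upper = math.log((1 - p_missed) / p_false_alarm)   # accept H1 (e.g. "maneuver needed")
    lower = math.log(p_missed / (1 - p_false_alarm))   # accept H0 (e.g. "no maneuver")
    cumulative = 0.0
    for k, llr in enumerate(log_likelihood_ratios, start=1):
        cumulative += llr
        if cumulative >= upper:
            return "accept H1", k
        if cumulative <= lower:
            return "accept H0", k
    return "continue sampling", len(log_likelihood_ratios)
```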

  9. Local noise reduction for emphysema scoring in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Schilham, Arnold; Prokop, Mathias; Gietema, Hester; van Ginneken, Bram

    2005-04-01

    Computed Tomography (CT) has become the new reference standard for quantification of emphysema. The most popular measure for emphysema derived from CT is the Pixel Index (PI), which expresses the fraction of the lung volume with abnormally low intensity values. As PI is calculated from a single, fixed threshold on intensity, this measure is strongly influenced by noise. This effect shows up clearly when comparing the PI score for a high-dose scan to the PI score for a low-dose (i.e. noisy) scan of the same subject. This paper presents a class of noise filters that make use of a local noise estimate to specify the filtering strength: Local Noise Variance Weighted Averaging (LNVWA). The performance of the filter is assessed by comparing high-dose and low-dose PI scores for 11 subjects. LNVWA improves the reproducibility of high-dose PI scores: For an emphysema threshold of -910 HU, the root-mean-square difference in PI score drops from 10% of the lung volume to 3.3% of the lung volume if LNVWA is used.
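
    A minimal Python sketch of the two quantities discussed above follows: the Pixel Index as the fraction of lung voxels below an intensity threshold, and a local-noise-variance weighted averaging step in the spirit of LNVWA. The weighting scheme shown is a generic adaptive (Wiener-like) blend and is an assumption; the paper's exact filter may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_index(lung_hu, threshold_hu=-910.0):
    """Fraction of lung voxels below the emphysema threshold (Pixel Index)."""
    return np.mean(np.asarray(lung_hu, dtype=float) < threshold_hu)

def lnvwa(image, noise_variance_map, window=5):
    """Blend each voxel with its local mean, smoothing more strongly where the
    supplied local noise-variance estimate dominates the local signal variance."""
    image = np.asarray(image, dtype=float)
    local_mean = uniform_filter(image, size=window)
    local_var = uniform_filter(image**2, size=window) - local_mean**2
    # Weight in [0, 1]: w = 1 (full smoothing) where variance is all noise.
    w = noise_variance_map / np.maximum(local_var, noise_variance_map)
    return w * local_mean + (1.0 - w) * image
```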

  10. Anti-dazzling protection for Air Force pilots

    NASA Astrophysics Data System (ADS)

    Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe

    2012-06-01

    Under certain conditions, lasers directed at aircraft can be a hazard. The most likely scenario is when a bright visible laser causes distraction or temporary flash blindness to a pilot during a critical phase of flight such as landing or takeoff. It is also possible that a visible or invisible beam could cause permanent harm to a pilot's eyes. We present non-linear, solid-state dynamic filter solutions that protect against dazzling and damage in a passive way. Our filters limit the transmission only if the power exceeds a certain threshold, as opposed to spectral filters that block a certain wavelength permanently.

  11. A Lyα blob and zabs ≈ zem damped Lyα absorber in the dark matter halo of the binary quasar Q 0151+048

    NASA Astrophysics Data System (ADS)

    Zafar, T.; Møller, P.; Ledoux, C.; Fynbo, J. P. U.; Nilsson, K. K.; Christensen, L.; D'Odorico, S.; Milvang-Jensen, B.; Michałowski, M. J.; Ferreira, D. D. M.

    2011-08-01

    Context. Q 0151+048 is a physical quasar (QSO) pair at z ~ 1.929 with a separation of 3.3 arcsec on the sky. In the spectrum of the brighter member of this pair, Q 0151+048A, a damped Lyα absorber (DLA) is observed at a higher redshift. We have previously detected the host galaxies of both QSOs, as well as a Lyα blob whose emission surrounding Q 0151+048A extends over 5 × 3.3 arcsec. Aims: We seek to constrain the geometry of the system and understand the possible relations between the DLA, the Lyα blob, and the two QSOs. We also aim at characterizing the former two objects in more detail. Methods: To study the nature of the Lyα blob, we performed low-resolution, long-slit spectroscopy with the slit aligned with the extended emission. We also observed the whole system using the medium-resolution VLT/X-shooter spectrograph with the slit aligned with the two QSOs. The systemic redshift of both QSOs was determined from rest-frame optical emission lines redshifted into the NIR. We employed a line-profile fitting technique to measure metallicities and the velocity widths of low-ionization metal absorption lines associated with the DLA, and photo-ionization modeling to characterize the DLA further. Results: We measure systemic redshifts of zem(A) = 1.92924 ± 0.00036 and zem(B) = 1.92863 ± 0.00042 from the H β and H α emission lines, respectively. In other words, the two QSOs have identical redshifts within 2σ. From the width of the Balmer emission lines and the strength of the rest-frame optical continuum, we estimate the masses of the black holes of the two QSOs to be 10^9.33 M⊙ and 10^8.38 M⊙ for Q 0151+048A and Q 0151+048B, respectively. We then use the correlation between black hole mass and dark matter halo mass to infer the mass of the dark matter halos hosting the two QSOs: 10^13.74 M⊙ and 10^13.13 M⊙ for Q 0151+048A and Q 0151+048B, respectively. We observe a velocity gradient along the major axis of the Lyα blob consistent with the rotation curve of a large disk galaxy, but it may also be caused by gas inflow or outflow. We detect residual continuum in the DLA trough, which we interpret as emission from the host galaxy of Q 0151+048A. The derived H0 column density of the DLA is log N(H0) = 20.34 ± 0.02 (cm-2). Metal column densities are also determined for a number of low-ionization species, resulting in an overall metallicity of 0.01 Z⊙. We detect C ii*, which allows us to make a physical model of the DLA cloud. Conclusions: From the systemic redshifts of the QSOs, we conclude that the Lyα blob is associated with Q 0151+048A rather than with the DLA. The DLA must be located in front of both the Lyα blob and Q 0151+048A at a distance greater than 30 kpc and has a velocity relative to the blob of 640 ± 70 km s-1. The two quasars accrete at normal Eddington ratios. The DM halo of this double quasar will grow to the mass of our local supercluster at z = 0. We point out that these objects therefore form an ideal laboratory to study the physical interactions in a z = 2 precursor of our local supercluster. Based on observations done with i) the European Southern Observatory (ESO) 8.2 m Very Large Telescope (VLT) X-shooter spectrograph on Cerro Paranal in the Atacama Desert, northern Chile, and ii) the 2.56 m Nordic Optical Telescope (NOT), a scientific association between Denmark, Finland, Iceland, Norway and Sweden, operated at Observatorio del Roque de Los Muchachos on the island of La Palma, Spain.

  12. A comparative analysis of signal processing methods for motion-based rate responsive pacing.

    PubMed

    Greenhut, S E; Shreve, E A; Lau, C P

    1996-08-01

    Pacemakers that augment heart rate (HR) by sensing body motion have been the most frequently prescribed rate responsive pacemakers. Many comparisons between motion-based rate responsive pacemaker models have been published. However, conclusions regarding specific signal processing methods used for rate response (e.g., filters and algorithms) can be affected by device-specific features. To objectively compare commonly used motion sensing filters and algorithms, acceleration and ECG signals were recorded from 16 normal subjects performing exercise and daily living activities. Acceleration signals were filtered (1-4 or 15-Hz band-pass), then processed using threshold crossing (TC) or integration (IN) algorithms creating four filter/algorithm combinations. Data were converted to an acceleration indicated rate and compared to intrinsic HR using root mean square difference (RMSd) and signed RMSd. Overall, the filters and algorithms performed similarly for most activities. The only differences between filters were for walking at an increasing grade (1-4 Hz superior to 15-Hz) and for rocking in a chair (15-Hz superior to 1-4 Hz). The only differences between algorithms were for bicycling (TC superior to IN), walking at an increasing grade (IN superior to TC), and holding a drill (IN superior to TC). Performance of the four filter/algorithm combinations was also similar over most activities. The 1-4/IN (filter [Hz]/algorithm) combination performed best for walking at a grade, while the 15/TC combination was best for bicycling. However, the 15/TC combination tended to be most sensitive to higher frequency artifact, such as automobile driving, downstairs walking, and hand drilling. Chair rocking artifact was highest for 1-4/IN. The RMSd for bicycling and upstairs walking were large for all combinations, reflecting the nonphysiological nature of the sensor. The 1-4/TC combination demonstrated the least intersubject variability, was the only filter/algorithm combination insensitive to changes in footwear, and gave similar RMSd over a large range of amplitude thresholds for most activities. In conclusion, based on overall error performance, the preferred filter/algorithm combination depended upon the type of activity.
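
    The threshold-crossing (TC) and integration (IN) algorithms compared above can be summarized by the generic per-window metrics sketched below in Python; window length, rectification, and units are illustrative assumptions, not the devices' actual processing.

```python
import numpy as np

def threshold_crossing_rate(accel, threshold, fs, window_s=2.0):
    """Generic 'TC'-style metric: count upward threshold crossings of the
    rectified (filtered) acceleration signal per window."""
    n = int(window_s * fs)
    counts = []
    for start in range(0, len(accel) - n + 1, n):
        above = np.abs(accel[start:start + n]) > threshold
        counts.append(np.count_nonzero(above[1:] & ~above[:-1]))
    return np.array(counts)

def integration_metric(accel, fs, window_s=2.0):
    """Generic 'IN'-style metric: integrate the rectified acceleration per window."""
    n = int(window_s * fs)
    return np.array([np.abs(accel[s:s + n]).sum() / fs
                     for s in range(0, len(accel) - n + 1, n)])
```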

  13. Learning to Identify Near-Threshold Luminance-Defined and Contrast-Defined Letters in Observers with Amblyopia

    PubMed Central

    Chung, Susana T.L.; Li, Roger W.; Levi, Dennis M.

    2008-01-01

    We assessed whether or not the sensitivity for identifying luminance-defined and contrast-defined letters improved with training in a group of amblyopic observers who have passed the critical period of development. In Experiment 1, we tracked the contrast threshold for identifying luminance-defined letters with training in a group of 11 amblyopic observers. Following training, six observers showed a reduction in thresholds, averaging 20%, for identifying luminance-defined letters. This improvement transferred extremely well to the untrained task of identifying contrast-defined letters (average improvement = 38%) but did not transfer to an acuity measurement. Seven of the 11 observers were subsequently trained on identifying contrast-defined letters in Experiment 2. Following training, five of these seven observers demonstrated a further improvement, averaging 17%, for identifying contrast-defined letters. This improvement did not transfer to the untrained task of identifying luminance-defined letters. Our findings are consistent with predictions based on the locus of learning for first- and second-order stimuli according to the filter-rectifier-filter model for second-order visual processing. PMID:18824189

  14. Topological Characteristics of the Hong Kong Stock Market: A Test-based P-threshold Approach to Understanding Network Complexity

    NASA Astrophysics Data System (ADS)

    Xu, Ronghua; Wong, Wing-Keung; Chen, Guanrong; Huang, Shuo

    2017-02-01

    In this paper, we analyze the relationship among stock networks by focusing on the statistically reliable connectivity between financial time series, which accurately reflects the underlying pure stock structure. To do so, we first filter out the effect of the market index on the correlations between paired stocks, and then take a t-test based P-threshold approach to lessening the complexity of the stock network based on the P values. We demonstrate the superiority of its performance in understanding network complexity by examining the Hong Kong stock market. By comparing with other filtering methods, we find that the P-threshold approach extracts purely and significantly correlated stock pairs, which reflect the well-defined hierarchical structure of the market. In analyzing the dynamic stock networks with fixed-size moving windows, our results show that the three global financial crises covered by the long-range time series can be clearly distinguished from the network topological and evolutionary perspectives. In addition, we find that the assortativity coefficient can manifest the financial crises and therefore can serve as a good indicator of financial market development.
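
    A hedged Python sketch of the test-based P-threshold idea follows: regress the market index out of each return series, then keep only those stock pairs whose residual correlation passes a t-test at the chosen p-value threshold. Multiple-testing corrections and the exact regression model used by the authors are simplified assumptions here.

```python
import numpy as np
from scipy import stats

def p_threshold_network(returns, market, p_threshold=0.01):
    """Build a boolean adjacency matrix of significantly correlated stock pairs
    after removing the market-index effect by ordinary least squares."""
    X = np.asarray(returns, dtype=float)            # shape (n_stocks, n_obs)
    m = np.asarray(market, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)           # center series so OLS needs no intercept
    m = m - m.mean()
    beta = X @ m / (m @ m)                          # per-stock market loading
    residuals = X - np.outer(beta, m)
    n_stocks = X.shape[0]
    adjacency = np.zeros((n_stocks, n_stocks), dtype=bool)
    for i in range(n_stocks):
        for j in range(i + 1, n_stocks):
            _, p = stats.pearsonr(residuals[i], residuals[j])
            adjacency[i, j] = adjacency[j, i] = p < p_threshold
    return adjacency
```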

  15. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

    A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a super-position of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
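
    The pulse superposition described above can be reproduced numerically; the Python sketch below generates a signal from uncorrelated one-sided exponential pulses with exponentially distributed amplitudes and estimates its skewness. All parameter values are illustrative assumptions.

```python
import numpy as np

def shot_noise_signal(duration, dt, pulse_rate, tau, mean_amplitude, seed=None):
    """Superposition of uncorrelated exponential pulses with exponentially
    distributed amplitudes (a simple shot-noise process)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    signal = np.zeros_like(t)
    n_pulses = rng.poisson(pulse_rate * duration)
    arrival_times = rng.uniform(0.0, duration, size=n_pulses)
    amplitudes = rng.exponential(mean_amplitude, size=n_pulses)
    for t0, a in zip(arrival_times, amplitudes):
        mask = t >= t0
        signal[mask] += a * np.exp(-(t[mask] - t0) / tau)
    return t, signal

# Intermittency (skewness) grows as the pulse overlap (pulse_rate * tau) decreases.
t, x = shot_noise_signal(duration=1000.0, dt=0.1, pulse_rate=0.2, tau=2.0, mean_amplitude=1.0)
skewness = np.mean((x - x.mean())**3) / x.std()**3
```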

  16. Relationship between frequency power spectra and intermittent, large-amplitude bursts in the Alcator C-Mod scrape-off layer

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.; Kube, R.; LaBombard, B.; Terry, J. L.

    2017-11-01

    Fluctuations in the boundary region of the Alcator C-Mod tokamak have been analyzed using gas puff imaging data from a set of Ohmically heated plasma density scan experiments. It is found that the relative fluctuation amplitudes are modest and close to normally distributed at the separatrix but become increasingly larger and skewed towards the main chamber wall. The frequency power spectra are nevertheless similar for all radial positions and line-averaged densities. Predictions of a stochastic model, describing the plasma fluctuations as a super-position of uncorrelated pulses, are shown to be in excellent agreement with the measurements. This implies that the pulse duration is the same, while the degree of pulse overlap decreases radially outwards in the scrape-off layer. The universal frequency power spectral density is thus determined by the shape and duration of the large-amplitude bursts associated with blob-like structures. The model also describes the rate of threshold level crossings, for which the exponential tails underline the intermittency of the fluctuations in the far scrape-off layer.

  17. Astrophysical laser operating in the OI 8446-Å line in the Weigelt blobs of η Carinae

    NASA Astrophysics Data System (ADS)

    Johansson, S.; Letokhov, V. S.

    2005-12-01

    Within the framework of a simple model of photophysical processes in the Weigelt blobs in the vicinity of the luminous blue variable (LBV) star η Carinae, we explain the presence of the fluorescent 8446-Å and forbidden [OI] 6300-Å lines as well as the absence of the allowed OI 7774-Å line in spectra recorded with the Hubble Space Telescope (HST)/STIS instrument (Gull et al.). From atomic data and estimated stellar parameters we demonstrate that there is a population inversion and stimulated emission in the 3p3P-3s3S transition λ8446 due to photoexcitation by accidental resonance (PAR) by H Lyβ radiation.

  18. Inferior Vena Cava Filtration in the Management of Venous Thromboembolism: Filtering the Data

    PubMed Central

    Molvar, Christopher

    2012-01-01

    Venous thromboembolism (VTE) is a common cause of morbidity and mortality. This is especially true for hospitalized patients. Pulmonary embolism (PE) is the leading preventable cause of in-hospital mortality. The preferred method of both treatment and prophylaxis for VTE is anticoagulation. However, in a subset of patients, anticoagulation therapy is contraindicated or ineffective, and these patients often receive an inferior vena cava (IVC) filter. The sole purpose of an IVC filter is prevention of clinically significant PE. IVC filter usage has increased every year, most recently due to the availability of retrievable devices and a relaxation of thresholds for placement. Much of this recent growth has occurred in the trauma patient population given the high potential for VTE and frequent contraindication to anticoagulation. Retrievable filters, which strive to offer the benefits of permanent filters without time-sensitive complications, come with a new set of challenges including methods for filter follow-up and retrieval. PMID:23997414

  19. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
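
    As an illustration of the least-trimmed-squares criterion mentioned above, the Python sketch below implements a simple LTS location (constant-signal) filter in a sliding window; the paper's regression filters are more general, so this is only a minimal example of the trimming idea.

```python
import numpy as np

def lts_location_filter(signal, window=9, trim_fraction=0.5):
    """At each position, output the mean of the h-subset of the window that
    minimizes the sum of squared deviations from its own mean (LTS location)."""
    signal = np.asarray(signal, dtype=float)
    half = window // 2
    h = max(2, int(np.ceil(trim_fraction * window)))
    padded = np.pad(signal, half, mode="edge")
    out = np.empty_like(signal)
    for i in range(len(signal)):
        w = np.sort(padded[i:i + window])
        # For the location model the best h-subset of a sorted sample is
        # contiguous, so scan the (window - h + 1) contiguous candidates.
        best_score, best_mean = np.inf, w[0]
        for s in range(window - h + 1):
            sub = w[s:s + h]
            m = sub.mean()
            score = np.sum((sub - m)**2)
            if score < best_score:
                best_score, best_mean = score, m
        out[i] = best_mean
    return out
```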

  20. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.

    PubMed

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei

    2018-04-08

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. The F-P filter was employed to improve system resolution. Because this filter is non-linear, the central wavelengths shift; this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, allowing the system to interrogate a combination of up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through comparison of different peak detection algorithms and interrogation approaches, the system was verified to have optimum comprehensive performance in terms of precision, capacity and speed.
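
    The self-adapting threshold idea can be illustrated with a few lines of Python: derive the threshold from the current scan itself (here a robust median-plus-MAD rule, which is an assumption) and report local maxima above it. The FPGA implementation described above differs in detail.

```python
import numpy as np

def adaptive_threshold_peaks(spectrum, k=3.0):
    """Return indices of local maxima above a scan-derived threshold."""
    spectrum = np.asarray(spectrum, dtype=float)
    baseline = np.median(spectrum)
    noise = 1.4826 * np.median(np.abs(spectrum - baseline))   # robust sigma (MAD)
    threshold = baseline + k * noise
    above = spectrum > threshold
    # Local maxima: greater than the left neighbour, not smaller than the right.
    interior = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] >= spectrum[2:])
    peaks = np.where(above[1:-1] & interior)[0] + 1
    return peaks, threshold
```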

  1. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm

    PubMed Central

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei

    2018-01-01

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. The F–P filter was employed to improve system resolution. Because this filter is non-linear, the central wavelengths shift; this deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, allowing the system to interrogate a combination of up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through comparison of different peak detection algorithms and interrogation approaches, the system was verified to have optimum comprehensive performance in terms of precision, capacity and speed. PMID:29642507

  2. Hubble Sees Turquoise-Tinted Plumes in Large Magellanic Cloud

    NASA Image and Video Library

    2017-12-08

    The brightly glowing plumes seen in this image are reminiscent of an underwater scene, with turquoise-tinted currents and nebulous strands reaching out into the surroundings. However, this is no ocean. This image actually shows part of the Large Magellanic Cloud (LMC), a small nearby galaxy that orbits our galaxy, the Milky Way, and appears as a blurred blob in our skies. The NASA/European Space Agency (ESA) Hubble Space Telescope has peeked many times into this galaxy, releasing stunning images of the whirling clouds of gas and sparkling stars (opo9944a, heic1301, potw1408a). This image shows part of the Tarantula Nebula's outskirts. This famously beautiful nebula, located within the LMC, is a frequent target for Hubble (heic1206, heic1402). In most images of the LMC the color is completely different to that seen here. This is because, in this new image, a different set of filters was used. The customary R filter, which selects the red light, was replaced by a filter letting through the near-infrared light. In traditional images, the hydrogen gas appears pink because it shines most brightly in the red. Here however, other less prominent emission lines dominate in the blue and green filters. This data is part of the Archival Pure Parallel Project (APPP), a project that gathered together and processed over 1,000 images taken using Hubble’s Wide Field Planetary Camera 2, obtained in parallel with other Hubble instruments. Much of the data in the project could be used to study a wide range of astronomical topics, including gravitational lensing and cosmic shear, exploring distant star-forming galaxies, supplementing observations in other wavelength ranges with optical data, and examining star populations from stellar heavyweights all the way down to solar-mass stars. Image Credit: ESA/Hubble & NASA: acknowledgement: Josh Barrington

  3. Impact of Canopy Decoupling and Subcanopy Advection on the Annual Carbon Balance of a Boreal Scots Pine Forest as Derived From Eddy Covariance

    NASA Astrophysics Data System (ADS)

    Jocher, Georg; Marshall, John; Nilsson, Mats B.; Linder, Sune; De Simon, Giuseppe; Hörnlund, Thomas; Lundmark, Tomas; Näsholm, Torgny; Ottosson Löfvenius, Mikaell; Tarvainen, Lasse; Wallin, Göran; Peichl, Matthias

    2018-02-01

    Apparent net uptake of carbon dioxide (CO2) during wintertime by an approximately 90-year-old Scots pine stand in northern Sweden led us to conduct canopy decoupling and subcanopy advection investigations over an entire year. Eddy covariance (EC) measurements ran simultaneously above and within the forest canopy for that purpose. We used the correlation of above- and below-canopy standard deviation of vertical wind speed (σw) as a decoupling indicator. We identified 0.33 m s-1 and 0.06 m s-1 as site-specific σw thresholds for above- and below-canopy coupling during nighttime (global radiation <20 W m-2) and 0.23 m s-1 and 0.06 m s-1 as daytime (global radiation >20 W m-2) σw thresholds. Decoupling occurred in 53% of the annual nighttime and 14% of the annual daytime. The annual net ecosystem exchange (NEE), gross ecosystem exchange (GEE), and ecosystem respiration (Reco) derived via two-level filtered EC data were -357 g C m-2, -1,138 g C m-2, and 781 g C m-2, respectively. In comparison, both single-level friction velocity (u*) and quality filtering resulted in 22% higher NEE, mainly caused by 16% lower Reco. GEE remained similar among filtering regimes. Accounting for changes of CO2 storage across the canopy in the single-level filtered data could only marginally decrease these discrepancies. Consequently, advection appears to be responsible for the major part of this divergence. We conclude that the two-level filter is necessary to adequately address decoupling and subcanopy advection at our site, and we recommend this filter for all forested EC sites.
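
    The two-level coupling filter described above reduces, in its simplest software form, to a boolean mask requiring both σw thresholds to be exceeded, with the night/day threshold pairs taken from the abstract. The sketch below is only an illustration of that logic, not the authors' processing code.

```python
import numpy as np

def two_level_coupling_filter(sigma_w_above, sigma_w_below, global_radiation,
                              night=(0.33, 0.06), day=(0.23, 0.06), rad_limit=20.0):
    """Return True where the canopy is considered coupled: both the above- and
    below-canopy sigma_w exceed their night- or day-specific thresholds."""
    sigma_w_above = np.asarray(sigma_w_above, dtype=float)
    sigma_w_below = np.asarray(sigma_w_below, dtype=float)
    is_day = np.asarray(global_radiation, dtype=float) > rad_limit   # W m-2
    thr_above = np.where(is_day, day[0], night[0])
    thr_below = np.where(is_day, day[1], night[1])
    return (sigma_w_above > thr_above) & (sigma_w_below > thr_below)
```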

  4. Follow-up FOCAS Spectroscopy for [O iii] Blobs at z 0.7

    NASA Astrophysics Data System (ADS)

    Yuma, Suraphong

    2014-01-01

    We propose FOCAS spectroscopy for our eight newly selected [O iii] blobs at z~0.7, showing remarkably extended [O iii] emission larger than 30 kpc down to 1.2x10^{-18} erg s^{-1} cm^{-2} arcsec^{-2} in continuum-subtracted narrowband images. These extended oxygen nebulae beyond the stellar component are thought to be hot metal-rich gas outflowing from galaxies. However, without spectroscopy to verify the gas motion of the system, we cannot conclude with certainty that the extended feature of the [O iii] emission is caused by gas outflow. With FOCAS, we expect to observe Fe ii and Mg ii absorption lines and [O ii], Hβ, and [O iii] emission lines, which all fall into the optical window at this redshift. We will 1) confirm the outflow of these blobs through Fe ii and/or Mg ii absorption lines, 2) constrain the energy source of the outflow (AGN or stellar feedback) through a line-ratio diagnostic diagram, and 3) for the first time investigate whether the extended oxygen emission is due only to photo-ionized outflowing gas or also involves shock heating, through [O ii]/[O iii] ratios in the extended regions. The last goal can only be accomplished with FOCAS optical spectroscopy, which can observe both the [O ii] and [O iii] emission lines simultaneously.

  5. Eta Carinae: Linelist for the Emission Spectrum of the Weigelt Blobs in the 1700-10400Angstrom Wavelength Region

    NASA Technical Reports Server (NTRS)

    Zethson, T.; Johansson, S.; Hartman, H.; Gull, T. R.

    2011-01-01

    Aims. We present line identifications in the 1700 to 10400A region for the Weigelt Blobs B and D, located 0.1 to 0.3" NNW of Eta Carinae. The aim of this work is to characterize the behavior of these luminous, dense gas condensations in response to the broad maximum and short minimum states of Eta Carinae during its 5.54-year spectroscopic period. Methods. The observations were carried out in March 1998 (the minimum spectrum) and in February 1999 (the early maximum spectrum) with the Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) from 1640 to 10400A using the 52"x0.1" aperture centered on Eta Carinae at position angle -28 degrees. Extractions of the reduced spectrum centered on Weigelt B and D, 0.28" in length along the slit, were used to identify the narrow, nebular emission lines, measure their wavelengths and estimate their fluxes. Results. A linelist of 1500 lines is presented for the maximum and minimum states of the combined Weigelt blobs B and D. The spectra are dominated by emission lines from the iron-group elements, but include lines from lighter elements. They include both parity-permitted and forbidden lines. A number of lines are fluorescent lines pumped by H Ly alpha. Other lines show anomalous excitation.

  6. Textural evidence for high-grade ignimbrites formed by low-explosivity eruptions, Paraná Magmatic Province, southern Brazil

    NASA Astrophysics Data System (ADS)

    Luchetti, Ana Carolina F.; Gravley, Darren M.; Gualda, Guilherme A. R.; Nardy, Antonio J. R.

    2018-04-01

    The Paraná-Etendeka Province is a huge Lower Cretaceous bimodal tholeiitic volcanic province (1 million km3) that predated the Gondwana breakup. Its silicic portion makes up a total volume of at least 20,000 km3, and in southern Brazil it comprises the Chapecó porphyritic high-Ti trachydacites-dacites and the Palmas microporphyritic-aphyric low-Ti dacites-rhyolites. The widespread silicic sheets are debated in the literature because they bear similarities to both lavas and high-grade ignimbrites. Here we provide new observations and interpretations for flow units with large, dark, and vesicle-poor lens-shaped blobs surrounded by a light-colored matrix. The textural features (macro- to micro-scale) of these blobs are different from typical pumice and/or fiamme and support a low-explosivity pyroclastic origin, possibly low-column fountain eruptions with discharge rates high enough to produce laterally extensive high-grade ignimbrites. Such an interpretation, combined with a conspicuous absence of lithic fragments in the deposits, is consistent with the lack of identified calderas in the Paraná-Etendeka Province. Maximum timescales of crystallization associated with the juvenile blobs, estimated from CSD slopes, are on the order of millennia for phenocryst populations and on the order of decades for microphenocryst populations.

  7. Pedestal and edge electrostatic turbulence characteristics from an XGC1 gyrokinetic simulation

    NASA Astrophysics Data System (ADS)

    Churchill, R. M.; Chang, C. S.; Ku, S.; Dominski, J.

    2017-10-01

    Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer (SOL)) is required in order to reliably predict performance in future fusion devices. We explore turbulent characteristics in the edge region from a multi-scale neoclassical and turbulent XGC1 gyrokinetic simulation in a DIII-D like tokamak geometry, here excluding neutrals and collisions. For an H-mode type plasma with a steep pedestal, it is found that the electron density fluctuations increase towards the separatrix and stay high well into the SOL, reaching a maximum value of δn_e/n̄_e ≈ 0.18. Blobs are observed to be born around the magnetic separatrix surface and to propagate radially outward with velocities generally less than 1 km s-1. Strong poloidal motion of the blobs is also present, near 20 km s-1, consistent with E × B rotation. The electron density fluctuations show a negative skewness in the closed field-line pedestal region, consistent with the presence of ‘holes’, followed by a transition to strong positive skewness across the separatrix and into the SOL. These simulations indicate that not only neoclassical phenomena but also turbulence, including the blob-generation mechanism, can remain important in the steep H-mode pedestal and SOL. Qualitative comparisons will be made to experimental observations.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, R.; Soler, R.; Terradas, J.

    Coronal rain clumps and prominence knots are dense condensations with chromospheric to transition region temperatures that fall down in the much hotter corona. Their typical speeds are in the range 30–150 km s^-1 and of the order of 10–30 km s^-1, respectively, i.e., they are considerably smaller than free-fall velocities. These cold blobs contain a mixture of ionized and neutral material that must be dynamically coupled in order to fall together, as observed. We investigate this coupling by means of hydrodynamic simulations in which the coupling arises from the friction between ions and neutrals. The numerical simulations presented here are an extension of those of Oliver et al. to the partially ionized case. We find that, although the relative drift speed between the two species is smaller than 1 m s^-1 at the blob center, it is sufficient to produce the forces required to strongly couple charged particles and neutrals. The ionization degree has no discernible effect on the main results of our previous work for a fully ionized plasma: the condensation has an initial acceleration phase followed by a period with roughly constant velocity, and, in addition, the maximum descending speed is clearly correlated with the ratio of initial blob to environment density.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchill, R. M.; Chang, C. S.; Ku, S.

    Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer (SOL)) is required in order to reliably predict performance in future fusion devices. We explore turbulent characteristics in the edge region from a multi-scale neoclassical and turbulent XGC1 gyrokinetic simulation in a DIII-D like tokamak geometry, here excluding neutrals and collisions. For an H-mode type plasma with a steep pedestal, it is found that the electron density fluctuations increase towards the separatrix and stay high well into the SOL, reaching a maximum value of δn_e/n̄_e ~ 0.18. Blobs are observed to be born around the magnetic separatrix surface and to propagate radially outward with velocities generally less than 1 km s-1. Strong poloidal motion of the blobs is also present, near 20 km s-1, consistent with E × B rotation. The electron density fluctuations show a negative skewness in the closed field-line pedestal region, consistent with the presence of 'holes', followed by a transition to strong positive skewness across the separatrix and into the SOL. These simulations indicate that not only neoclassical phenomena but also turbulence, including the blob-generation mechanism, can remain important in the steep H-mode pedestal and SOL. Lastly, qualitative comparisons will be made to experimental observations.

  10. Self-adjusting threshold mechanism for pixel detectors

    NASA Astrophysics Data System (ADS)

    Heim, Timon; Garcia-Sciveres, Maurice

    2017-09-01

    Readout chips of hybrid pixel detectors use a low power amplifier and threshold discrimination to process charge deposited in semiconductor sensors. Due to transistor mismatch, each pixel circuit needs to be calibrated individually to achieve response uniformity. Traditionally this is addressed by programmable threshold trimming in each pixel, but it requires robustness against radiation effects, temperature, and time. In this paper a self-adjusting threshold mechanism is presented, which corrects the threshold for both spatial inequality and time variation and maintains a constant response. It exploits the electrical noise as a relative measure for the threshold and automatically adjusts the threshold of each pixel to always achieve a uniform frequency of noise hits. A digital implementation of the method in the form of an up/down counter and combinatorial logic filter is presented. The behavior of this circuit has been simulated to evaluate its performance and compare it to traditional calibration results. The simulation results show that this mechanism can perform equally well, but eliminates instability over time and is immune to single event upsets.
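
    A software caricature of the up/down-counter mechanism is sketched below: per counting window, each pixel compares its noise-hit count with the count expected from a target noise-hit rate and nudges its threshold trim by one step in the appropriate direction. Register widths, the combinatorial logic filter, and the update details of the real circuit are not modeled; names and values are illustrative assumptions.

```python
import numpy as np

def self_adjusting_trim(noise_hits, trim, target_rate, window=1024, step=1):
    """Nudge each pixel's threshold trim towards a uniform noise-hit frequency.

    noise_hits  -- array of noise-hit counts per pixel in the last window
    trim        -- array of current per-pixel threshold trim values (DAC counts)
    target_rate -- desired noise-hit probability per measurement
    """
    expected = target_rate * window
    counts = np.asarray(noise_hits, dtype=float)
    trim = np.asarray(trim, dtype=int).copy()
    trim[counts > expected] += step     # too many noise hits -> raise threshold
    trim[counts < expected] -= step     # too few noise hits  -> lower threshold
    return trim
```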

  11. Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong

    2018-04-01

    Smeared Spectrum (SMSP) jamming is an effective jamming technique against linear frequency modulation (LFM) radar. Based on the time-frequency distribution difference between the jamming and the echo, a jamming suppression method based on the Generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the auto-correlation function. Then, the time-frequency image and the related gray-scale image are obtained based on the GST. Finally, the Tsallis cross entropy is used to compute the optimized segmentation threshold, and the jamming suppression filter is constructed based on this threshold. The simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP.

  12. Analysis of a Failed Eclipse Plasma Ejection Using EUV Observations

    NASA Astrophysics Data System (ADS)

    Tavabi, E.; Koutchmy, S.; Bazin, C.

    2018-03-01

    The photometry of eclipse white-light (W-L) images showing a moving blob is interpreted for the first time together with observations from space with the PRoject for On Board Autonomy (PROBA-2) mission (ESA). An off-limb event seen in great detail in W-L was analyzed with the SWAP imager (Sun Watcher using Active pixel system detector and image Processing) working in the EUV near 174 Å. It is an elongated plasma blob structure of 25 Mm diameter moving above the east limb with coronal loops underneath. Summed and co-aligned SWAP images are evaluated using a 20-h sequence, in addition to the 11 July, 2010 eclipse W-L images taken from several sites. The Atmospheric Imaging Assembly (AIA) instrument on board the Solar Dynamics Observatory (SDO) recorded the event, suggesting a magnetic reconnection near a high neutral point; accordingly, we also call it a magnetic plasmoid. The measured proper motion of the blob shows a velocity up to 12 km s^{-1}. Electron densities of the isolated condensation (cloud or blob or plasmoid) are photometrically evaluated. The typical value is 10^8 cm^{-3} at r=1.7 R_{⊙}, superposed on a background corona of 10^7 cm^{-3} density. The mass of the cloud near its maximum brightness is found to be 1.6×10^{13} g, which is typically 0.6×10^{-4} of the overall mass of the corona. From the extrapolated magnetic field, the cloud evolves inside a rather broad open region but decelerates after reaching its maximum brightness. The influence of such small events in supplying material to the ubiquitous slow wind is noted. A precise evaluation of the EUV photometric data, after accurately removing the stray light, suggests an interpretation of the weak 174 Å radiation of the cloud as due to resonance scattering in the Fe IX/X lines.

  13. STScI-PRC98-38 GREAT BALLS OF FIRE! HUBBLE SEES BRIGHT KNOTS EJECTED FROM BRILLIANT STAR

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Resembling an aerial fireworks explosion, this dramatic NASA Hubble Space Telescope picture of the energetic star WR124 reveals it is surrounded by hot clumps of gas being ejected into space at speeds of over 100,000 miles per hour. Also remarkable are vast arcs of glowing gas around the star, which are resolved into filamentary, chaotic substructures, yet with no overall global shell structure. Though the existence of clumps in the winds of hot stars has been deduced through spectroscopic observations of their inner winds, Hubble resolves them directly in the nebula M1-67 around WR124 as 100 billion-mile wide glowing gas blobs. Each blob is about 30 times the mass of the Earth. The massive, hot central star is known as a Wolf-Rayet star. This extremely rare and short-lived class of super-hot star (in this case 50,000 degrees Kelvin) is going through a violent, transitional phase characterized by the fierce ejection of mass. The blobs may result from the furious stellar wind that does not flow smoothly into space but has instabilities which make it clumpy. The surrounding nebula is estimated to be no older than 10,000 years, which means that it is so young it has not yet slammed into the gasses comprising the surrounding interstellar medium. As the blobs cool they will eventually dissipate into space and so don't pose any threat to neighboring stars. The star is 15,000 light-years away, located in the constellation Sagittarius. The picture was taken with Hubble's Wide Field Planetary Camera 2 in March 1997. The image is false-colored to reveal details in the nebula's structure. Credit: Yves Grosdidier (University of Montreal and Observatoire de Strasbourg), Anthony Moffat (Universitie de Montreal), Gilles Joncas (Universite Laval), Agnes Acker (Observatoire de Strasbourg), and NASA

  14. ISM stripping from cluster galaxies and inhomogeneities in cooling flows

    NASA Technical Reports Server (NTRS)

    Soker, Noam; Bregman, Joel N.; Sarazin, Craig L.

    1990-01-01

    Analyses of the X-ray surface brightness profiles of cluster cooling flows suggest that the mass flow rate decreases towards the center of the cluster. It is often suggested that this decrease results from thermal instabilities, in which denser blobs of gas cool rapidly and drop below X-ray emitting temperatures. If the seeds for the thermal instabilities are entropy perturbations, these perturbations must enter the flow already in the nonlinear regime; otherwise, the blobs would take too long to cool. Here, researchers suggest that such nonlinear perturbations might start as blobs of interstellar gas which are stripped out of cluster galaxies. Assuming that most of the gas produced by stellar mass loss in cluster galaxies is stripped from the galaxies, the total rate of such stripping of interstellar matter (ISM) is roughly Ṁ_ISM ≈ 100 M⊙ yr^-1. It is interesting that the typical rates of cooling in cluster cooling flows are Ṁ_cool ≈ 100 M⊙ yr^-1. Thus, it is possible that a substantial portion of the cooling gas originates as blobs of interstellar gas stripped from galaxies. The magnetic fields within and outside of the low-entropy perturbations can help to maintain their identities, both by suppressing thermal conduction and through the dynamical effects of magnetic tension. One significant question concerning this scenario is: why are cooling flows seen only in a fraction of clusters, although one would expect gas stripping to be very common? It may be that the density perturbations only survive and cool efficiently in clusters with a very high intracluster gas density and with the focusing effect of a central dominant galaxy. Inhomogeneities in the intracluster medium caused by the stripping of interstellar gas from galaxies can have a number of other effects on clusters. For example, these density fluctuations may disrupt the propagation of radio jets through the intracluster gas, and this may be one mechanism for producing Wide-Angle-Tail radio galaxies.

  15. On selecting satellite conjunction filter parameters

    NASA Astrophysics Data System (ADS)

    Alfano, Salvatore; Finkleman, David

    2014-06-01

    This paper extends concepts of signal detection theory to predict the performance of conjunction screening techniques and to guide the selection of keepout and screening thresholds. The most efficient way to identify satellites likely to collide is to employ filters to identify orbiting pairs that should not come close enough over a prescribed time period to be considered hazardous. Such pairings can then be eliminated from further computation to accelerate overall processing time. Approximations inherent in filtering techniques include screening using only unperturbed Newtonian two-body astrodynamics and uncertainties in orbit elements. Therefore, every filtering process is vulnerable to including objects that are not threats and excluding some that are threats: Type I and Type II errors. The approach in this paper guides selection of the best operating point for the filters, suited to a user's tolerance for false alarms and unwarned threats. We demonstrate the approach using three archetypal filters with an initial three-day span, select filter parameters based on performance, and then test those parameters using eight historical snapshots of the space catalog. This work provides a mechanism for selecting filter parameters, but the choices depend on the circumstances.

  16. LinkImputeR: user-guided genotype calling and imputation for non-model organisms.

    PubMed

    Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean

    2017-07-10

    Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from NGS technologies. It enables the user to quickly and easily examine the effects of varying thresholds and filters on the number and quality of the resulting genotype calls. In this manner, users can decide on thresholds that are most suitable for their purposes. We show that LinkImputeR can significantly augment the value and utility of NGS data sets, especially in non-model organisms with poor genomic resources.

  17. Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
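
    A minimal sketch of the predict-and-compare idea, assuming a scalar random-walk state observed in Gaussian noise; this is not the report's optimal alarm design, which would also account for the growth of prediction variance over the window.

```python
# Sketch: scalar Kalman filter whose d-step-ahead prediction is compared with
# a fixed critical threshold to flag a candidate level-crossing event.
import numpy as np

def kalman_level_alarm(y, q=0.01, r=0.25, threshold=2.0, horizon=5):
    """y: 1-D measurements of a random-walk state x_k = x_{k-1} + w_k."""
    x, p = 0.0, 1.0
    alarms = []
    for k, yk in enumerate(y):
        # Predict step (state transition is identity for a random walk).
        p = p + q
        # Update step.
        K = p / (p + r)
        x = x + K * (yk - x)
        p = (1.0 - K) * p
        # For a random walk the d-step prediction equals the current estimate;
        # its variance grows by d*q, which an optimal design would also use.
        if x > threshold:
            alarms.append(k + horizon)
    return alarms

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0, 0.1, 300))
meas = truth + rng.normal(0, 0.5, 300)
print(kalman_level_alarm(meas)[:5])
```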

  18. The Choice of the Filtering Method in Microarrays Affects the Inference Regarding Dosage Compensation of the Active X-Chromosome

    PubMed Central

    Zeller, Tanja; Wild, Philipp S.; Truong, Vinh; Trégouët, David-Alexandre; Munzel, Thomas; Ziegler, Andreas; Cambien, François; Blankenberg, Stefan; Tiret, Laurence

    2011-01-01

    Background: The hypothesis of dosage compensation of genes of the X chromosome, supported by previous microarray studies, was recently challenged by RNA-sequencing data. It was suggested that microarray studies were biased toward an over-estimation of X-linked expression levels as a consequence of the filtering of genes below the detection threshold of microarrays. Methodology/Principal Findings: To investigate this hypothesis, we used microarray expression data from circulating monocytes in 1,467 individuals. In total, 25,349 and 1,156 probes were unambiguously assigned to autosomes and the X chromosome, respectively. Globally, there was a clear shift of X-linked expressions toward lower levels than autosomes. We compared the ratio of expression levels of X-linked to autosomal transcripts (X∶AA) using two different filtering methods: 1. gene expressions were filtered out using a detection threshold irrespective of gene chromosomal location (the standard method in microarrays); 2. equal proportions of genes were filtered out separately on the X and on autosomes. For a wide range of filtering proportions, the X∶AA ratio estimated with the first method was not significantly different from 1, the value expected if dosage compensation was achieved, whereas it was significantly lower than 1 with the second method, leading to the rejection of the hypothesis of dosage compensation. We further showed in simulated data that the choice of the most appropriate method was dependent on biological assumptions regarding the proportion of actively expressed genes on the X chromosome relative to the autosomes and the extent of dosage compensation. Conclusion/Significance: This study shows that the method used for filtering out lowly expressed genes in microarrays may have a major impact depending on the hypothesis investigated. The hypothesis of dosage compensation of X-linked genes cannot be firmly accepted or rejected using microarray-based data. PMID:21912656
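
    The two filtering strategies can be sketched on simulated expression values; the numbers below are illustrative, not the monocyte data, and a simple ratio of means stands in for the study's X∶AA statistic.

```python
# Sketch of the two filtering strategies compared in the study (simulated data).
import numpy as np

rng = np.random.default_rng(3)
autosomal = rng.normal(loc=8.0, scale=2.0, size=25349)
x_linked = rng.normal(loc=7.5, scale=2.0, size=1156)   # shifted toward lower levels

def x_aa_ratio_method1(x, aa, detection_threshold):
    """Method 1: one detection threshold applied irrespective of chromosome."""
    return np.mean(x[x > detection_threshold]) / np.mean(aa[aa > detection_threshold])

def x_aa_ratio_method2(x, aa, filtered_fraction):
    """Method 2: filter out the same proportion of genes on X and on autosomes."""
    x_kept = x[x > np.quantile(x, filtered_fraction)]
    aa_kept = aa[aa > np.quantile(aa, filtered_fraction)]
    return np.mean(x_kept) / np.mean(aa_kept)

print(x_aa_ratio_method1(x_linked, autosomal, detection_threshold=6.0))
print(x_aa_ratio_method2(x_linked, autosomal, filtered_fraction=0.3))
```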

  19. Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.

    PubMed

    Sil Kar, Sudeshna; Maity, Santi P

    2016-09-01

    Extraction of blood vessels in retinal images plays a significant role in screening for different ophthalmologic diseases. However, accurate extraction of the entire vessel silhouette, and of the individual vessel types, from noisy images with a poorly illuminated background is a complicated task. To this aim, an integrated system design platform is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on the matched filter response. At first, noise is eliminated from the image under consideration through curvelet-based denoising. To include the fine details and the relatively less thick vessel structures, the image is passed through a bank of sequential bandpass filters optimized for contrast enhancement. Fuzzy conditional entropy on the matched filter response is then maximized to find the set of multiple optimal thresholds for extracting the different types of vessel silhouettes from the background. The Differential Evolution algorithm is used to determine the optimal gain in the bandpass filter and the combination of the fuzzy parameters. Using the multiple thresholds, the retinal image is classified into thick, medium and thin vessels, including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels. The proposed method is also efficient in extracting the abnormal and the thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate and accuracy offered by the method are 76.32%, 1.99% and 96.28%, respectively, for the DRIVE database and 72.82%, 2.6% and 96.16%, respectively, for the STARE database. Simulation results demonstrate that the proposed method outperforms existing methods in detecting the various types of vessels and the neovascularization structures. The combination of curvelet transform and tunable bandpass filter is found to be very effective in edge enhancement, whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble

    NASA Astrophysics Data System (ADS)

    Yongye, Austin B.; Bender, Andreas; Martínez-Mayorga, Karina

    2010-08-01

    Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean-square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments.
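
    A minimal sketch of the rotor-dependent threshold rule quoted above (0.8, 1.0 and 1.4 Å for low, medium and high rotor counts); the function name and the handling of out-of-range rotor counts are our own choices.

```python
# Sketch of the rotor-dependent RMSD thresholds proposed in the abstract
# (0.8, 1.0 and 1.4 Å for 1-4, 5-9 and 10-15 rotatable bonds).
def omega_rms_threshold(n_rotatable_bonds: int) -> float:
    if 1 <= n_rotatable_bonds <= 4:
        return 0.8
    if 5 <= n_rotatable_bonds <= 9:
        return 1.0
    if 10 <= n_rotatable_bonds <= 15:
        return 1.4
    raise ValueError("no threshold proposed for this rotor count")

print(omega_rms_threshold(7))  # -> 1.0
```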

  1. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.

  2. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new Local Optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on the negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational cost. The threshold is also a key factor affecting the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.

  3. Data base manipulation for assessment of multiresource suitability and land change

    NASA Technical Reports Server (NTRS)

    Colwell, J.; Sanders, P.; Davis, G.; Thomson, F. (Principal Investigator)

    1981-01-01

    Progress is reported in three tasks which support the overall objectives of the renewable resources inventory task of the AgRISTARS program. In the first task, the geometric correction algorithms of the Master Data Processor were investigated to determine the utility of data corrected by this processor for U.S. Forest Service uses. The second task involved investigation of logic to form blobs as a precursor step to automatic change detection involving two dates of LANDSAT data. Some routine procedures for selecting BLOB (spatial averaging) parameters were developed. In the third task, a major effort was made to develop land suitability modeling approaches for timber, grazing, and wildlife habitat in support of resource planning efforts on the San Juan National Forest.

  4. Filament velocity scaling laws for warm ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manz, P.; Max-Planck-Institut für Plasmaphysik, EURATOM Assoziation, Boltzmannstr. 2, 85748 Garching; Carralero, D.

    2013-10-15

    The dynamics of filaments or blobs in the scrape-off layer of magnetic fusion devices are studied by magnitude estimates of a comprehensive drift-interchange-Alfvén fluid model. The standard blob models are reproduced in the cold ion case. Even though usually neglected, in the scrape-off layer the ion temperature can exceed the electron temperature by an order of magnitude. The ion pressure affects the dynamics of filaments, among other ways, by adding to the interchange drive and the polarisation current. It is shown how both effects modify the scaling laws for filament velocity as a function of its size. Simplifications for experimentally relevant limit regimes are given: the sheath dissipation, collisional, and electromagnetic regimes.

  5. A Data Quality Filter for PMU Measurements: Description, Experience, and Examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Follum, James D.; Amidan, Brett G.

    Networks of phasor measurement units (PMUs) continue to grow, and along with them, the amount of data available for analysis. With so much data, it is impractical to identify and remove poor quality data manually. The data quality filter described in this paper was developed for use with the Data Integrity and Situation Awareness Tool (DISAT), which analyzes PMU data to identify anomalous system behavior. The filter operates based only on the information included in the data files, without supervisory control and data acquisition (SCADA) data, state estimator values, or system topology information. Measurements are compared to preselected thresholds to determine if they are reliable. Along with the filter's description, examples of data quality issues from application of the filter to nine months of archived PMU data are provided. The paper is intended to aid the reader in recognizing and properly addressing data quality issues in PMU data.
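
    A sketch of a purely threshold-based quality screen in the spirit described above; the plausibility ranges below are hypothetical placeholders, not the thresholds actually used by DISAT.

```python
# Sketch of a threshold-only PMU data quality screen. The plausibility ranges
# below are hypothetical placeholders, not the thresholds used by DISAT.
def pmu_sample_ok(frequency_hz, voltage_pu):
    """Flag a measurement unreliable if it falls outside preselected ranges."""
    freq_ok = 59.0 <= frequency_hz <= 61.0      # nominal 60 Hz system (assumed)
    volt_ok = 0.8 <= voltage_pu <= 1.2          # per-unit magnitude (assumed)
    return freq_ok and volt_ok

samples = [(60.01, 1.02), (0.0, 1.02), (59.98, 0.0)]
print([pmu_sample_ok(f, v) for f, v in samples])   # [True, False, False]
```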

  6. Constructing financial network based on PMFG and threshold method

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao; Song, Fu-Tie

    2018-04-01

    Based on planar maximally filtered graph (PMFG) and threshold method, we introduced a correlation-based network named PMFG-based threshold network (PTN). We studied the community structure of PTN and applied ISOMAP algorithm to represent PTN in low-dimensional Euclidean space. The results show that the community corresponds well to the cluster in the Euclidean space. Further, we studied the dynamics of the community structure and constructed the normalized mutual information (NMI) matrix. Based on the real data in the market, we found that the volatility of the market can lead to dramatic changes in the community structure, and the structure is more stable during the financial crisis.
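
    The threshold step (without the PMFG construction) can be sketched on synthetic returns; the cut-off value and data are illustrative only.

```python
# Sketch: correlation-threshold network on synthetic returns (PMFG step omitted).
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(size=(250, 20))            # 250 days x 20 synthetic assets
corr = np.corrcoef(returns, rowvar=False)

threshold = 0.3                                 # illustrative cut-off
adjacency = (np.abs(corr) >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)                  # no self-loops
print("edges kept:", adjacency.sum() // 2)
```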

  7. Systematic Biological Filter Design with a Desired I/O Filtering Response Based on Promoter-RBS Libraries.

    PubMed

    Hsu, Chih-Yuan; Pan, Zhen-Ming; Hu, Rei-Hsing; Chang, Chih-Chun; Cheng, Hsiao-Chun; Lin, Che; Chen, Bor-Sen

    2015-01-01

    In this study, robust biological filters with an external control to match a desired input/output (I/O) filtering response are engineered based on the well-characterized promoter-RBS libraries and a cascade gene circuit topology. In the field of synthetic biology, the biological filter system serves as a powerful detector or sensor to sense different molecular signals and produces a specific output response only if the concentration of the input molecular signal is higher or lower than a specified threshold. The proposed systematic design method of robust biological filters is summarized into three steps. Firstly, several well-characterized promoter-RBS libraries are established for biological filter design by identifying and collecting the quantitative and qualitative characteristics of their promoter-RBS components via nonlinear parameter estimation method. Then, the topology of synthetic biological filter is decomposed into three cascade gene regulatory modules, and an appropriate promoter-RBS library is selected for each module to achieve the desired I/O specification of a biological filter. Finally, based on the proposed systematic method, a robust externally tunable biological filter is engineered by searching the promoter-RBS component libraries and a control inducer concentration library to achieve the optimal reference match for the specified I/O filtering response.

  8. Segmentation of Retinal Blood Vessels Based on Cake Filter

    PubMed Central

    Bao, Xi-Rong; Ge, Xin; She, Li-Huang; Zhang, Shi

    2015-01-01

    Segmentation of retinal blood vessels is significant for the diagnosis and evaluation of ocular diseases like glaucoma and systemic diseases such as diabetes and hypertension. Segmentation of small and low-contrast retinal blood vessels is still a challenging problem. To solve this problem, a new method based on the cake filter is proposed. First, a quadrature filter band called the cake filter band is constructed in the Fourier domain. Then, real-component fusion is used to separate the blood vessels from the background. Finally, the blood vessel network is obtained with a self-adaptive threshold. Experiments on the STARE database indicate that the new method performs better than traditional ones on small-vessel extraction, average accuracy rate, and true and false positive rates. PMID:26636095

  9. Complementary theta resonance filtering by two spatially segregated mechanisms in CA1 hippocampal pyramidal neurons.

    PubMed

    Hu, Hua; Vervaeke, Koen; Graham, Lyle J; Storm, Johan F

    2009-11-18

    Synaptic input to a neuron may undergo various filtering steps, both locally and during transmission to the soma. Using simultaneous whole-cell recordings from soma and apical dendrites from rat CA1 hippocampal pyramidal cells, and biophysically detailed modeling, we found two complementary resonance (bandpass) filters of subthreshold voltage signals. Both filters favor signals in the theta (3-12 Hz) frequency range, but have opposite location, direction, and voltage dependencies: (1) dendritic H-resonance, caused by h/HCN-channels, filters signals propagating from soma to dendrite when the membrane potential is close to rest; and (2) somatic M-resonance, caused by M/Kv7/KCNQ and persistent Na(+) (NaP) channels, filters signals propagating from dendrite to soma when the membrane potential approaches spike threshold. Hippocampal pyramidal cells participate in theta network oscillations during behavior, and we suggest that these dual, polarized theta resonance mechanisms may convey voltage-dependent tuning of theta-mediated neural coding in the entorhinal/hippocampal system during locomotion, spatial navigation, memory, and sleep.

  10. Evaluation of spatial filtering on the accuracy of wheat area estimate

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.

    1982-01-01

    A 3 x 3 pixel spatial filter for postclassification was used for wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx 40 sq km each), and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same procedure of automatic classification using spatial filtering for postclassification to the whole study area, the accuracy of the area estimate was improved, with the overestimate decreasing from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, the postclassification procedure using a spatial filter provides a more accurate area estimate by reducing classification errors.
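
    A sketch of a 3 x 3 majority-vote postclassification filter with a simple agreement threshold, applied to a toy label map; the vote threshold and the handling of image borders are our own simplifications, not the study's exact procedure.

```python
# Sketch of a 3 x 3 majority (modal) postclassification filter with a simple
# agreement threshold, applied to a toy label map (not the LANDSAT data).
import numpy as np

def majority_filter(labels, min_votes=5):
    out = labels.copy()
    rows, cols = labels.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = labels[i - 1:i + 2, j - 1:j + 2].ravel()
            values, counts = np.unique(window, return_counts=True)
            winner = values[counts.argmax()]
            # Relabel only when the neighbourhood vote clears the threshold.
            if counts.max() >= min_votes:
                out[i, j] = winner
    return out

toy = np.array([[1, 1, 1, 2],
                [1, 2, 1, 2],
                [1, 1, 1, 2],
                [2, 2, 2, 2]])
print(majority_filter(toy))
```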

  11. Development of a Voice Activity Controlled Noise Canceller

    PubMed Central

    Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini

    2012-01-01

    In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prevents the reference input of the ANC from containing any significant amount of actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance evaluation of the proposed approach is quoted in terms of signal-to-noise ratio improvements as well as mean square error (MSE) convergence of the ANC. The new approach showed an improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational load of the adaptive process is reduced since the output of the adaptive filter is efficiently calculated only during non-speech periods. PMID:22778667
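
    A minimal sketch of a frame-based VAD whose energy threshold is nudged by the canceller's residual level, in the spirit described above; the frame length, initial thresholds and adaptation factor are assumptions, not the paper's values.

```python
# Sketch of a frame-based VAD using full-band energy and zero-crossing rate,
# with the energy threshold adapted toward a residual-level signal.
import numpy as np

def vad(frames, residual_levels, e_thr=0.01, z_thr=0.3, alpha=0.1):
    decisions = []
    for frame, res in zip(frames, residual_levels):
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        is_speech = energy > e_thr and zcr < z_thr
        decisions.append(is_speech)
        # Adapt the energy threshold toward the canceller's residual level
        # during non-speech periods (illustrative adaptation rule).
        if not is_speech:
            e_thr = (1 - alpha) * e_thr + alpha * res
    return decisions

rng = np.random.default_rng(5)
noise_frames = [rng.normal(0, 0.05, 160) for _ in range(10)]
print(vad(noise_frames, residual_levels=[0.002] * 10))
```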

  12. Psychometric functions for informational masking

    NASA Astrophysics Data System (ADS)

    Lutfi, Robert A.; Kistler, Doris J.; Callahan, Michael R.; Wightman, Frederic L.

    2003-04-01

    The method of constant stimuli was used to obtain complete psychometric functions (PFs) from 44 normal-hearing listeners in conditions known to produce varying amounts of informational masking. The task was to detect a pure-tone signal in the presence of a broadband noise and in the presence of multitone maskers with frequencies and amplitudes that varied at random from one presentation to the next. Relative to the broadband noise condition, significant reductions were observed in both the slope and the upper asymptote of the PF for multitone maskers producing large amounts of informational masking. Slope was affected more for some listeners while asymptote was affected more for others. Mean slopes and asymptotes varied nonmonotonically with the number of masker components in much the same manner as mean thresholds. The results are consistent with a model that assumes trial-by-trial judgments are based on a weighted sum of dB levels at the output of independent auditory filters. For many listeners, however, the weights appear to reflect how often a nonsignal auditory filter is mistaken for the signal filter. For these listeners, adaptive procedures may produce a significant bias in the estimates of threshold for conditions of informational masking. [Work supported by NIDCD.]

  13. Feasibility of spectral CT imaging for the detection of liver lesions with gold-based contrast agents - A simulation study.

    PubMed

    Müllner, Marie; Schlattl, Helmut; Hoeschen, Christoph; Dietrich, Olaf

    2015-12-01

    To demonstrate the feasibility of gold-specific spectral CT imaging for the detection of liver lesions in humans at low concentrations of gold as targeted contrast agent. A Monte Carlo simulation study of spectral CT imaging with a photon-counting and energy-resolving detector (with 6 energy bins) was performed in a realistic phantom of the human abdomen. The detector energy thresholds were optimized for the detection of gold. The simulation results were reconstructed with the K-edge imaging algorithm; the reconstructed gold-specific images were filtered and evaluated with respect to signal-to-noise ratio and contrast-to-noise ratio (CNR). The simulations demonstrate the feasibility of spectral CT with CNRs of the specific gold signal between 2.7 and 4.8 after bilateral filtering. Using the optimized bin thresholds increases the CNRs of the lesions by up to 23% compared to bin thresholds described in former studies. Gold is a promising new CT contrast agent for spectral CT in humans; minimum tissue mass fractions of 0.2 wt% of gold are required for sufficient image contrast. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. Topological Characteristics of the Hong Kong Stock Market: A Test-based P-threshold Approach to Understanding Network Complexity

    PubMed Central

    Xu, Ronghua; Wong, Wing-Keung; Chen, Guanrong; Huang, Shuo

    2017-01-01

    In this paper, we analyze the relationship among stock networks by focusing on the statistically reliable connectivity between financial time series, which accurately reflects the underlying pure stock structure. To do so, we firstly filter out the effect of market index on the correlations between paired stocks, and then take a t-test based P-threshold approach to lessening the complexity of the stock network based on the P values. We demonstrate the superiority of its performance in understanding network complexity by examining the Hong Kong stock market. By comparing with other filtering methods, we find that the P-threshold approach extracts purely and significantly correlated stock pairs, which reflect the well-defined hierarchical structure of the market. In analyzing the dynamic stock networks with fixed-size moving windows, our results show that three global financial crises, covered by the long-range time series, can be distinguishingly indicated from the network topological and evolutionary perspectives. In addition, we find that the assortativity coefficient can manifest the financial crises and therefore can serve as a good indicator of the financial market development. PMID:28145494
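
    The test-based P-threshold step can be sketched on synthetic returns: regress out the index from each series, then keep only the stock pairs whose residual correlation is significant at the chosen P value. The data and cut-off below are illustrative only.

```python
# Sketch of the test-based P-threshold idea on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_days, n_stocks = 500, 10
index = rng.normal(size=n_days)
returns = 0.7 * index[:, None] + rng.normal(size=(n_days, n_stocks))

def residual(series, market):
    """Remove the market-index effect by simple linear regression."""
    slope, intercept = np.polyfit(market, series, 1)
    return series - (slope * market + intercept)

resid = np.column_stack([residual(returns[:, k], index) for k in range(n_stocks)])

p_threshold = 0.01
edges = []
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        r, p = stats.pearsonr(resid[:, i], resid[:, j])
        if p < p_threshold:
            edges.append((i, j, r))
print("significant pairs:", len(edges))
```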

  15. The application of the detection filter to aircraft control surface and actuator failure detection and isolation

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Motyka, P.; Hall, S. R.

    1985-01-01

    The performance of the detection filter in detecting and isolating aircraft control surface and actuator failures is evaluated. The basic detection filter theory assumption of no direct input-output coupling is violated in this application due to the use of acceleration measurements for detecting and isolating failures. With this coupling, residuals produced by control surface failures may only be constrained to a known plane rather than to a single direction. A detection filter design with such planar failure signatures is presented, with the design issues briefly addressed. In addition, a modification to constrain the residual to a single known direction even with direct input-output coupling is also presented. Both the detection filter and the modification are tested using a nonlinear aircraft simulation. While no thresholds were selected, both filters demonstrated an ability to detect control surface and actuator failures. Failure isolation may be a problem if there are several control surfaces which produce similar effects on the aircraft. In addition, the detection filter was sensitive to wind turbulence and modeling errors.

  16. Blending of phased array data

    NASA Astrophysics Data System (ADS)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme 'beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) in only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with fewer channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying an iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.

  17. ILS Glide Slope Standards. Part 2. Validation of Proposed Flight Inspection Filter Systems, and Responses of Simulated Aircraft on Coupled Approaches

    DTIC Science & Technology

    1975-10-01

    [OCR-garbled figure pages from the scanned report; only the captions are partially recoverable: Figure A-1, Response of Filter System No. 2 to Prototype Glide Slope No. 1; Figure A-21, Responses of the CV-880 Aircraft with ILS Automatic Landing System.]

  18. Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.

    PubMed

    Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E

    2006-04-01

    This study evaluates the effects of a high-frequency hearing loss, simulated by the high-pass-noise masking method, on the click-evoked brain stem auditory evoked potential (BAEP) characteristics in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) to wave V threshold, using 5 dB steps, in eleven 58- to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate polarity clicks, and subtracted, providing the rarefaction-condensation potential (RCDP). The procedure was repeated while constant-level, high-pass filtered (HPF) noise was superposed on the click. Cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, wave V and RCDP thresholds, and the slope of the wave V latency-intensity curve (LIC), were collected. The intensity range at which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no-noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the superposed HPF noise reached the 4 kHz area. Wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing of frequencies from 8 kHz and above escaped detection through click BAEP study in dogs. Frequencies above 13 kHz were however not specifically addressed in this study.

  19. Nonlinear Filtering Effects of Reservoirs on Flood Frequency Curves at the Regional Scale: RESERVOIRS FILTER FLOOD FREQUENCY CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wei; Li, Hong-Yi; Leung, L. Ruby

    Anthropogenic activities, e.g., reservoir operation, may alter the characteristics of Flood Frequency Curve (FFC) and challenge the basic assumption of stationarity used in flood frequency analysis. This paper presents a combined data-modeling analysis of the nonlinear filtering effects of reservoirs on the FFCs over the contiguous United States. A dimensionless Reservoir Impact Index (RII), defined as the total upstream reservoir storage capacity normalized by the annual streamflow volume, is used to quantify reservoir regulation effects. Analyses are performed for 388 river stations with an average record length of 50 years. The first two moments of the FFC, mean annual maximum flood (MAF) and coefficient of variations (CV), are calculated for the pre- and post-dam periods and compared to elucidate the reservoir regulation effects as a function of RII. It is found that MAF generally decreases with increasing RII but stabilizes when RII exceeds a threshold value, and CV increases with RII until a threshold value beyond which CV decreases with RII. The processes underlying the nonlinear threshold behavior of MAF and CV are investigated using three reservoir models with different levels of complexity. All models capture the non-linear relationships of MAF and CV with RII, suggesting that the basic flood control function of reservoirs is key to the non-linear relationships. The relative roles of reservoir storage capacity, operation objectives, available storage prior to a flood event, and reservoir inflow pattern are systematically investigated. Our findings may help improve flood-risk assessment and mitigation in regulated river systems at the regional scale.

  20. A minimally-resolved immersed boundary model for reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar

    2013-12-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.

  1. Pedestal and edge electrostatic turbulence characteristics from an XGC1 gyrokinetic simulation

    DOE PAGES

    Churchill, R. M.; Chang, C. S.; Ku, S.; ...

    2017-08-30

    Understanding the multi-scale neoclassical and turbulence physics in the edge region (pedestal + scrape-off layer (SOL)) is required in order to reliably predict performance in future fusion devices. We explore turbulent characteristics in the edge region from a multi-scale neoclassical and turbulent XGC1 gyrokinetic simulation in a DIII-D like tokamak geometry, here excluding neutrals and collisions. For an H-mode type plasma with steep pedestal, it is found that the electron density fluctuations increase towards the separatrix, and stay high well into the SOL, reaching a maximum value of $\delta n_e/\bar{n}_e \sim 0.18$. Blobs are observed, born around the magnetic separatrix surface and propagate radially outward with velocities generally less than 1 km s-1. Strong poloidal motion of the blobs is also present, near 20 km s-1, consistent with E × B rotation. The electron density fluctuations show a negative skewness in the closed field-line pedestal region, consistent with the presence of 'holes', followed by a transition to strong positive skewness across the separatrix and into the SOL. These simulations indicate that not only neoclassical phenomena, but also turbulence, including the blob-generation mechanism, can remain important in the steep H-mode pedestal and SOL. Lastly, qualitative comparisons will be made to experimental observations.

  2. Scale-space for empty catheter segmentation in PCI fluoroscopic images.

    PubMed

    Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent

    2017-07-01

    In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. Since the guiding catheter is a commonly visible landmark, its segmentation is an important and difficult building block for Percutaneous Coronary Intervention (PCI) procedure modeling. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e. the cluster of curve blobs on this hierarchy, that maximizes the likelihood to be an empty catheter is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients. As a result, we obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. We develop a novel structural scale-space to segment a structured object, the empty catheter, in challenging situations where the information content is very sparse in the images. Fully-automatic empty catheter segmentation in X-ray fluoroscopic images is an important and preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal location of other interventional tools.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Ricardo Maqueda; Dr. Fred M. Levinton

    Nova Photonics, Inc. has a collaborative effort at the National Spherical Torus Experiment (NSTX). This collaboration, based on fast imaging of visible phenomena, has provided key insights on edge turbulence, intermittency, and edge phenomena such as edge localized modes (ELMs) and multi-faceted axisymmetric radiation from the edge (MARFE). Studies have been performed in all these areas. The edge turbulence/intermittency studies make use of the Gas Puff Imaging diagnostic developed by the Principal Investigator (Ricardo Maqueda) together with colleagues from PPPL. This effort is part of the International Tokamak Physics Activity (ITPA) edge, scrape-off layer and divertor group joint activity (DSOL-15: Inter-machine comparison of blob characteristics). The edge turbulence/blob study has been extended from the current location near the midplane of the device to the lower divertor region of NSTX. The goal of this effort was to study turbulence-born blobs in the vicinity of the X-point region and their circuit closure on divertor sheaths or high density regions in the divertor. In the area of ELMs and MARFEs we have studied and characterized the mode structure and evolution of the ELM types observed in NSTX, as well as the observed interaction between MARFEs and ELMs. This interaction could have substantial implications for future devices where radiative divertor regions are required to maintain detachment from the divertor plasma facing components.

  4. Evaporating Spray in Supersonic Streams Including Turbulence Effects

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.

    2006-01-01

    Evaporating spray plays an important role in spray combustion processes. This paper describes the development of a new finite-conductivity evaporation model, based on the two-temperature film theory, for two-phase numerical simulation using Eulerian-Lagrangian method. The model is a natural extension of the T-blob/T-TAB atomization/spray model which supplies the turbulence characteristics for estimating effective thermal diffusivity within the droplet phase. Both one-way and two-way coupled calculations were performed to investigate the performance of this model. Validation results indicate the superiority of the finite-conductivity model in low speed parallel flow evaporating sprays. High speed cross flow spray results indicate the effectiveness of the T-blob/T-TAB model and point to the needed improvements in high speed evaporating spray modeling.

  5. Improving ontology matching with propagation strategy and user feedback

    NASA Astrophysics Data System (ADS)

    Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu

    2015-07-01

    Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which can help to identify the correct correspondences and to recover correspondences that would otherwise be missed. The estimation of an appropriate threshold is a difficult task. We propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.

  6. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness and low-contrast images is critical for human observation. Traditional histogram equalizations over-enhance images, with the result of details being lost. To compress over-enhancement, a lower-upper-threshold correlation method is proposed for underwater range-gated imaging self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and compresses over-enhancement. It is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated by the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained with enhanced details. Finally, the proof experiments are performed. Peak signal-to-noise-ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global and regions of interest images. The evaluation results demonstrate that the proposed method adaptively selects the proper upper and lower thresholds under different conditions. The proposed method contributes to URGI with effective image enhancement for human eyes.
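
    A sketch of the underlying double-plateau histogram equalization, with the paper's automatic threshold selection simplified to a local-maximum-based upper plateau and a lower plateau tied to the nonzero bins; the specific scaling factors are assumptions made only for illustration.

```python
# Sketch of double-plateau histogram equalization: histogram counts are clipped
# between a lower and an upper plateau before the equalization mapping is built.
import numpy as np

def double_plateau_equalize(img, lower, upper):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    clipped = np.clip(hist, lower, upper)
    clipped[hist == 0] = 0.0                     # keep empty bins empty
    cdf = np.cumsum(clipped)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(7)
img = rng.integers(20, 80, size=(64, 64), dtype=np.uint8)   # dark, low contrast
hist = np.bincount(img.ravel(), minlength=256).astype(float)
upper = 0.8 * hist.max()                         # near the dominant local maximum
lower = 0.2 * hist[hist > 0].mean()              # tied to the nonzero bins
out = double_plateau_equalize(img, lower, upper)
print(img.mean(), out.mean())
```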

  7. Kv1 channels control spike threshold dynamics and spike timing in cortical pyramidal neurones

    PubMed Central

    Higgs, Matthew H; Spain, William J

    2011-01-01

    Previous studies showed that cortical pyramidal neurones (PNs) have a dynamic spike threshold that functions as a high-pass filter, enhancing spike timing in response to high-frequency input. While it is commonly assumed that Na+ channel inactivation is the primary mechanism of threshold accommodation, the possible role of K+ channel activation in fast threshold changes has not been well characterized. The present study tested the hypothesis that low-voltage activated Kv1 channels affect threshold dynamics in layer 2–3 PNs, using α-dendrotoxin (DTX) or 4-aminopyridine (4-AP) to block these conductances. We found that Kv1 blockade reduced the dynamic changes of spike threshold in response to a variety of stimuli, including stimulus-evoked synaptic input, current steps and ramps of varied duration, and noise. Analysis of the responses to noise showed that Kv1 channels increased the coherence of spike output with high-frequency components of the stimulus. A simple model demonstrates that a dynamic spike threshold can account for this effect. Our results show that the Kv1 conductance is a major mechanism that contributes to the dynamic spike threshold and precise spike timing of cortical PNs. PMID:21911608

  8. An effect size filter improves the reproducibility in spectral counting-based comparative proteomics.

    PubMed

    Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep

    2013-12-16

    The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives and decreased the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments where the effect size filter was used to systematically evaluate variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect size and signal level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect size post-test filter improves the statistical results of spectral count-based quantitative proteomics. The results proved that the filter increased the number of true positives while decreasing the false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying a reasonable effect size and signal level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
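
    The recommended post-test filter can be sketched directly from the thresholds quoted above (relaxed p-value cut-off, minimum absolute log2 fold change of 0.8, minimum spectral-count signal on the most abundant condition); the pseudocount used to avoid a log of zero is our own addition.

```python
# Sketch of a post-test filter combining a p-value cut-off, a minimum absolute
# log2 fold change of 0.8 and a minimum SpC signal on the most abundant condition.
import numpy as np

def effect_size_filter(p_values, spc_a, spc_b, p_cut=0.05, min_log2fc=0.8, min_spc=4):
    spc_a = np.asarray(spc_a, dtype=float)
    spc_b = np.asarray(spc_b, dtype=float)
    log2fc = np.log2((spc_a + 0.5) / (spc_b + 0.5))   # 0.5 pseudocount (assumption)
    keep = (np.asarray(p_values) < p_cut) \
         & (np.abs(log2fc) >= min_log2fc) \
         & (np.maximum(spc_a, spc_b) >= min_spc)
    return keep

p = [0.01, 0.03, 0.20, 0.001]
a = [12, 3, 30, 1]
b = [4, 2, 28, 9]
print(effect_size_filter(p, a, b))   # only proteins passing all three criteria
```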

  9. Bioconvective Plumes and Bacterial Self-Concentration at a Slanting Meniscus

    NASA Astrophysics Data System (ADS)

    Dombrowski, Christopher; Chatkaew, Sunita; Goldstein, Raymond; Kessler, John

    2004-03-01

    Aerobic bacteria, e.g. Bacillus subtilis, consume oxygen. For populations of ~10^9 cells/cm^3, volume fraction ~0.001, the resultant oxygen deficit results in the creation of an oxygen concentration gradient due to influx from air/fluid interfaces. The bacteria swim up that gradient. Since the cells are denser than water by ~10%, the mean density of the suspension increases proportionally to cell concentration, producing unstable stratification, sinking blobs, plumes and the like. This well-known effect is constructively modified when the oxygen-supplying interface is not perpendicular to gravity. In that case, where there is no threshold to collective instability, the organisms near the interface descend along it, usually in the form of plumes. This phenomenon is somewhat analogous to the Boycott Effect in sedimentation, in which tilting the chamber walls results in large-scale flows. Data on such curved interfaces, achieved at corners, and with droplets of suspension, sessile and pendant, will be presented. The practical significance of the phenomenon is self-concentration of bacteria, in nature and in the laboratory. We shall also present insights derived from a mathematical model and computer simulations.

  10. Zooming into the Paraná-Etendeka silicic volcanics, southern Brasil: a physical volcanological approach

    NASA Astrophysics Data System (ADS)

    Gualda, G. A. R.; Gravley, D. M.; Harmon, L. J.; Tramontano, S.; Luchetti, A. C. F.; Nardy, A.

    2015-12-01

    Paraná-Etendeka volcanism led to the opening of the Atlantic Ocean during the early Cretaceous. Most Paraná research has focused on the regional scale geochemistry and geochronology. Complementarily, we have taken a physical volcanological approach to elucidate the styles and locations of silicic eruptions with a focus on extrusive vs. explosive varieties, and an ultimate goal to characterise the crustal magmatic conditions. Through satellite to microscopic observations we can zoom from volcanic edifice and deposit morphologies, remarkably preserved in the Mesozoic landscape, to primary microscopic textures. Lava domes appear in clusters with high relief and are surrounded by lower flat-topped terraces comprised of multiple tabular-shaped packages with conspicuous horizontal jointing. Joint thickness coincides with layering from mm-scale laminations to larger lens-shaped blobs up to 20 cm thick and more than a metre long. These layered deposits appear to be compressed and/or stretched into the finer laminations and grade up into the fatter lens-shaped blobs. In other regions, extensive plateaus dominate the landscape with flat-lying flow packages continuous over 10's of kilometres and possibly further. Rheomorphism is evident in places with sub-parallel joints that grade up into a zone of deformation where curvilinear to overturned joint patterns reflect lateral forcing in a more ductile flow regime. Microscopically the blobs and surrounding matrix are almost indistinguishable except for subtle differences in spherulite textures, zonal alteration and distribution of crystal sizes. Although our research is relatively nascent, our observations suggest eruptions may have ranged from edifice building effusive ones to more explosive ones, albeit possibly relatively low fire fountains feeding hybridised lava/pyroclastic flows. Some of these flows are extensive, tens to possibly hundreds of kilometres long, consistent with high eruption rates of hot magma. These interpretations are consistent with published temperatures as hot as 1050 degrees for these silicic magmas. Preliminary work focusing on glass compositions and coexisting phase assemblages within the blobs reveals that silicic magmas resided in the shallow crust prior to eruption.

  11. Separation of ice crystals from interstitial aerosol particles using virtual impaction at the Fifth International Ice Nucleation Workshop FIN-3

    NASA Astrophysics Data System (ADS)

    Roesch, M.; Garimella, S.; Roesch, C.; Zawadowicz, M. A.; Katich, J. M.; Froyd, K. D.; Cziczo, D. J.

    2016-12-01

    In this study, a parallel-plate ice chamber, the SPectrometer for Ice Nuclei (SPIN, DMT Inc.), was combined with a pumped counterflow virtual impactor (PCVI, BMI Inc.) to separate ice crystals from interstitial aerosol particles by their aerodynamic size. These measurements were part of the FIN-3 workshop, which took place in fall 2015 at Storm Peak Laboratory (SPL), a high-altitude mountaintop facility (3220 m above mean sea level) in the Rocky Mountains. The investigated particles were sampled from ambient air and were exposed to cirrus-like conditions inside SPIN (-40°C, 130% RHice). Previous SPIN experiments under these conditions showed that ice crystals were found to be in the super-micron range. Connected to the outlet of the ice chamber, the PCVI was adjusted to pass all particulates aerodynamically larger than 3.5 micrometers into the sample flow, while smaller ones were rejected and removed by a pump flow. Using this technique reduces the number of interstitial aerosol particles, which could bias subsequent ice nucleating particle (INP) analysis. Downstream of the PCVI, the separated ice crystals were evaporated and the flow with the remaining INPs was split between a particle analysis by laser mass spectrometry (PALMS) instrument, a laser aerosol spectrometer (LAS, TSI Inc.), and a single particle soot photometer (SP2, DMT Inc.). Based on the sample flow and the resolution of the measured particle data, the lowest concentration threshold for the SP2 instrument was 294 INP L-1 and for the LAS instrument 60 INP L-1. Applying these thresholds as filters to the measured PALMS time series, 944 valid INP spectra were identified using the SP2 threshold and 445 using the LAS threshold. A sensitivity study determining the number of good INP spectra as a function of the filter threshold concentration showed a two-phase linear growth with increasing threshold concentration, with a breakpoint around 100 INP L-1.

  12. UltiMatch-NL: A Web Service Matchmaker Based on Multiple Semantic Filters

    PubMed Central

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

    In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters namely Signature-based and Description-based on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. Thus it is a further step towards fully automated Web service discovery via making this process more semantic-aware. In addition, a new technique is proposed to weight and combine the results of different filters of UltiMatch-NL, automatically. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. In order to evaluate UltiMatch-NL, the repository of OWLS-TC is used. The performance evaluation based on standard measures from the information retrieval field shows that semantic matching of OWL-S services can be significantly improved by incorporating designed matching filters. PMID:25157872

  13. SME filter approach to multiple target tracking with false and missing measurements

    NASA Astrophysics Data System (ADS)

    Lee, Yong J.; Kamen, Edward W.

    1993-10-01

    The symmetric measurement equation (SME) filter for track maintenance in multiple target tracking is extended to the general case when there are an arbitrary unknown number of false and missing position measurements in the measurement set at any time point. It is assumed that the number N of targets is known a priori and that the target motions consist of random perturbations of constant-velocity trajectories. The key idea in the paper is to generate a new measurement vector from sums-of-products of the elements of 'feasible' N-element data vectors that pass a thresholding operation in the sums-of-products framework. Via this construction, the data association problem is completely avoided, and in addition, there is no need to identify which target measurements may correspond to false returns or which target measurements may be missing. A computer simulation of SME filter performance is given, including a comparison with the associated filter (a benchmark) and the joint probabilistic data association (JPDA) filter.

  14. UltiMatch-NL: a Web service matchmaker based on multiple semantic filters.

    PubMed

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

    In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters namely Signature-based and Description-based on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. Thus it is a further step towards fully automated Web service discovery via making this process more semantic-aware. In addition, a new technique is proposed to weight and combine the results of different filters of UltiMatch-NL, automatically. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a threshold value of similarity. In order to evaluate UltiMatch-NL, the repository of OWLS-TC is used. The performance evaluation based on standard measures from the information retrieval field shows that semantic matching of OWL-S services can be significantly improved by incorporating designed matching filters.

  15. Characteristics of polar coronal hole jets

    NASA Astrophysics Data System (ADS)

    Chandrashekhar, K.; Bemporad, A.; Banerjee, D.; Gupta, G. R.; Teriaca, L.

    2014-01-01

    Context. High spatial- and temporal-resolution images of coronal hole regions show a dynamical environment where mass flows and jets are frequently observed. These jets are believed to be important for the coronal heating and the acceleration of the fast solar wind. Aims: We studied the dynamics of two jets seen in a polar coronal hole with a combination of imaging from EIS and XRT onboard Hinode. We observed drift motions related to the evolution and formation of these small-scale jets, which we tried to model as well. Methods: Stack plots were used to find the drift and flow speeds of the jets. A toy model was developed by assuming that the observed jet is generated by a sequence of single reconnection events where single unresolved blobs of plasma are ejected along open field lines, then expand and fall back along the same path, following a simple ballistic motion. Results: We found observational evidence that supports the idea that polar jets are very likely produced by multiple small-scale reconnections occurring at different times in different locations. These eject plasma blobs that flow up and down with a motion very similar to a simple ballistic motion. The associated drift speed of the first jet is estimated to be ≈27 km s^-1. The average outward speed of the first jet is ≈171 km s^-1, well below the escape speed, hence if simple ballistic motion is considered, the plasma will not escape the Sun. The second jet was observed in the south polar coronal hole with three XRT filters, namely, C-poly, Al-poly, and Al-mesh filters. Many small-scale (≈3″-5″) fast (≈200-300 km s^-1) ejections of plasma were observed on the same day; they propagated outwards. We observed that the stronger jet drifted at all altitudes along the jet with the same drift speed of ≃7 km s^-1. We also observed that the bright point associated with the first jet is a part of a sigmoid structure. The time of appearance of the sigmoid and that of the ejection of plasma from the bright point suggest that the sigmoid is the progenitor of the jet. Conclusions: The enhancement in the light curves of low-temperature EIS lines in the later phase of the jet lifetime and the shape of the jet's stack plots suggest that the jet material falls back, and most likely cools down. To further support this conclusion, the observed drifts were interpreted within a scenario where reconnection progressively shifts along a magnetic structure, leading to the sequential appearance of jets of about the same size and physical characteristics. On this basis, we also propose a simple qualitative model that mimics the observations. Movies 1-3 are available in electronic form at http://www.aanda.org
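
    A minimal version of the ballistic toy model described above can be written down directly, assuming a constant solar surface gravity and using the ≈171 km s^-1 outward speed quoted in the abstract; the model in the paper may treat gravity and geometry differently.

```python
# Toy ballistic model for a jet blob (a sketch, not the authors' code): constant solar
# surface gravity is assumed, so the blob rises, stops, and falls back along the same path.
import numpy as np

g_sun = 0.274        # solar surface gravity, km/s^2 (assumed constant with height)
v0 = 171.0           # initial outward speed from the abstract, km/s (well below ~618 km/s escape)

t_apex = v0 / g_sun                   # time to reach maximum height, s
h_apex = v0**2 / (2.0 * g_sun)        # maximum height, km

t = np.linspace(0.0, 2.0 * t_apex, 200)
height = v0 * t - 0.5 * g_sun * t**2  # ballistic trajectory h(t), rise and fall
print(f"apex after ~{t_apex/60:.0f} min at ~{h_apex:.0f} km above the launch point")
```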

  16. Adversarial Threshold Neural Computer for Molecular de Novo Design.

    PubMed

    Putin, Evgeny; Asadulaev, Arip; Vanhaelen, Quentin; Ivanenkov, Yan; Aladinskaya, Anastasia V; Aliper, Alex; Zhavoronkov, Alex

    2018-03-30

    In this article, we propose the deep neural network Adversarial Threshold Neural Computer (ATNC). The ATNC model is intended for the de novo design of novel small-molecule organic structures. The model is based on generative adversarial network architecture and reinforcement learning. ATNC uses a Differentiable Neural Computer as a generator and has a new specific block, called adversarial threshold (AT). AT acts as a filter between the agent (generator) and the environment (discriminator + objective reward functions). Furthermore, to generate more diverse molecules we introduce a new objective reward function named Internal Diversity Clustering (IDC). In this work, ATNC is tested and compared with the ORGANIC model. Both models were trained on the SMILES string representation of the molecules, using four objective functions (internal similarity, Muegge druglikeness filter, presence or absence of sp3-rich fragments, and IDC). The SMILES representations of 15K druglike molecules from the ChemDiv collection were used as a training data set. For the different functions, ATNC outperforms ORGANIC. Combined with the IDC, ATNC generates 72% of valid and 77% of unique SMILES strings, while ORGANIC generates only 7% of valid and 86% of unique SMILES strings. For each set of molecules generated by ATNC and ORGANIC, we analyzed distributions of four molecular descriptors (number of atoms, molecular weight, logP, and TPSA) and calculated five chemical statistical features (internal diversity, number of unique heterocycles, number of clusters, number of singletons, and number of compounds that have not been passed through medicinal chemistry filters). Analysis of key molecular descriptors and chemical statistical features demonstrated that the molecules generated by ATNC elicited better druglikeness properties. We also performed in vitro validation of the molecules generated by ATNC; results indicated that ATNC is an effective method for producing hit compounds.

  17. The effect of laser ablation parameters on optical limiting properties of silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Gursoy, Irmak; Yaglioglu, Halime Gul

    2017-09-01

    This paper presents the effect of laser ablation parameters on the optical limiting properties of silver nanoparticles. Current applications of lasers such as range finding, guidance, detection, illumination and designation have increased the potential of damaging optical imaging systems or eyes, temporarily or permanently. These applications introduce risks for sensors or eyes when the laser power is higher than the damage threshold of the detection system. There are some ways to protect these systems, such as neutral density (ND) filters, shutters, etc. However, these limiters reduce the total amount of light that gets into the system. Also, their response time may not be fast enough to prevent damage, and they cause a loss in performance due to reduced transmission or contrast. Therefore, optical limiting filters are needed that are transparent at low laser intensities and limit or block high laser intensities. Metal nanoparticles are good candidates for such optical limiting filters for ns pulsed lasers or CW lasers due to their high damage thresholds. In this study we investigated the optical limiting performance of silver nanoparticles produced by the laser ablation technique. A high-purity silver target immersed in pure water was ablated with a Nd:YAG nanosecond laser at 532 nm. The effect of altering laser power and ablation time on the laser ablation efficiency of nanoparticles was investigated experimentally and optimum values were specified. An open-aperture Z-scan experiment was used to investigate the effect of laser ablation parameters on the optical limiting performance of silver nanoparticles in pure water. It was found that longer ablation time decreases the optical limiting threshold. These results are useful for obtaining high-performance optical limiters from silver nanoparticle solutions.

  18. Photodiode-based cutting interruption sensor for near-infrared lasers.

    PubMed

    Adelmann, B; Schleier, M; Neumeier, B; Hellmann, R

    2016-03-01

    We report on a photodiode-based sensor system to detect cutting interruptions during laser cutting with a fiber laser. An InGaAs diode records the thermal radiation from the process zone with a ring mirror and optical filter arrangement mounted between a collimation unit and a cutting head. The photodiode current is digitized with a sample rate of 20 kHz and filtered with a Chebyshev Type I filter. From the measured signal during the piercing, a threshold value is calculated. When the diode signal exceeds this threshold during cutting, a cutting interruption is indicated. This method is applied to sensor signals from cutting mild steel, stainless steel, and aluminum, as well as different material thicknesses and also laser flame cutting, showing that cutting interruptions can be detected in a broad variety of applications. In a series of 83 incomplete cuts, every cutting interruption is successfully detected (alpha error of 0%), while no cutting interruption is reported in 266 complete cuts (beta error of 0%). This remarkably high detection rate and low error rate, together with the ability to handle different materials and thicknesses and the easy mounting of the sensor unit on existing cutting machines, highlight the enormous potential of this sensor system for industrial applications.
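
    The described signal chain can be sketched as follows; the filter order and cutoff, the threshold rule, and the synthetic signal are assumptions for illustration, since the abstract only states the sample rate and the filter family.

```python
# Sketch of the described signal chain (filter order/cutoff and threshold rule are
# assumptions, not values from the paper): filter the 20 kHz photodiode signal, derive a
# threshold from the piercing phase, and flag a cutting interruption when it is exceeded.
import numpy as np
from scipy.signal import cheby1, lfilter

fs = 20_000                                            # sample rate, Hz
b, a = cheby1(N=4, rp=1, Wn=500, btype='low', fs=fs)   # Chebyshev Type I low-pass (assumed design)

def detect_interruption(diode_signal, piercing_samples, margin=1.5):
    """Return True if the filtered cutting signal exceeds a piercing-derived threshold."""
    filtered = lfilter(b, a, diode_signal)
    threshold = margin * np.max(filtered[:piercing_samples])   # assumed threshold rule
    return bool(np.any(filtered[piercing_samples:] > threshold))

rng = np.random.default_rng(0)
signal = 1.0 + 0.05 * rng.standard_normal(fs)   # 1 s of synthetic diode current
signal[15_000:] += 2.0                          # simulated interruption: hotter process zone
print(detect_interruption(signal, piercing_samples=2_000))
```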

  19. A quantitative analysis of spectral mechanisms involved in auditory detection of coloration by a single wall reflection.

    PubMed

    Buchholz, Jörg M

    2011-07-01

    Coloration detection thresholds (CDTs) were measured for a single reflection as a function of spectral content and reflection delay for diotic stimulus presentation. The direct sound was a 320-ms long burst of bandpass-filtered noise with varying lower and upper cut-off frequencies. The resulting threshold data revealed that: (1) sensitivity decreases with decreasing bandwidth and increasing reflection delay and (2) high-frequency components contribute less to detection than low-frequency components. The auditory processes that may be involved in coloration detection (CD) are discussed in terms of a spectrum-based auditory model, which is conceptually similar to the pattern-transformation model of pitch (Wightman, 1973). Hence, the model derives an auto-correlation function of the input stimulus by applying a frequency analysis to an auditory representation of the power spectrum. It was found that, to successfully describe the quantitative behavior of the CDT data, three important mechanisms need to be included: (1) auditory bandpass filters with a narrower bandwidth than classic Gammatone filters, the increase in spectral resolution was here linked to cochlear suppression, (2) a spectral contrast enhancement process that reflects neural inhibition mechanisms, and (3) integration of information across auditory frequency bands. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Effect of efferent activation on binaural frequency selectivity.

    PubMed

    Verhey, Jesko L; Kordus, Monika; Drga, Vit; Yasin, Ifat

    2017-07-01

    Binaural notched-noise experiments indicate a reduced frequency selectivity of the binaural system compared to monaural processing. The present study investigates how auditory efferent activation (via the medial olivocochlear system) affects binaural frequency selectivity in normal-hearing listeners. Thresholds were measured for a 1-kHz signal embedded in a diotic notched-noise masker for various notch widths. The signal was either presented in phase (diotic) or in antiphase (dichotic), gated with the noise. Stimulus duration was 25 ms, in order to avoid efferent activation due to the masker or the signal. A bandpass-filtered noise precursor was presented prior to the masker and signal stimuli to activate the efferent system. The silent interval between the precursor and the masker-signal complex was 50 ms. For comparison, thresholds for detectability of the masked signal were also measured in a baseline condition without the precursor and, in addition, without the masker. On average, the results of the baseline condition indicate an effectively wider binaural filter, as expected. For both signal phases, the addition of the precursor results in effectively wider filters, which is in agreement with the hypothesis that cochlear gain is reduced due to the presence of the precursor. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - θ_n(t)) with a cumulative phase shift θ_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  2. Dynamic Key Management Schemes for Secure Group Access Control Using Hierarchical Clustering in Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Tsaur, Woei-Jiunn; Pai, Haw-Tyng

    2008-11-01

    The applications of group computing and communication motivate the requirement to provide group access control in mobile ad hoc networks (MANETs). Groups in MANETs operate in a decentralized manner and accommodate membership dynamically. Moreover, due to the lack of centralized control, MANETs' groups are inherently insecure and vulnerable to attacks from both within and outside the groups. Such features make access control more challenging in MANETs. Recently, several researchers have proposed group access control mechanisms in MANETs based on a variety of threshold signatures. However, these mechanisms cannot actually satisfy MANETs' dynamic environments. This is because threshold-based mechanisms cannot operate when the number of members does not reach the threshold value. Hence, by combining an efficient elliptic curve cryptosystem, a self-certified public key cryptosystem, and a secure filter technique, we construct dynamic key management schemes based on hierarchical clustering for securing group access control in MANETs. Specifically, the proposed schemes can maintain secure group access control by renewing the secure filters of only a few cluster heads when a cluster head joins or leaves a cross-cluster. In this way, the proposed group access control scheme can be very effective for securing practical applications in MANETs.

  3. Spectro-temporal modulation masking patterns reveal frequency selectivity.

    PubMed

    Oetjen, Arne; Verhey, Jesko L

    2015-02-01

    The present study investigated the possibility that the human auditory system demonstrates frequency selectivity to spectro-temporal amplitude modulations. Threshold modulation depth for detecting sinusoidal spectro-temporal modulations was measured using a generalized masked threshold pattern paradigm with narrowband masker modulations. Four target spectro-temporal modulations were examined, differing in their temporal and spectral modulation frequencies: a temporal modulation of -8, 8, or 16 Hz combined with a spectral modulation of 1 cycle/octave and a temporal modulation of 4 Hz combined with a spectral modulation of 0.5 cycles/octave. The temporal center frequencies of the masker modulation ranged from 0.25 to 4 times the target temporal modulation. The spectral masker-modulation center-frequencies were 0, 0.5, 1, 1.5, and 2 times the target spectral modulation. For all target modulations, the pattern of average thresholds for the eight normal-hearing listeners was consistent with the hypothesis of a spectro-temporal modulation filter. Such a pattern of modulation-frequency sensitivity was predicted on the basis of psychoacoustical data for purely temporal amplitude modulations and purely spectral amplitude modulations. An analysis of separability indicates that, for the present data set, selectivity in the spectro-temporal modulation domain can be described by a combination of a purely spectral and a purely temporal modulation filter function.

  4. Infrared image background modeling based on improved Susan filtering

    NASA Astrophysics Data System (ADS)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, its Gaussian kernel lacks directional selectivity: edge information is not well preserved after filtering, which leaves many edge singularities in the difference image and makes target detection more difficult. To solve this problem, anisotropy is introduced and the isotropic Gaussian kernel in the SUSAN operator is replaced by an anisotropic Gaussian filter. First, an anisotropic gradient operator computes the horizontal and vertical gradients at each pixel to determine the orientation of the filter's long axis. Second, the local neighborhood of the pixel and its smoothness are used to set the filter length and the short-axis variance. The threshold of the SUSAN filter is then determined from the first-order norm of the difference between the local gray levels and their mean. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. Experimental results, evaluated with mean squared error (MSE), structural similarity (SSIM), and local signal-to-noise ratio gain (GSNR), show that the improved SUSAN filter achieves a better background modeling effect than traditional filtering algorithms: edge information in the image is effectively preserved, dim small targets are enhanced in the difference image, and the false alarm rate is greatly reduced.
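
    The key modification described above, replacing the isotropic Gaussian in the SUSAN operator with an oriented anisotropic Gaussian, can be sketched as below. This is not the full SUSAN pipeline: the per-pixel orientation and axis-length selection are simplified to a single representative orientation, and the parameters are arbitrary example values.

```python
# Sketch of the key replacement step (not the full SUSAN pipeline): build an oriented
# anisotropic Gaussian kernel whose long axis follows the local gradient direction,
# then smooth the image with it to estimate the background.
import numpy as np
from scipy.ndimage import convolve

def anisotropic_gaussian_kernel(sigma_long, sigma_short, theta, size=15):
    """Oriented 2-D Gaussian: long axis rotated by angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates into the filter frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    kernel = np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))
    return kernel / kernel.sum()

image = np.random.rand(64, 64).astype(float)       # stand-in for an infrared frame
gy, gx = np.gradient(image)                        # vertical/horizontal gradients
theta = np.arctan2(gy, gx).mean()                  # one representative orientation (simplification)
background = convolve(image, anisotropic_gaussian_kernel(4.0, 1.0, theta))
difference = image - background                    # residual used for dim small-target detection
```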

  5. SU-E-J-261: The Importance of Appropriate Image Preprocessing to Augment the Information of Radiomics Image Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Fried, D; Fave, X

    Purpose: To investigate how different image preprocessing techniques, their parameters, and different boundary handling techniques can augment the information of features and improve the features' differentiating capability. Methods: Twenty-seven NSCLC patients with a solid tumor volume and no visually obvious necrotic regions in the simulation CT images were identified. Fourteen of these patients had a necrotic region visible in their pre-treatment PET images (necrosis group), and thirteen had no visible necrotic region in the pre-treatment PET images (non-necrosis group). We investigated how image preprocessing can impact the ability of radiomics image features extracted from the CT to differentiate between the two groups. It is expected that the histogram in the necrosis group is more negatively skewed, and that the uniformity of the necrosis group is lower. Therefore, we analyzed two first-order features, skewness and uniformity, on the image inside the GTV in the intensity range [−20 HU, 180 HU] under the combination of several image preprocessing techniques: (1) applying the isotropic Gaussian or anisotropic diffusion smoothing filter with a range of parameters (Gaussian smoothing: size=11, sigma=0:0.1:2.3; anisotropic smoothing: iteration=4, kappa=0:10:110); (2) applying the boundary-adapted Laplacian filter; and (3) applying the adaptive upper threshold for the intensity range. A 2-tailed T-test was used to evaluate the differentiating capability of CT features on pre-treatment PET necrosis. Results: Without any preprocessing, no differences in either skewness or uniformity were observed between the two groups. After applying appropriate Gaussian filters (sigma >= 1.3) or anisotropic filters (kappa >= 60) with the adaptive upper threshold, skewness was significantly more negative in the necrosis group (p < 0.05). By applying the boundary-adapted Laplacian filtering after the appropriate Gaussian filters (0.5 <= sigma <= 1.1) or anisotropic filters (20 <= kappa <= 50), the uniformity was significantly lower in the necrosis group (p < 0.05). Conclusion: Appropriate selection of image preprocessing techniques allows radiomics features to extract more useful information and thereby improve prediction models based on these features.
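
    A rough sketch of the two first-order features on a smoothed, intensity-windowed GTV region is given below; the exact adaptive-upper-threshold rule and the boundary-adapted Laplacian step are not reproduced, and a synthetic volume stands in for clinical CT data.

```python
# Sketch of the two first-order features on a preprocessed GTV region (the adaptive
# threshold rule from the abstract is not given, so a simple HU window is assumed).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew

def first_order_features(ct_volume, gtv_mask, sigma=1.3, hu_range=(-20, 180), bins=64):
    smoothed = gaussian_filter(ct_volume.astype(float), sigma=sigma)    # isotropic Gaussian smoothing
    values = smoothed[gtv_mask]
    values = values[(values >= hu_range[0]) & (values <= hu_range[1])]  # intensity window [-20, 180] HU
    hist, _ = np.histogram(values, bins=bins, range=hu_range)
    p = hist / max(hist.sum(), 1)
    uniformity = np.sum(p**2)                                           # energy of the normalized histogram
    return skew(values), uniformity

# Synthetic example: a noisy 3-D "tumor" with a darker core mimicking necrosis.
ct = np.full((40, 40, 40), 60.0) + 10 * np.random.randn(40, 40, 40)
ct[15:25, 15:25, 15:25] -= 50                                           # low-density core
mask = np.zeros_like(ct, dtype=bool)
mask[5:35, 5:35, 5:35] = True
print(first_order_features(ct, mask))
```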

  6. Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble

    PubMed Central

    Yongye, Austin B.; Bender, Andreas

    2010-01-01

    Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1–4), medium (5–9) and high (10–15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments. Electronic supplementary material The online version of this article (doi:10.1007/s10822-010-9365-1) contains supplementary material, which is available to authorized users. PMID:20499135
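
    The rotor-dependent RMSD-filtering thresholds proposed above translate directly into a small helper, sketched below; in practice the rotatable-bond count would come from a cheminformatics toolkit, which is outside the scope of this sketch.

```python
# Direct encoding of the rotor-dependent RMS-filtering thresholds proposed in the abstract
# (the rotatable-bond count itself would come from a cheminformatics toolkit in practice).
def rmsd_filter_threshold(n_rotatable_bonds):
    """Return the suggested OMEGA RMS-filtering threshold (in angstroms) for a molecule."""
    if 1 <= n_rotatable_bonds <= 4:
        return 0.8
    if 5 <= n_rotatable_bonds <= 9:
        return 1.0
    if 10 <= n_rotatable_bonds <= 15:
        return 1.4
    raise ValueError("thresholds were proposed only for 1-15 rotatable bonds")

print(rmsd_filter_threshold(7))   # -> 1.0
```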

  7. Tuning and Robustness Analysis for the Orion Absolute Navigation System

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; Zanetti, Renato; D'Souza, Christopher

    2013-01-01

    The Orion Multi-Purpose Crew Vehicle (MPCV) is currently under development as NASA's next-generation spacecraft for exploration missions beyond Low Earth Orbit. The MPCV is set to perform an orbital test flight, termed Exploration Flight Test 1 (EFT-1), some time in late 2014. The navigation system for the Orion spacecraft is being designed in a Multi-Organizational Design Environment (MODE) team including contractor and NASA personnel. The system uses an Extended Kalman Filter to process measurements and determine the state. The design of the navigation system has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to show the efforts made to-date in tuning the filter for the EFT-1 mission and instilling appropriate robustness into the system to meet the requirements of manned space flight. Filter performance is affected by many factors: data rates, sensor measurement errors, tuning, and others. This paper focuses mainly on the error characterization and tuning portion. Traditional efforts at tuning a navigation filter have centered around the observation/measurement noise and Gaussian process noise of the Extended Kalman Filter. While the Orion MODE team must certainly address those factors, the team is also looking at residual edit thresholds and measurement underweighting as tuning tools. Tuning analysis is presented with open loop Monte-Carlo simulation results showing statistical errors bounded by the 3-sigma filter uncertainty covariance. The Orion filter design uses 24 Exponentially Correlated Random Variable (ECRV) parameters to estimate the accel/gyro misalignment and nonorthogonality. By design, the time constant and noise terms of these ECRV parameters were set to manufacturer specifications and not used as tuning parameters. They are included in the filter as a more analytically correct method of modeling uncertainties than ad-hoc tuning of the process noise. Tuning is explored for the powered-flight ascent phase, where measurements are scarce and unmodelled vehicle accelerations dominate. On orbit, there are important trade-off cases between process and measurement noise. On entry, there are considerations about trading performance accuracy for robustness. Process noise is divided into powered flight and coasting flight and can be adjusted for each phase and mode of the Orion EFT-1 mission. Measurement noise is used for the integrated velocity measurements during pad alignment. It is also used for Global Positioning System (GPS) pseudorange and delta-range measurements during the rest of the flight. The robustness effort has been focused on maintaining filter convergence and performance in the presence of unmodeled error sources. These include unmodeled forces on the vehicle and uncorrected errors on the sensor measurements. Orion uses a single-frequency, non-keyed GPS receiver, so the effects due to signal distortion in Earth's ionosphere and troposphere are present in the raw measurements. Results are presented showing the efforts to compensate for these errors as well as characterize the residual effect for measurement noise tuning. Another robustness tool in use is tuning the residual edit thresholds. The trade-off between noise tuning and edit thresholds is explored in the context of robustness to errors in dynamics models and sensor measurements. Measurement underweighting is also presented as a method of additional robustness when processing highly accurate measurements in the presence of large filter uncertainties.
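
    Residual edit thresholds and measurement underweighting can be illustrated with a generic scalar Kalman update, sketched below. This is only a textbook-style sketch with arbitrary example values, not the Orion flight filter or its tuning.

```python
# Generic sketch of residual editing and measurement underweighting in a Kalman update
# (illustrative only; the edit threshold and underweighting factor are example values).
import numpy as np

def kalman_update(x, P, z, H, R, edit_sigma=3.0, underweight=1.0):
    """Scalar-measurement update with residual gating and innovation-covariance inflation."""
    H = np.atleast_2d(np.asarray(H, dtype=float))   # 1 x n measurement matrix
    y = z - (H @ x).item()                          # scalar residual (innovation)
    S = (H @ P @ H.T).item() + R                    # scalar innovation covariance
    if y**2 > edit_sigma**2 * S:                    # residual edit: reject improbable measurements
        return x, P
    S_uw = underweight * S                          # underweighting inflates S, shrinking the gain
    K = (P @ H.T / S_uw).ravel()                    # n-vector Kalman gain
    x_new = x + K * y
    P_new = (np.eye(len(x)) - np.outer(K, H.ravel())) @ P
    return x_new, P_new

x = np.array([0.0, 1.0])
P = np.diag([4.0, 1.0])
print(kalman_update(x, P, z=10.0, H=[1.0, 0.0], R=1.0))                  # large residual: edited out
print(kalman_update(x, P, z=1.5, H=[1.0, 0.0], R=1.0, underweight=2.0))  # accepted, with a reduced gain
```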

  8. Advantages of Fast Ignition Scenarios with Two Hot Spots for Space Propulsion Systems

    NASA Astrophysics Data System (ADS)

    Shmatov, M. L.

    The use of the fast ignition scenarios with the attempts to create two hot spots in one blob of the compressed thermonuclear fuel or, briefly, scenarios with two hot spots in space propulsion systems is proposed. The model, predicting that for such scenarios the probability pf of failure of ignition of thermonuclear microexplosion can be significantly less than that for the similar scenarios with the attempts to create one hot spot in one blob of the compressed fuel, is presented. For space propulsion systems consuming a relatively large amount of propellant, a decrease in pf due to the choice of the scenario with two hot spots can result in large, for example, two-fold, increase in the payload mass. Other advantages of the scenarios with two hot spots and some problems related to them are considered.

  9. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed with severe occlusions happening frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. To automatically track the skaters and precisely output their trajectories becomes a challenging task in object tracking. We employ the global rink information to compensate camera motion and obtain the global spatial information of skaters, utilize random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to labelling pixels to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  10. Bubble clustering in a glass of stout beer

    NASA Astrophysics Data System (ADS)

    Iwatsubo, Fumiya; Watamura, Tomoaki; Sugiyama, Kazuyasu

    2017-11-01

    To clarify why the texture in stout beer poured into a pint glass descends, we investigated the local time development of the void fraction and velocity of bubbles. The propagation of the number density distribution, i.e. the texture, appearing near the inclined wall is observed. We visualized individual advected bubbles near the inclined wall with a microscope, measured the local void fraction from the image brightness, and measured the bubble velocity by means of Particle Tracking Velocimetry. As a result of these measurements, we found that the local void fraction and the bubble advection velocity increase and decrease repeatedly with a time delay. We conclude that the texture pattern is composed of fluid blobs which contain fewer bubbles; extruding and suction flows toward and from the interior of the container form in front of and behind the blobs, respectively.

  11. Motion streaks do not influence the perceived position of stationary flashed objects.

    PubMed

    Pavan, Andrea; Bellacosa Marotti, Rosilari

    2012-01-01

    In the present study, we investigated whether motion streaks, produced by fast moving dots (Geisler 1999), distort the positional map of stationary flashed objects, producing the well-known motion-induced position shift illusion (MIPS). The illusion relies on motion-processing mechanisms that induce local distortions in the positional map of the stimulus, which is derived by shape-processing mechanisms. To measure the MIPS, two horizontally offset Gaussian blobs, placed above and below a central fixation point, were flashed over two fields of dots moving in opposite directions. Subjects judged the position of the top Gaussian blob relative to the bottom one. The results showed that neither fast (motion streaks) nor slow moving dots influenced the perceived spatial position of the stationary flashed objects, suggesting that background motion does not interact with the shape-processing mechanisms involved in MIPS.

  12. Stripped interstellar gas in cluster cooling flows

    NASA Technical Reports Server (NTRS)

    Soker, Noam; Bregman, Joel N.; Sarazin, Craig L.

    1991-01-01

    It is suggested that nonlinear perturbations which lead to thermal instabilities in cooling flows might start as blobs of interstellar gas which are stripped out of cluster galaxies. Assuming that most of the gas produced by stellar mass loss in cluster galaxies is stripped from the galaxies, the total rate of such stripping is roughly 100 solar masses/yr, which is similar to the rates of cooling in cluster cooling flows. It is possible that a substantial portion of the cooling gas originates as blobs of interstellar gas stripped from galaxies. The magnetic fields within and outside of the low-entropy perturbations may help to maintain their identities by suppressing both thermal conduction and Kelvin-Helmholtz instabilities. These density fluctuations may disrupt the propagation of radio jets through the intracluster gas, which may be one mechanism for producing wide-angle-tail radio galaxies.

  13. Method and apparatus for generating low energy nuclear particles

    DOEpatents

    Powell, J.R.; Reich, M.; Ludewig, H.; Todosow, M.

    1999-02-09

    A particle accelerator generates an input particle beam having an initial energy level above a threshold for generating secondary nuclear particles. A thin target is rotated in the path of the input beam for undergoing nuclear reactions to generate the secondary particles and correspondingly decrease energy of the input beam to about the threshold. The target produces low energy secondary particles and is effectively cooled by radiation and conduction. A neutron scatterer and a neutron filter are also used for preferentially degrading the secondary particles into a lower energy range if desired. 18 figs.

  14. Fantastic Four Galaxies with Planet (Artist Concept)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This artist's concept shows what the night sky might look like from a hypothetical planet around a star tossed out of an ongoing four-way collision between big galaxies (yellow blobs). NASA's Spitzer Space Telescope spotted this 'quadruple merger' of galaxies within a larger cluster of galaxies located nearly 5 billion light-years away.

    Though the galaxies appear intact, gravitational disturbances have caused them to stretch and twist, flinging billions of stars into space -- nearly three times as many stars as are in our Milky Way galaxy. The tossed stars are visible in the large plume emanating from the central, largest galaxy. If any of these stars have planets, their night skies would be filled with the monstrous merger, along with other galaxies in the cluster (smaller, bluish blobs).

    This cosmic smash-up is the largest known merger between galaxies of a similar size. While three of the galaxies are about the size of our Milky Way galaxy, the fourth (center of image) is three times as big. All four of the galaxies, as well as most other galaxies in the huge cluster, are blob-shaped ellipticals instead of spirals like the Milky Way.

    Ultimately, in about one hundred million years or so, the four galaxies will unite into one. About half of the stars kicked out during the merger will fall back and join the new galaxy, making it one of the biggest galaxies in the universe.

  15. The effects of fractional wettability on microbial enhanced oil recovery

    NASA Astrophysics Data System (ADS)

    Wildenschild, D.; Armstrong, R. T.

    2011-12-01

    Microbial enhanced oil recovery (MEOR) is a tertiary oil recovery technology that has had inconsistent success at the field-scale, while lab-scale experiments are mostly successful. One potential reason for these inconsistencies is that the efficacy of MEOR in fractional-wet systems is unknown. Our MEOR strategy consists of the injection of ex situ produced metabolic byproducts produced by Bacillus mojavensis JF-2 (that lower interfacial tension via biosurfactant production) into fractional-wet cores containing residual oil. Fractional-wet cores tested were 50%, 25%, and 0% oil-wet and two different MEOR flooding solutions were tested; one solution contained both microbes and metabolic byproducts while the other contained only the metabolic byproducts. The columns were imaged with x-ray computed microtomography (CMT) after water flooding, and after MEOR, which allowed for the evaluation of the pore-scale processes taking place during MEOR and wettability effects. Results indicate that during MEOR the larger residual oil blobs in mostly fractional-wet pores and residual oil held under relatively low capillary pressures were the main fractions recovered, while residual oil blobs in purely oil-wet pores remained in place. Residual oil saturation, interfacial curvatures, and oil blob sizes were measured from the CMT images and used to develop a conceptual model for MEOR in fractional-wet systems. Overall, results indicate that MEOR was effective at recovering oil from fractional-wet systems with reported additional oil recovered (AOR) values between 44% and 80%; the highest AOR values were observed in the most oil-wet system.

  16. Automatic detection of axillary lymphadenopathy on CT scans of untreated chronic lymphocytic leukemia patients

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Hua, Jeremy; Chellappa, Vivek; Petrick, Nicholas; Sahiner, Berkman; Farooqui, Mohammed; Marti, Gerald; Wiestner, Adrian; Summers, Ronald M.

    2012-03-01

    Patients with chronic lymphocytic leukemia (CLL) have an increased frequency of axillary lymphadenopathy. Pretreatment CT scans can be used to upstage patients at the time of presentation and post-treatment CT scans can reduce the number of complete responses. In the current clinical workflow, the detection and diagnosis of lymph nodes is usually performed manually by examining all slices of CT images, which can be time consuming and highly dependent on the observer's experience. A system for automatic lymph node detection and measurement is desired. We propose a computer aided detection (CAD) system for axillary lymph nodes on CT scans in CLL patients. The lung is first automatically segmented and the patient's body in lung region is extracted to set the search region for lymph nodes. Multi-scale Hessian based blob detection is then applied to detect potential lymph nodes within the search region. Next, the detected potential candidates are segmented by fast level set method. Finally, features are calculated from the segmented candidates and support vector machine (SVM) classification is utilized for false positive reduction. Two blobness features, Frangi's and Li's, are tested and their free-response receiver operating characteristic (FROC) curves are generated to assess system performance. We applied our detection system to 12 patients with 168 axillary lymph nodes measuring greater than 10 mm. All lymph nodes are manually labeled as ground truth. The system achieved sensitivities of 81% and 85% at 2 false positives per patient for Frangi's and Li's blobness, respectively.
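
    A generic multi-scale blob detection step can be sketched with a Laplacian-of-Gaussian detector from scikit-image, as below; this stands in for the Hessian-based blobness measures (Frangi's and Li's) used in the paper, and a synthetic image replaces clinical CT data.

```python
# Generic multi-scale blob detection sketch (Laplacian-of-Gaussian from scikit-image),
# standing in for the Hessian-based blobness measures (Frangi, Li) used in the paper.
import numpy as np
from skimage.feature import blob_log

image = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
for cy, cx, r in [(40, 40, 6), (90, 70, 10)]:                 # two synthetic "lymph nodes"
    image[(yy - cy)**2 + (xx - cx)**2 < r**2] = 1.0

# Each returned row is (row, col, sigma); the blob radius is roughly sqrt(2) * sigma.
blobs = blob_log(image, min_sigma=3, max_sigma=12, num_sigma=10, threshold=0.1)
candidates = [(r, c, np.sqrt(2) * s) for r, c, s in blobs]
print(candidates)   # candidate centers and radii, to be pruned by a classifier such as an SVM
```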

  17. Signatures of the impact of flare-ejected plasma on the photosphere of a sunspot light bridge

    NASA Astrophysics Data System (ADS)

    Felipe, T.; Collados, M.; Khomenko, E.; Rajaguru, S. P.; Franz, M.; Kuckein, C.; Asensio Ramos, A.

    2017-12-01

    Aims: We investigate the properties of a sunspot light bridge, focusing on the changes produced by the impact of a plasma blob ejected from a C-class flare. Methods: We observed a sunspot in active region NOAA 12544 using spectropolarimetric raster maps of the four Fe I lines around 15 655 Å with the GREGOR Infrared Spectrograph, narrow-band intensity images sampling the Fe I 6173 Å line with the GREGOR Fabry-Pérot Interferometer, and intensity broad-band images in G-band and Ca II H-band with the High-resolution Fast Imager. All these instruments are located at the GREGOR telescope at the Observatorio del Teide, Tenerife, Spain. The data cover the time before, during, and after the flare event. The analysis is complemented with Atmospheric Imaging Assembly and Helioseismic and Magnetic Imager data from the Solar Dynamics Observatory. The physical parameters of the atmosphere at different heights were inferred using spectral-line inversion techniques. Results: We identify photospheric and chromospheric brightenings, heating events, and changes in the Stokes profiles associated with the flare eruption and the subsequent arrival of the plasma blob at the light bridge, after traveling along an active region loop. Conclusions: The measurements suggest that these phenomena are the result of reconnection events driven by the interaction of the plasma blob with the magnetic field topology of the light bridge. Movies attached to Figs. 1 and 3 are available at http://www.aanda.org

  18. Development of a pore network simulation model to study nonaqueous phase liquid dissolution

    USGS Publications Warehouse

    Dillard, Leslie A.; Blunt, Martin J.

    2000-01-01

    A pore network simulation model was developed to investigate the fundamental physics of nonequilibrium nonaqueous phase liquid (NAPL) dissolution. The network model is a lattice of cubic chambers and rectangular tubes that represent pore bodies and pore throats, respectively. Experimental data obtained by Powers [1992] were used to develop and validate the model. To ensure the network model was representative of a real porous medium, the pore size distribution of the network was calibrated by matching simulated and experimental drainage and imbibition capillary pressure‐saturation curves. The predicted network residual styrene blob‐size distribution was nearly identical to the observed distribution. The network model reproduced the observed hydraulic conductivity and produced relative permeability curves that were representative of a poorly consolidated sand. Aqueous‐phase transport was represented by applying the equation for solute flux to the network tubes and solving for solute concentrations in the network chambers. Complete mixing was found to be an appropriate approximation for calculation of chamber concentrations. Mass transfer from NAPL blobs was represented using a corner diffusion model. Predicted results of solute concentration versus Peclet number and of modified Sherwood number versus Peclet number for the network model compare favorably with experimental data for the case in which NAPL blob dissolution was negligible. Predicted results of normalized effluent concentration versus pore volume for the network were similar to the experimental data for the case in which NAPL blob dissolution occurred with time.

  19. Identifying living and sentient kinds from dynamic information: the case of goal-directed versus aimless autonomous movement in conceptual change.

    PubMed

    Opfer, John E

    2002-12-01

    To reason competently about novel entities, people must discover whether the entity is alive and/or sentient. Exactly how people make this discovery is unknown, although past researchers have proposed that young children--unlike adults--rely chiefly on whether the object can move itself. This study examined the effect of goal-directed versus aimless autonomous movement on children's and adults' attributions of biological and psychological capacities in an effort to test whether goal-directedness affects inferences across documented periods of change in biological reasoning. Half of the participants (adults, and 4-, 5-, 7-, and 10-year-olds; Ns=32) were shown videos of unfamiliar blobs moving independently and aimlessly, and the other half were shown videos of identical blobs moving identically but toward a goal. No age group was likely to attribute biological or psychological capacities to the aimless self-moving blobs. However, for 5-year-olds through adults, goal-directed movement reliably elicited life judgments, and it elicited more biological and psychological attributions overall. Adults differed from children in that goal-directed movement affected their attributions of biological properties more than their attributions of psychological properties. The results suggest that both young children and adults consider the capacity for goal-directed movement to be a decisive factor in determining whether something unfamiliar is alive, though other factors may be important in deciding whether the thing is sentient.

  20. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color features and the texture features of digital images can be extracted rather easily, the shape features and the layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, what an unsupervised segmentation algorithm can segment is only regions, but not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was not designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.
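
    The Blobworld-style core of such a scheme, fitting a Gaussian mixture to per-pixel features and relabeling pixels, can be sketched as below. This is only an approximation of one iteration: texture features and the contour handling of BlobContours are omitted, and in the interactive scheme the number of Gaussians would be updated by the user between iterations.

```python
# Minimal Blobworld-style step (a sketch, not the BlobContours implementation): fit a
# Gaussian mixture to per-pixel color features and relabel pixels into "blobs".
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage import data, color

image = data.astronaut()[::4, ::4]             # downsampled test image
lab = color.rgb2lab(image)                     # per-pixel color features (texture features omitted)
features = lab.reshape(-1, 3)

def segment_into_blobs(features, n_gaussians):
    gmm = GaussianMixture(n_components=n_gaussians, covariance_type='full', random_state=0)
    return gmm.fit_predict(features)

labels = segment_into_blobs(features, n_gaussians=5).reshape(image.shape[:2])
print(np.bincount(labels.ravel()))             # pixel count per blob
```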

  1. SMSS J130522.47-293113.0: a high-latitude stellar X-ray source with pc-scale outflow relics?

    NASA Astrophysics Data System (ADS)

    Da Costa, G. S.; Soria, R.; Farrell, S. A.; Bayliss, D.; Bessell, M. S.; Vogt, F. P. A.; Zhou, G.; Points, S. D.; Beers, T. C.; López-Sánchez, Á. R.; Bannister, K. W.; Bell, M.; Hancock, P. J.; Burlon, D.; Gaensler, B. M.; Sadler, E. M.; Tingay, S.; Keller, S. C.; Schmidt, B. P.; Tisserand, P.

    2018-06-01

    We report the discovery of an unusual stellar system SMSS J130522.47-293113.0. The optical spectrum is dominated by a blue continuum together with emission lines of hydrogen, neutral, and ionized helium, and the N III, C III blend at ˜4640-4650 Å. The emission-line profiles vary in strength and position on time-scales as short as 1 d, while optical photometry reveals fluctuations of as much as ˜0.2 mag in g on time-scales as short as 10-15 min. The system is a weak X-ray source (f_0.3-10 = (1.2 ± 0.1) × 10^-13 erg cm^-2 s^-1 in the 0.3-10 keV band) but is not detected at radio wavelengths (3σ upper limit of 50 μJy at 5.5 GHz). The most intriguing property of the system, however, is the existence of two `blobs', a few arcsec in size, that are symmetrically located 3.8 arcmin (2.2 pc for our preferred system distance of ˜2 kpc) each side of the central object. The blobs are detected in optical and near-IR broad-band images but do not show any excess emission in H α images. We discuss the interpretation of the system, suggesting that the central object is most likely a nova-like CV, and that the blobs are relics of a pc-scale accretion-powered collimated outflow.

  2. Robotic Vision, Tray-Picking System Design Using Multiple, Optical Matched Filters

    NASA Astrophysics Data System (ADS)

    Leib, Kenneth G.; Mendelsohn, Jay C.; Grieve, Philip G.

    1986-10-01

    The optical correlator is applied to a robotic vision, tray-picking problem. Complex matched filters (MFs) are designed to provide sufficient optical memory for accepting any orientation of the desired part, and a multiple holographic lens (MHL) is used to increase the memory for continuous coverage. It is shown that with appropriate thresholding a small part can be selected using optical matched filters. A number of criteria are presented for optimizing the vision system. Two of the part-filled trays that Mendelsohn used are considered in this paper which is the analog (optical) expansion of his paper. Our view in this paper is that of the optical correlator as a cueing device for subsequent, finer vision techniques.

  3. Discrimination of binocular color mixtures in dichromacy: evaluation of the Maxwell-Cornsweet conjecture

    NASA Astrophysics Data System (ADS)

    Knoblauch, Kenneth; McMahon, Matthew J.

    1995-10-01

    We tested the Maxwell-Cornsweet conjecture that differential spectral filtering of the two eyes can increase the dimensionality of a dichromat's color vision. Sex-linked dichromats wore filters that differentially passed long- and middle-wavelength regions of the spectrum to each eye. Monocularly, temporal modulation thresholds (1.5 Hz) for color mixtures from the Rayleigh region of the spectrum were accounted for by a single, univariant mechanism. Binocularly, univariance was rejected because, as in monocular viewing by trichromats, in no color direction could silent substitution of the color mixtures be obtained. Despite the filter-aided increase in dimension, estimated wavelength discrimination was quite poor in this spectral region, suggesting a limit to the effectiveness of this technique.

  4. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

    This paper studies the restoration of Gaussian-blurred images using four deblurring techniques, viz. the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given knowledge of the point spread function (PSF) that corrupted the blurred image. The techniques are applied to a scanned image of a seven-month-old baby in the womb and compared with one another, so as to choose the best technique for image restoration or deblurring. The paper also studies the restoration of blurred images using a regular filter (RF) with no information about the PSF, by applying the same four techniques after estimating a guess of the PSF. The number of iterations and the weight threshold needed to choose the best guesses for the restored or deblurred image are determined for these techniques.
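
    Two of the four compared methods are available in scikit-image and can be sketched as below, with an assumed Gaussian PSF and a standard test image in place of the ultrasound scan used in the paper.

```python
# Sketch of two of the four compared methods using scikit-image, with an assumed
# Gaussian PSF (the paper's ultrasound image and exact PSF are not available here).
import numpy as np
from scipy.signal import fftconvolve
from skimage import data, img_as_float, restoration

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

image = img_as_float(data.camera())
psf = gaussian_psf()
blurred = fftconvolve(image, psf, mode='same')            # simulate the Gaussian blur

wiener_restored = restoration.wiener(blurred, psf, balance=0.01)   # Wiener filter
rl_restored = restoration.richardson_lucy(blurred, psf, 30)        # Lucy-Richardson, 30 iterations
print(wiener_restored.shape, rl_restored.shape)
```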

  5. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera

    PubMed Central

    Yaghoobi Ershadi, Nastaran

    2017-01-01

    Traffic surveillance systems are of interest to many researchers seeking to improve traffic control and reduce the risk caused by accidents. In this area, many published works are only concerned with vehicle detection under normal conditions. The camera may vibrate due to wind or bridge movement. Detection and tracking of vehicles is a very difficult task under bad winter weather conditions (snow, rain, wind, etc.), in dusty weather in arid and semi-arid regions, at night, etc. It is also very important to consider the speed of vehicles under such complicated weather conditions. In this paper, we improve our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy combined with extra processing to segment vehicles; here, the extra processing included analysis of the headlight size, location, and area. Tracking was done between consecutive frames via a generalized particle filter to detect the vehicle, and the headlights were paired using connected component analysis. Vehicle counting was then performed based on the pairing result: using the centroid of each blob, we calculated the distance traveled between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records under different conditions such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, under different traffic conditions. PMID:29261719

  6. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera.

    PubMed

    Yaghoobi Ershadi, Nastaran

    2017-01-01

    Traffic surveillance systems are of interest to many researchers seeking to improve traffic control and reduce the risk caused by accidents. In this area, many published works are only concerned with vehicle detection under normal conditions. The camera may vibrate due to wind or bridge movement. Detection and tracking of vehicles is a very difficult task under bad winter weather conditions (snow, rain, wind, etc.), in dusty weather in arid and semi-arid regions, at night, etc. It is also very important to consider the speed of vehicles under such complicated weather conditions. In this paper, we improve our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy combined with extra processing to segment vehicles; here, the extra processing included analysis of the headlight size, location, and area. Tracking was done between consecutive frames via a generalized particle filter to detect the vehicle, and the headlights were paired using connected component analysis. Vehicle counting was then performed based on the pairing result: using the centroid of each blob, we calculated the distance traveled between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records under different conditions such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, under different traffic conditions.
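
    The speed estimate from blob centroids in consecutive frames reduces to a short calculation, sketched below; the frame rate and the meters-per-pixel scale are assumed example values, not those of the paper.

```python
# Sketch of the speed estimate from blob centroids in consecutive frames (frame rate and
# meters-per-pixel scale are assumed example values, not those of the paper).
import numpy as np

def estimate_speed_kmh(centroid_prev, centroid_curr, fps=25.0, meters_per_pixel=0.05):
    """Speed from the centroid displacement of a tracked vehicle blob between two frames."""
    dx, dy = np.subtract(centroid_curr, centroid_prev)
    pixels = np.hypot(dx, dy)                  # displacement in pixels
    meters_per_second = pixels * meters_per_pixel * fps
    return meters_per_second * 3.6             # convert m/s to km/h

print(estimate_speed_kmh((120.0, 340.0), (123.5, 332.0)))   # speed of one tracked blob, km/h
```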

  7. Automated framework for estimation of lung tumor locations in kV-CBCT images for tumor-based patient positioning in stereotactic lung body radiotherapy

    NASA Astrophysics Data System (ADS)

    Yoshidome, Satoshi; Arimura, Hidetaka; Terashima, Koutarou; Hirakawa, Masakazu; Hirose, Taka-aki; Fukunaga, Junichi; Nakamura, Yasuhiko

    2017-03-01

    Recently, image-guided radiotherapy (IGRT) systems using kilovolt cone-beam computed tomography (kV-CBCT) images have become more common for highly accurate patient positioning in stereotactic lung body radiotherapy (SLBRT). However, current IGRT procedures are based on bone structures and subjective correction. Therefore, the aim of this study was to evaluate the proposed framework for automated estimation of lung tumor locations in kV-CBCT images for tumor-based patient positioning in SLBRT. Twenty clinical cases are considered, involving solid, pure ground-glass opacity (GGO), mixed GGO, solitary, and non-solitary tumor types. The proposed framework consists of four steps: (1) determination of a search region for tumor location detection in a kV-CBCT image; (2) extraction of a tumor template from a planning CT image; (3) preprocessing for tumor region enhancement (edge and tumor enhancement using a Sobel filter and a blob structure enhancement (BSE) filter, respectively); and (4) tumor location estimation based on a template-matching technique. The location errors in the original, edge-, and tumor-enhanced images were found to be 1.2 ± 0.7 mm, 4.2 ± 8.0 mm, and 2.7 ± 4.6 mm, respectively. The location errors in the original images of solid, pure GGO, mixed GGO, solitary, and non-solitary types of tumors were 1.2 ± 0.7 mm, 1.3 ± 0.9 mm, 0.4 ± 0.6 mm, 1.1 ± 0.8 mm and 1.0 ± 0.7 mm, respectively. These results suggest that the proposed framework is robust as regards automatic estimation of several types of tumor locations in kV-CBCT images for tumor-based patient positioning in SLBRT.
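
    Steps (3) and (4) of the framework, edge enhancement followed by template matching, can be sketched with scikit-image as below; the blob-structure-enhancement filter is omitted and synthetic images stand in for planning CT and kV-CBCT data.

```python
# Sketch of steps (3)-(4) of the framework: edge enhancement with a Sobel filter and
# template matching by normalized cross-correlation (the blob-structure-enhancement
# filter and clinical kV-CBCT data are omitted; synthetic images are used instead).
import numpy as np
from skimage.filters import sobel
from skimage.feature import match_template

cbct = np.random.rand(128, 128) * 0.1
cbct[60:80, 70:90] += 0.8                      # synthetic "tumor" in the CBCT search region
template = cbct[58:82, 68:92].copy()           # tumor template (would come from the planning CT)

edges_image = sobel(cbct)                      # edge enhancement of the search region
edges_template = sobel(template)

response = match_template(edges_image, edges_template)
row, col = np.unravel_index(np.argmax(response), response.shape)
print("estimated tumor location (top-left of best match):", row, col)
```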

  8. GREAT: a gradient-based color-sampling scheme for Retinex.

    PubMed

    Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo

    2017-04-01

    Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
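
    The GREAT idea for a single channel can be sketched as below: select edge pixels whose gradient magnitude exceeds a threshold and rescale each pixel by a weighted average of the selected edge intensities. The inverse-distance weighting used here is an assumption; the paper's actual weighting function of position, gradient magnitude, and relative intensity is not reproduced.

```python
# Sketch of the GREAT idea for one channel (the exact weighting function of the paper is
# not reproduced; a simple inverse-distance, gradient-weighted average is assumed).
import numpy as np

def great_like_rescale(channel, grad_threshold=0.1, eps=1e-6):
    gy, gx = np.gradient(channel)
    magnitude = np.hypot(gx, gy)
    er, ec = np.nonzero(magnitude > grad_threshold)        # selected edge pixels
    edge_int = channel[er, ec]
    edge_mag = magnitude[er, ec]
    out = np.empty_like(channel)
    for r in range(channel.shape[0]):
        for c in range(channel.shape[1]):
            d = np.hypot(er - r, ec - c) + 1.0              # distance of each edge to the target pixel
            weights = edge_mag / d                          # closer, stronger edges count more (assumed)
            local_white = np.sum(weights * edge_int) / (np.sum(weights) + eps)
            out[r, c] = np.clip(channel[r, c] / (local_white + eps), 0.0, 1.0)
    return out

channel = np.clip(np.random.rand(32, 32) * np.linspace(0.3, 1.0, 32), 0, 1)  # uneven illumination
print(great_like_rescale(channel).mean())
```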

  9. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    PubMed

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
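
    The EM step can be illustrated with a generic Fellegi-Sunter-style sketch operating on binary field-agreement vectors (for Bloom-filter-encoded fields, agreement would come from thresholding a similarity such as the Dice coefficient); this is a simplified illustration, not the authors' extended algorithm:

        import numpy as np

        def em_linkage(gamma, n_iter=50, p_init=0.1):
            """gamma: (n_pairs, n_fields) 0/1 array of field agreements.
            Returns m- and u-probabilities, the match proportion, and the
            log2 agreement/disagreement weights used to set thresholds."""
            m = np.full(gamma.shape[1], 0.9)     # P(field agrees | true match)
            u = np.full(gamma.shape[1], 0.1)     # P(field agrees | non-match)
            p = p_init                           # overall proportion of matches
            for _ in range(n_iter):
                # E-step: posterior probability that each pair is a match
                pm = p * np.prod(m ** gamma * (1 - m) ** (1 - gamma), axis=1)
                pu = (1 - p) * np.prod(u ** gamma * (1 - u) ** (1 - gamma), axis=1)
                g = pm / (pm + pu)
                # M-step: update the parameters from the expected memberships
                p = g.mean()
                m = (g[:, None] * gamma).sum(axis=0) / g.sum()
                u = ((1 - g)[:, None] * gamma).sum(axis=0) / (1 - g).sum()
            agree_w, disagree_w = np.log2(m / u), np.log2((1 - m) / (1 - u))
            return m, u, p, agree_w, disagree_w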

  10. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on an adaptive threshold in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress the noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach has a better enhancement effect in terms of both objective and subjective metrics.
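
    A rough sketch of the four-step pipeline, using a standard 2-D wavelet transform (PyWavelets) as a stand-in for the NSST, which has no widely available Python implementation; the gamma value, threshold rule, and unsharp-mask parameters are assumptions:

        import numpy as np
        import pywt
        from scipy.ndimage import gaussian_filter

        def enhance(image, gamma=0.8, k=3.0, amount=1.0):
            img = image.astype(float) / 255.0
            coeffs = pywt.wavedec2(img, "db4", level=3)

            # 1) gamma correction of the low-frequency sub-band
            approx = coeffs[0]
            new_coeffs = [np.sign(approx) * np.abs(approx) ** gamma]

            # 2) noise-adaptive soft threshold on each high-frequency sub-band
            for details in coeffs[1:]:
                bands = []
                for d in details:
                    sigma = np.median(np.abs(d)) / 0.6745      # robust noise estimate
                    bands.append(pywt.threshold(d, k * sigma, mode="soft"))
                new_coeffs.append(tuple(bands))

            # 3) reconstruction, then 4) unsharp masking for fine detail
            recon = pywt.waverec2(new_coeffs, "db4")
            blurred = gaussian_filter(recon, sigma=2)
            return np.clip(recon + amount * (recon - blurred), 0.0, 1.0)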

  11. Impaired Filtering of Behaviourally Irrelevant Visual Information in Dyslexia

    ERIC Educational Resources Information Center

    Roach, Neil W.; Hogben, John H.

    2007-01-01

    A recent proposal suggests that dyslexic individuals suffer from attentional deficiencies, which impair the ability to selectively process incoming visual information. To investigate this possibility, we employed a spatial cueing procedure in conjunction with a single fixation visual search task measuring thresholds for discriminating the…

  12. Topological Filtering of Dynamic Functional Brain Networks Unfolds Informative Chronnectomics: A Novel Data-Driven Thresholding Scheme Based on Orthogonal Minimal Spanning Trees (OMSTs)

    PubMed Central

    Dimitriadis, Stavros I.; Salis, Christos; Tarnanas, Ioannis; Linden, David E.

    2017-01-01

    The human brain is a large-scale system of functionally connected brain regions. This system can be modeled as a network, or graph, by dividing the brain into a set of regions, or “nodes,” and quantifying the strength of the connections between nodes, or “edges,” as the temporal correlation in their patterns of activity. Network analysis, a part of graph theory, provides a set of summary statistics that can be used to describe complex brain networks in a meaningful way. The large-scale organization of the brain has features of complex networks that can be quantified using network measures from graph theory. The adaptation of both bivariate (mutual information) and multivariate (Granger causality) connectivity estimators to quantify the synchronization between multichannel recordings yields a fully connected, weighted, (a)symmetric functional connectivity graph (FCG), representing the associations among all brain areas. The aforementioned procedure leads to an extremely dense network of tens up to a few hundreds of weights. Therefore, this FCG must be filtered so that the “true” connectivity pattern can emerge. Here, we compared a large number of well-known topological thresholding techniques with the novel proposed data-driven scheme based on orthogonal minimal spanning trees (OMSTs). OMSTs filter brain connectivity networks based on a trade-off between the global efficiency of the network and the cost of preserving its wiring. We demonstrated the proposed method in a large EEG database (N = 101 subjects) with eyes-open (EO) and eyes-closed (EC) tasks by adopting a time-varying approach, with the main goal of extracting features that can fully distinguish each subject from the rest of the set. Additionally, the reliability of the proposed scheme was estimated in a second case study of fMRI resting-state activity with multiple scans. Our results demonstrated clearly that the proposed thresholding scheme outperformed a large list of thresholding schemes based on the recognition accuracy of each subject compared to the rest of the cohort (EEG). Additionally, the reliability of the network metrics based on the fMRI static networks was improved with the proposed topological filtering scheme. Overall, the proposed algorithm could be used across neuroimaging and multimodal studies as a common, computationally efficient, standardized tool for a great number of neuroscientists and physicists working on numerous projects. PMID:28491032
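
    A minimal sketch of the OMST idea using networkx: successive minimum spanning trees are extracted on the inverse-weight (distance) graph, removed, and accumulated. The published scheme stops at the optimum of global efficiency minus wiring cost; here the number of trees is fixed and the objective is merely printed:

        import networkx as nx

        def omst_filter(G, n_trees=5):
            """G: weighted undirected FCG with edge attribute 'weight' (strength)."""
            work = G.copy()
            for _, _, d in work.edges(data=True):
                d["dist"] = 1.0 / d["weight"]       # strong links become short distances

            selected = nx.Graph()
            selected.add_nodes_from(G.nodes())
            total_w = sum(d["weight"] for _, _, d in G.edges(data=True))

            for k in range(n_trees):
                mst = nx.minimum_spanning_tree(work, weight="dist")
                selected.add_edges_from(mst.edges(data=True))
                work.remove_edges_from(list(mst.edges()))   # orthogonality: never reuse an edge
                cost = sum(d["weight"] for _, _, d in selected.edges(data=True)) / total_w
                objective = nx.global_efficiency(selected) - cost
                print(f"OMSTs={k + 1}  cost={cost:.3f}  efficiency-cost={objective:.3f}")
            return selected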

  13. Local curvature measurements of a lean, partially premixed swirl-stabilised flame

    NASA Astrophysics Data System (ADS)

    Bayley, Alan E.; Hardalupas, Yannis; Taylor, Alex M. K. P.

    2012-04-01

    A swirl-stabilised, lean, partially premixed combustor operating at atmospheric conditions has been used to investigate the local curvature distributions in lifted, stable and thermoacoustically oscillating CH4-air partially premixed flames for bulk cold-flow Reynolds numbers of 15,000 and 23,000. Single-shot OH planar laser-induced fluorescence has been used to capture instantaneous images of these three different flame types. Use of binary thresholding to identify the reactant and product regions in the OH planar laser-induced fluorescence images, in order to extract accurate flame-front locations, is shown to be unsatisfactory for the examined flames. The Canny-Deriche edge detection filter has also been examined and is seen to still leave an unacceptable quantity of artificial flame-fronts. A novel approach has been developed for image analysis where a combination of a non-linear diffusion filter, Sobel gradient and threshold-based curve elimination routines have been used to extract traces of the flame-front to obtain local curvature distributions. A visual comparison of the effectiveness of flame-front identification is made between the novel approach, the threshold binarisation filter and the Canny-Deriche filter. The novel approach appears to most accurately identify the flame-fronts. Example histograms of the curvature for six flame conditions and of the total image area are presented and are found to have a broader range of local flame curvatures for increasing bulk Reynolds numbers. Significantly positive values of mean curvature and marginally positive values of skewness of the histogram have been measured for one lifted flame case, but this is generally accounted for by the effect of flame brush curvature. The mean local flame-front curvature reduces with increasing axial distance from the burner exit plane for all flame types. These changes are more pronounced in the lifted flames but are marginal for the thermoacoustically oscillating flames. It is concluded that additional fuel mixture fraction and velocimetry studies are required to examine whether processes such as the degree of partial-premixedness close to the burner exit plane, the velocity field and the turbulence field have a strong correlation with the curvature characteristics of the investigated flames.
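
    Once a flame-front trace has been extracted, the local curvature can be computed with a standard finite-difference formula; a short sketch (the trace is assumed to be an ordered, already-smoothed sequence of points):

        import numpy as np

        def local_curvature(x, y):
            """Signed curvature of an ordered trace: (x'y'' - y'x'') / (x'^2 + y'^2)^1.5."""
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

        # sanity check: points on a circle of radius 5 give curvature close to 1/5
        t = np.linspace(0.0, 2.0 * np.pi, 400)
        print(np.median(local_curvature(5 * np.cos(t), 5 * np.sin(t))))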

  14. Splashing, feeding, contracting: Drop impact and fluid dynamics of Vorticella

    NASA Astrophysics Data System (ADS)

    Pepper, Rachel E.

    This thesis comprises two main topics: understanding drop impact and splashing, and studying the feeding and contracting of the microorganism Vorticella. In Chapter 1, we study the effect of substrate compliance on the splash threshold of a liquid drop using an elastic membrane under variable tension. We find that splashing can be suppressed by reducing this tension. Measurements of the velocity and acceleration of the spreading drop after impact indicate that the splashing behavior is set at very early times after, or possibly just before, impact, far before the actual splash occurs. We also provide a model for the tension dependence of the splashing threshold. In Chapter 2, we study the evolution of the ejected liquid sheet, or lamella, created after impact of a liquid drop onto a solid surface using high-speed video. We find that the lamella rim thickness is always much larger than the boundary layer thickness, and that this thickness decreases with increasing impact speed. We also observe an unusual plateau behavior in thickness versus time at higher impact speeds as we approach the splash threshold. In Chapter 3, we show through calculations, simulations, and experiments that the eddies often observed near sessile filter feeders are due to the presence of nearby boundaries. We model the common filter feeder Vorticella, and also track particles around live feeding Vorticella to determine the experimental flow field. Our models are in good agreement both with each other and with the experiments. We also provide simple approximate equations to predict experimental eddy sizes due to boundaries. In Chapter 4, we show through calculations that filter feeders such as Vorticella can greatly enhance their nutrient uptake by feeding at an angle rather than perpendicular to a substrate. We also show experimental evidence that living Vorticella use this strategy. Finally, in Chapter 5, we discuss possible future directions for these projects, including potential insights from a close examination of lamella behavior at the splash threshold, and calculations to determine if Vorticella contract rapidly towards the substrate to which they are attached in order to mix the surrounding fluid.

  15. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution, and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise, and other human-made electromagnetic noise). These noises degrade the imaging quality and hinder data interpretation. Based on the characteristics of the GREATEM data and the major noise sources, we propose a de-noising algorithm that combines the wavelet threshold method with exponential adaptive window width-fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented into windows whose step lengths follow even logarithmic intervals. Within each window, the data polluted by electromagnetic noise are identified using an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted values, so that the non-stationary electromagnetic noise is effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
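
    A compact sketch of the two stages (wavelet soft-thresholding followed by exponential fitting in windows flagged by a crude energy-detection rule); the wavelet choice, threshold factor, and pollution criterion are assumptions rather than the paper's settings:

        import numpy as np
        import pywt
        from scipy.optimize import curve_fit

        def denoise_greatem(signal, t, win_edges, wavelet="sym8", k=3.0):
            """signal, t: data and time samples; win_edges: window boundary indices."""
            # stage 1: wavelet soft-thresholding of stationary white noise
            coeffs = pywt.wavedec(signal, wavelet)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            coeffs = [coeffs[0]] + [pywt.threshold(c, k * sigma, mode="soft")
                                    for c in coeffs[1:]]
            clean = pywt.waverec(coeffs, wavelet)[: len(signal)]

            # stage 2: exponential fit inside windows flagged as polluted
            spans = list(zip(win_edges[:-1], win_edges[1:]))
            variances = [np.var(clean[lo:hi]) for lo, hi in spans]
            ref = np.median(variances)
            out = clean.copy()
            for (lo, hi), v in zip(spans, variances):
                if v <= 4.0 * ref:                 # crude energy-detection rule (assumed)
                    continue                       # window judged clean; keep stage-1 output
                seg_t, seg = t[lo:hi], clean[lo:hi]
                try:
                    (a, b), _ = curve_fit(lambda tt, a, b: a * np.exp(-b * tt),
                                          seg_t, seg, p0=(seg[0], 1.0), maxfev=2000)
                    out[lo:hi] = a * np.exp(-b * seg_t)
                except RuntimeError:
                    pass                           # fit failed; keep stage-1 samples
            return out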

  16. Fundus-controlled two-color dark adaptometry with the Microperimeter MP1.

    PubMed

    Bowl, Wadim; Stieger, Knut; Lorenz, Birgit

    2015-06-01

    The aim of this study was to provide fundus-controlled two-color adaptometry with an existing device. A quick and easy approach extends the application possibilities of a commercial fundus-controlled perimeter. An external filter holder was placed in front of the objective lens of the MP1 (Nidek, Italy) and fitted with filters to modify background, stimulus intensity, and color. Prior to dark adaptometry, the subject's visual sensitivity profile was measured for red and blue stimuli to determine whether rods or cones or both mediated the absolute threshold. After light adaptation, 20 healthy subjects were investigated with a pattern covering six spots at the posterior pole of the retina for up to 45 min of dark adaptation. Thresholds were determined using a 200 ms red Goldmann IV and a blue Goldmann II stimulus. The pre-test sensitivity showed a typical distribution of values along the meridian, with high peripheral light increment sensitivity (LIS) and low central LIS for rods and the reverse for cones. After bleach, threshold recovery had a classic biphasic shape. The absolute threshold was reached after approximately 10 min for the red and 15 min for the blue stimulus. Two-color fundus-controlled adaptometry with a commercial MP1, without internal changes to the device, provides a quick and easy examination of rod and cone function during dark adaptation at defined retinal loci of the posterior pole. This innovative method will be helpful for measuring rod vs. cone function at known loci of the posterior pole in early stages of retinal degenerations.

  17. Gove v. the Blob: The Coalition and Education

    ERIC Educational Resources Information Center

    Gillard, Derek

    2015-01-01

    The author provides a year-by-year account of events during the period of the Conservative-led coalition government from 2010 to 2015 and concludes with some observations on the damage done to England's state education system.

  18. A Tale of Two Comets: ISON

    NASA Image and Video Library

    2013-11-25

    An optical color image of galaxies is seen here overlaid with X-ray data (magenta) from NASA's Nuclear Spectroscopic Telescope Array (NuSTAR). Both magenta blobs show X-rays from massive black holes buried at the hearts of galaxies.

  19. Blob dynamics in TORPEX poloidal null configurations

    NASA Astrophysics Data System (ADS)

    Shanahan, B. W.; Dudson, B. D.

    2016-12-01

    3D blob dynamics are simulated in X-point magnetic configurations in the TORPEX device via a non-field-aligned coordinate system, using an isothermal model which evolves density, vorticity, parallel velocity and parallel current density. By modifying the parallel gradient operator to include perpendicular perturbations from poloidal field coils, numerical singularities associated with field aligned coordinates are avoided. A comparison with a previously developed analytical model (Avino 2016 Phys. Rev. Lett. 116 105001) is performed and an agreement is found with minimal modification. Experimental comparison determines that the null region can cause an acceleration of filaments due to increasing connection length, but this acceleration is small relative to other effects, which we quantify. Experimental measurements (Avino 2016 Phys. Rev. Lett. 116 105001) are reproduced, and the dominant acceleration mechanism is identified as that of a developing dipole in a moving background. Contributions from increasing connection length close to the null point are a small correction.

  20. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    PubMed

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

    This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis, which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach for such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data - not to replace it. The workflow has three components:
    • Preparation of slides for microscopy.
    • Image recording.
    • Computerised image processing, where the initial part is, as usual, segmentation depending on the actual data product. Then comes identification of blobs, calculation of the principal axes of the blobs, symmetry operations, and projection onto a three-parameter egg-shape space.
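
    The blob-measurement part of the third component can be sketched with scikit-image (Otsu segmentation, connected-component labelling, principal axes from regionprops); the egg-shape projection is not reproduced, and the area cut-off is an assumption:

        from skimage import filters, measure

        def measure_blobs(gray, min_area=20):
            """Otsu segmentation, blob labelling and principal-axis measurements."""
            mask = gray > filters.threshold_otsu(gray)
            labels = measure.label(mask)
            blobs = []
            for r in measure.regionprops(labels):
                if r.area < min_area:              # ignore tiny specks (assumed cut-off)
                    continue
                blobs.append({
                    "centroid": r.centroid,
                    "area": r.area,
                    "major_axis": r.major_axis_length,   # principal axes of the blob
                    "minor_axis": r.minor_axis_length,
                    "orientation": r.orientation,
                    "eccentricity": r.eccentricity,
                })
            return blobs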

  1. An adaptive front tracking technique for three-dimensional transient flows

    NASA Astrophysics Data System (ADS)

    Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.

    2000-01-01

    An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although for both techniques roughly the same structures are found, the resolution for the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation for the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm significantly benefits from parallelization of the code.

  2. Numerical Simulation of Liquid Jet Atomization Including Turbulence Effects

    NASA Technical Reports Server (NTRS)

    Trinh, Huu P.; Chen, C. P.; Balasubramanyam, M. S.

    2005-01-01

    This paper describes the numerical implementation of a newly developed hybrid model, T-blob/T-TAB, into an existing computational fluid dynamics (CFD) program for primary and secondary breakup simulation of liquid jet atomization. This model extends two widely used models, the Kelvin-Helmholtz (KH) instability of Reitz (the blob model) and the Taylor-Analogy-Breakup (TAB) secondary droplet breakup model of O'Rourke and Amsden, to include turbulence effects. In the primary breakup model, the level of the turbulence effect on the liquid breakup depends on the characteristic scales and the initial flow conditions. For the secondary breakup, an additional turbulence force acting on parent drops is modeled and integrated into the TAB governing equation. Several assessment studies are presented, and the results indicate that the existing KH and TAB models tend to under-predict the product drop size and spray angle, while the current model provides superior results when compared with the measured data.

  3. HUBBLE HUNTS DOWN BINARY OBJECTS AT FRINGE OF OUR SOLAR SYSTEM

    NASA Technical Reports Server (NTRS)

    2002-01-01

    NASA's Hubble Space Telescope snapped pictures of a double system of icy bodies in the Kuiper Belt. This composite picture shows the apparent orbit of one member of the pair. In reality, the objects, called 1998 WW31, revolve around a common center of gravity, like a pair of waltzing skaters. This picture shows the motion of one member of the duo [the six faint blobs] relative to the other [the large white blob]. The blue oval represents the orbital path. Astronomers assembled this picture from six separate exposures, taken from July to September 2001, December 2001, and January to February 2002. Astronomers used the Hubble telescope to study the orbit of this binary system. They then used that information to determine other characteristics of the duo, such as their total mass, and their orbital period (the time it takes them to orbit each other). Credit: NASA and C. Veillet (Canada-France-Hawaii Telescope)

  4. Ultrasonic imaging system for in-process fabric defect detection

    DOEpatents

    Sheen, Shuh-Haw; Chien, Hual-Te; Lawrence, William P.; Raptis, Apostolos C.

    1997-01-01

    An ultrasonic method and system are provided for monitoring a fabric to identify a defect. A plurality of ultrasonic transmitters generate ultrasonic waves relative to the fabric. An ultrasonic receiver means responsive to the generated ultrasonic waves from the transmitters receives ultrasonic waves coupled through the fabric and generates a signal. An integrated peak value of the generated signal is applied to a digital signal processor and is digitized. The digitized signal is processed to identify a defect in the fabric. The digitized signal processing includes a median value filtering step to filter out high frequency noise. Then a mean value and standard deviation of the median value filtered signal is calculated. The calculated mean value and standard deviation are compared with predetermined threshold values to identify a defect in the fabric.
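
    A sketch of the decision logic in software form (median filtering, mean/standard-deviation computation, comparison against predetermined thresholds); the reference values and the comparison rule are assumptions, since the patent does not specify them numerically:

        import numpy as np
        from scipy.signal import medfilt

        def defect_detected(peak_values, mean_ref, std_ref, k=3.0, kernel=5):
            """peak_values: digitised integrated peak values from the receiver."""
            filtered = medfilt(np.asarray(peak_values, dtype=float), kernel_size=kernel)
            mu, sd = filtered.mean(), filtered.std()
            # flag a defect when the statistics drift outside the reference band
            return abs(mu - mean_ref) > k * std_ref or sd > k * std_ref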

  5. Characteristics of spectro-temporal modulation frequency selectivity in humans.

    PubMed

    Oetjen, Arne; Verhey, Jesko L

    2017-03-01

    There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study of the authors has shown spectro-temporal modulation masking patterns that were in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, that experimental data and additional data were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward pointing target modulation and a downward pointing masker modulation. The comparison of this data set with previous corresponding data with the same direction from target and masker modulations indicate that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data.
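
    A simple spectro-temporal Gabor kernel of the kind such models use: a Gaussian envelope times a drifting sinusoid in the rate (Hz) by scale (cycles/octave) plane, with a direction switch for upward- versus downward-moving modulations. All parameter values are assumptions:

        import numpy as np

        def st_gabor(n_t=64, n_f=32, rate_hz=4.0, scale_cpo=1.0,
                     fs_t=100.0, fs_f=8.0, direction=+1):
            """Gaussian envelope times a drifting sinusoid in the time x
            log-frequency plane; direction=+1/-1 gives upward/downward motion."""
            t = (np.arange(n_t) - n_t / 2) / fs_t          # seconds
            f = (np.arange(n_f) - n_f / 2) / fs_f          # octaves
            T, F = np.meshgrid(t, f)
            envelope = np.exp(-0.5 * ((T / t.std()) ** 2 + (F / f.std()) ** 2))
            carrier = np.cos(2.0 * np.pi * (rate_hz * T + direction * scale_cpo * F))
            return envelope * carrier

        kernel = st_gabor()   # shape (32, 64): scale x rate, e.g. for 2-D convolution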

  6. Multiple targets detection method in detection of UWB through-wall radar

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong

    2017-11-01

    In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed, and the experimental environment and the penetrating radar system are established. An adaptive threshold method based on the local area is proposed to effectively filter out clutter interference. The moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between the targets, a target matching algorithm is proposed to improve the detection accuracy. Finally, the effectiveness of the above methods is verified by a practical experiment.
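
    A local-area adaptive threshold of the CA-CFAR type can serve as a sketch of the clutter-suppression idea (each cell compared against the mean of a surrounding reference window); the window sizes and scale factor are illustrative, not the paper's values:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_detect(rt_map, win=15, guard=3, k=4.0):
            """rt_map: non-negative echo magnitudes on a range-time grid."""
            rt = np.asarray(rt_map, dtype=float)
            big = uniform_filter(rt, size=win)       # mean over the full window
            small = uniform_filter(rt, size=guard)   # mean over the inner guard region
            # mean of the reference ring = (window sum - guard sum) / ring area
            ring = (big * win ** 2 - small * guard ** 2) / float(win ** 2 - guard ** 2)
            return rt > k * ring                     # boolean detection map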

  7. Effects of the Blob on settlement of spotted sand bass, Paralabrax maculatofasciatus, to Mission Bay, San Diego, CA.

    PubMed

    Basilio, Anthony; Searcy, Steven; Thompson, Andrew R

    2017-01-01

    The West Coast of the United States experienced variable and sometimes highly unusual oceanographic conditions between 2012 and 2015. In particular, a warm mass of surface water known as the Pacific Warm Anomaly (popularly as "The Blob") impinged on southern California in 2014, and warm-water conditions remained during the 2015 El Niño. We examine how this oceanographic variability affected delivery and individual characteristics of larval spotted sand bass (Paralabrax maculatofasciatus) to an estuarine nursery habitat in southern California. To quantify P. maculatofasciatus settlement patterns, three larval collectors were installed near the mouth of Mission Bay, San Diego CA, and retrieved weekly from June-October of 2012-2015. During 'Blob' conditions in 2014 and 2015, lower settlement rates of spotted sand bass were associated with higher sea surface temperature and lower wind speed, chlorophyll a (chl a) and upwelling. Overall, the number of settlers per day peaked at intermediate chl a values across weeks. Individual characteristics of larvae that settled in 2014-2015 were consistent with a poor feeding environment. Although settlers were longer in length in 2014-15, fish in these years had slower larval otolith growth, a longer larval duration, and a trend towards lower condition, traits that are often associated with lower survival and recruitment. This study suggests that future settlement and recruitment of P. maculatofasciatus and other fishes with similar life histories may be adversely affected in southern California if ocean temperatures continue to rise in the face of climate change.

  8. Distribution and Recovery of Crude Oil in Various Types of Porous Media and Heterogeneity Configurations

    NASA Astrophysics Data System (ADS)

    Tick, G. R.; Ghosh, J.; Greenberg, R. R.; Akyol, N. H.

    2015-12-01

    A series of pore-scale experiments were conducted to understand the interfacial processes contributing to the removal of crude oil from various porous media during surfactant-induced remediation. Effects of physical heterogeneity (i.e. media uniformity) and carbonate soil content on oil recovery and distribution were evaluated through pore scale quantification techniques. Additionally, experiments were conducted to evaluate impacts of tetrachloroethene (PCE) content on crude oil distribution and recovery under these same conditions. Synchrotron X-ray microtomography (SXM) was used to obtain high-resolution images of the two-fluid-phase oil/water system, and quantify temporal changes in oil blob distribution, blob morphology, and blob surface area before and after sequential surfactant flooding events. The reduction of interfacial tension in conjunction with the sufficient increase in viscous forces as a result of surfactant flushing was likely responsible for mobilization and recovery of lighter fractions of crude oil. Corresponding increases in viscous forces were insufficient to initiate and maintain the displacement of the heavy crude oil in more homogeneous porous media systems during surfactant flushing. Interestingly, higher relative recoveries of heavy oil fractions were observed within more heterogeneous porous media indicating that wettability may be responsible for controlling mobilization in these systems. Compared to the "pure" crude oil experiments, preliminary results show that crude oil with PCE produced variability in oil distribution and recovery before and after each surfactant-flooding event. Such effects were likely influenced by viscosity and interfacial tension modifications associated with the crude-oil/solvent mixed systems.

  9. Gravitational Core-Mantle Coupling and the Acceleration of the Earth

    NASA Technical Reports Server (NTRS)

    Rubincam, David Parry; Smith, David E. (Technical Monitor)

    2001-01-01

    Gravitational core-mantle coupling may be the cause of the observed variable acceleration of the Earth's rotation on the 1000 year timescale. The idea is that density inhomogeneities which randomly come and go in the liquid outer core gravitationally attract density inhomogeneities in the mantle and crust, torquing the mantle and changing its rotation state. The corresponding torque by the mantle on the core may also explain the westward drift of the magnetic field of 0.2 deg per year. Gravitational core-mantle coupling would stochastically affect the rate of change of the Earth's obliquity by just a few per cent. Its contribution to polar wander would only be about 0.5% the presently observed rate. Tidal friction is slowing down the rotation of the Earth, overwhelming a smaller positive acceleration from postglacial rebound. Coupling between the liquid outer core of the Earth and the mantle has long been a suspected reason for changes in the length-of-day. The present investigation focuses on the gravitational coupling between the density anomalies in the convecting liquid outer core and those in the mantle and crust as a possible cause for the observed nonsecular acceleration on the millenial timescale. The basic idea is as follows. There are density inhomogeneities caused by blobs circulating in the outer core like the blobs in a lava lamp; thus the outer core's gravitational field is not featureless. Moreover, these blobs will form and dissipate somewhat randomly. Thus there will be a time variability to the fields. These density inhomogeneities will gravitationally attract the density anomalies in the mantle.

  10. Direct Observations of Magnetic Flux Rope Formation during a Solar Coronal Mass Ejection

    NASA Astrophysics Data System (ADS)

    Song, H. Q.; Zhang, J.; Chen, Y.; Cheng, X.

    2014-09-01

    Coronal mass ejections (CMEs) are the most spectacular eruptive phenomena in the solar atmosphere. It is generally accepted that CMEs are the results of eruptions of magnetic flux ropes (MFRs). However, there is heated debate on whether MFRs exist prior to the eruptions or if they are formed during the eruptions. Several coronal signatures, e.g., filaments, coronal cavities, sigmoid structures, and hot channels (or hot blobs), are proposed as MFRs and observed before the eruption, which support the pre-existing MFR scenario. There is almost no reported observation of MFR formation during the eruption. In this Letter, we present an intriguing observation of a solar eruptive event that occurred on 2013 November 21 with the Atmospheric Imaging Assembly on board the Solar Dynamic Observatory, which shows the formation process of the MFR during the eruption in detail. The process began with the expansion of a low-lying coronal arcade, possibly caused by the flare magnetic reconnection underneath. The newly formed ascending loops from below further pushed the arcade upward, stretching the surrounding magnetic field. The arcade and stretched magnetic field lines then curved in just below the arcade vertex, forming an X-point. The field lines near the X-point continued to approach each other and a second magnetic reconnection was induced. It is this high-lying magnetic reconnection that led to the formation and eruption of a hot blob (~10 MK), presumably an MFR, producing a CME. We suggest that two spatially separated magnetic reconnections occurred in this event, which were responsible for producing the flare and the hot blob (CME).

  11. Direct Observations of Magnetic Flux Rope Formation during a Solar Coronal Mass Ejection

    NASA Astrophysics Data System (ADS)

    Song, H.; Zhang, J.; Chen, Y.; Cheng, X.

    2014-12-01

    Coronal mass ejections (CMEs) are the most spectacular eruptive phenomena in the solar atmosphere. It is generally accepted that CMEs are the results of eruptions of magnetic flux ropes (MFRs). However, there is heated debate on whether MFRs pre-exist the eruptions or are formed during them. Several coronal signatures, e.g., filaments, coronal cavities, sigmoid structures and hot channels (or hot blobs), are proposed as MFRs and observed before the eruption, which support the pre-existing MFR scenario. There is almost no reported observation of MFR formation during the eruption. In this presentation, we present an intriguing observation of a solar eruptive event with the Atmospheric Imaging Assembly on board the Solar Dynamic Observatory, which shows a detailed formation process of the MFR during the eruption. The process started with the expansion of a low-lying coronal arcade, possibly caused by the flare magnetic reconnection underneath. The newly formed ascending loops from below further pushed the arcade upward, stretching the surrounding magnetic field. The arcade and stretched magnetic field lines then curved in just below the arcade vertex, forming an X-point. The field lines near the X-point continued to approach each other and a second magnetic reconnection was induced. It is this high-lying magnetic reconnection that led to the formation and eruption of a hot blob (~10 MK), presumably an MFR, producing a CME. We suggest that two spatially separated magnetic reconnections occurred in this event, which were responsible for producing the flare and the hot blob (CME), respectively.

  12. Unwinding the hairball graph: Pruning algorithms for weighted complex networks

    NASA Astrophysics Data System (ADS)

    Dianati, Navid

    2016-01-01

    Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
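
    A sketch in the spirit of the marginal likelihood filter: each integer edge weight is tested against a binomial null whose rate is proportional to the product of the endpoint strengths, and only significant edges are kept. The exact null model of the published MLF differs in detail:

        import networkx as nx
        from scipy.stats import binom

        def prune_marginal_likelihood(G, alpha=0.05):
            """Keep edges of the integer-weighted graph G whose weights are
            improbably large under a strength-based binomial null."""
            T = int(round(sum(d["weight"] for _, _, d in G.edges(data=True))))
            s = dict(G.degree(weight="weight"))
            H = nx.Graph()
            H.add_nodes_from(G.nodes())
            for i, j, d in G.edges(data=True):
                p_ij = s[i] * s[j] / (2.0 * T * T)            # assumed null rate for (i, j)
                pval = binom.sf(d["weight"] - 1, T, p_ij)     # P(X >= w), X ~ Bin(T, p_ij)
                if pval < alpha:
                    H.add_edge(i, j, weight=d["weight"], pval=pval)
            return H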

  13. Distributed Event-Based Set-Membership Filtering for a Class of Nonlinear Systems With Sensor Saturations Over Sensor Networks.

    PubMed

    Ma, Lifeng; Wang, Zidong; Lam, Hak-Keung; Kyriakoulis, Nikos

    2017-11-01

    In this paper, the distributed set-membership filtering problem is investigated for a class of discrete time-varying systems with an event-based communication mechanism over sensor networks. The system under consideration is subject to sector-bounded nonlinearity, unknown but bounded noises, and sensor saturations. Each intelligent sensing node transmits its data to its neighbors only when a certain triggering condition is violated. By means of a set of recursive matrix inequalities, sufficient conditions are derived for the existence of the desired distributed event-based filter, which is capable of confining the system state to certain ellipsoidal regions centered at the estimates. Within the established theoretical framework, two additional optimization problems are formulated: one is to seek the minimal ellipsoids (in the sense of matrix trace) for the best filtering performance, and the other is to maximize the triggering threshold so as to reduce the triggering frequency while maintaining satisfactory filtering performance. A numerically attractive chaos algorithm is employed to solve the optimization problems. Finally, an illustrative example is presented to demonstrate the effectiveness and applicability of the proposed algorithm.
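
    The event-based communication rule can be sketched very simply: a node rebroadcasts only when its current measurement deviates from the last transmitted one by more than the triggering threshold (the quantity the second optimisation problem maximises); the norm and threshold form are assumptions:

        import numpy as np

        def should_transmit(y_current, y_last_sent, threshold):
            """Broadcast only when the measurement has moved far enough from the
            last transmitted value (Euclidean norm and threshold are assumptions)."""
            deviation = np.linalg.norm(np.asarray(y_current) - np.asarray(y_last_sent))
            return deviation > threshold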

  14. Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    PubMed Central

    Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.

    2014-01-01

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120

  15. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters on 1024 × 1024 images with up to 255 × 255 kernels in around 8.4 milliseconds, or 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
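
    A software sketch of the van Herk/Gil-Werman recurrence that the hardware architecture implements (block-wise prefix and suffix maxima plus one merging comparison per output sample, roughly three comparisons per sample independent of the kernel size):

        import numpy as np

        def hgw_running_max(x, k):
            """1-D running maximum over windows of length k (van Herk/Gil-Werman)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            pad = (-n) % k
            xp = np.concatenate([x, np.full(pad, -np.inf)])   # pad to a multiple of k
            blocks = xp.reshape(-1, k)

            prefix = np.maximum.accumulate(blocks, axis=1).ravel()                    # g
            suffix = np.maximum.accumulate(blocks[:, ::-1], axis=1)[:, ::-1].ravel()  # h

            # max over window [i, i+k-1] = max(h[i], g[i+k-1]): one merge per sample
            return np.array([max(suffix[i], prefix[i + k - 1]) for i in range(n - k + 1)])

        # quick check against the direct definition
        x = np.random.rand(1000)
        assert np.allclose(hgw_running_max(x, 7),
                           [x[i:i + 7].max() for i in range(len(x) - 6)])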

  16. The performance of biological anaerobic filters packed with sludge-fly ash ceramic particles (SFCP) and commercial ceramic particles (CCP) during the restart period: effect of the C/N ratios and filter media.

    PubMed

    Yue, Qinyan; Han, Shuxin; Yue, Min; Gao, Baoyu; Li, Qian; Yu, Hui; Zhao, Yaqin; Qi, Yuanfeng

    2009-11-01

    Two lab-scale upflow biological anaerobic filters (BAF) packed with sludge-fly ash ceramic particles (SFCP) and commercial ceramic particles (CCP) were employed to investigate the effects of the C/N ratio and filter media on BAF performance during the restart period. The results indicated that the BAF could be restarted normally after a one-month cease in operation. A C/N ratio of 4.0 was the threshold for nitrate removal and nitrite accumulation. TN removal and phosphate uptake reached their maximum values at the same C/N ratio of 5.5. Ammonia formation was also found and exerted a negative influence on TN removal, especially when higher C/N ratios were applied. Nutrients were mainly degraded within a height of 25 cm from the bottom. In addition, SFCP, a novel filter medium manufactured from waste dewatered sludge and fly ash, showed better potential for inhibiting nitrite accumulation and for TN removal and phosphate uptake than CCP, owing to their special characteristics.

  17. Method and apparatus for generating low energy nuclear particles

    DOEpatents

    Powell, James R.; Reich, Morris; Ludewig, Hans; Todosow, Michael

    1999-02-09

    A particle accelerator (12) generates an input particle beam having an initial energy level above a threshold for generating secondary nuclear particles. A thin target (14) is rotated in the path of the input beam for undergoing nuclear reactions to generate the secondary particles and correspondingly decrease energy of the input beam to about the threshold. The target (14) produces low energy secondary particles and is effectively cooled by radiation and conduction. A neutron scatterer (44) and a neutron filter (42) are also used for preferentially degrading the secondary particles into a lower energy range if desired.

  18. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters where the coefficients play a major role for multiple removal in the filter coefficient space. To solve the 2D predictive filter the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires primaries and multiples are orthogonal. To relax the orthogonality assumption the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint of primaries. The FIST algorithm has been demonstrated as a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve the filter coefficients in the limited supporting region of filters. Compared with the FIST based multichannel predictive deconvolution without the limited supporting region of filters the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS based multichannel predictive deconvolution and FIST based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
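
    A generic FISTA sketch for an L1-regularised least-squares problem, i.e. the kind of solver referred to here as FIST; building the actual multichannel prediction matrix restricted to the limited supporting region of the filter is not reproduced:

        import numpy as np

        def soft(x, t):
            """Soft-thresholding (shrinkage) operator."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def fista(A, b, lam, n_iter=200):
            """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ y - b)
                x_new = soft(y - grad / L, lam / L)
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + (t - 1.0) / t_new * (x_new - x)
                x, t = x_new, t_new
            return x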

  19. Method and apparatus for biological sequence comparison

    DOEpatents

    Marr, T.G.; Chang, W.I.

    1997-12-23

    A method and apparatus are disclosed for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence. 5 figs.

  20. Method and apparatus for biological sequence comparison

    DOEpatents

    Marr, Thomas G.; Chang, William I-Wei

    1997-01-01

    A method and apparatus for comparing biological sequences from a known source of sequences, with a subject (query) sequence. The apparatus takes as input a set of target similarity levels (such as evolutionary distances in units of PAM), and finds all fragments of known sequences that are similar to the subject sequence at each target similarity level, and are long enough to be statistically significant. The invention device filters out fragments from the known sequences that are too short, or have a lower average similarity to the subject sequence than is required by each target similarity level. The subject sequence is then compared only to the remaining known sequences to find the best matches. The filtering member divides the subject sequence into overlapping blocks, each block being sufficiently large to contain a minimum-length alignment from a known sequence. For each block, the filter member compares the block with every possible short fragment in the known sequences and determines a best match for each comparison. The determined set of short fragment best matches for the block provide an upper threshold on alignment values. Regions of a certain length from the known sequences that have a mean alignment value upper threshold greater than a target unit score are concatenated to form a union. The current block is compared to the union and provides an indication of best local alignment with the subject sequence.

  1. Continuous Particulate Filter State of Health Monitoring Using Radio Frequency Sensing

    DOE PAGES

    Sappok, Alexander; Ragaller, Paul; Herman, Andrew; ...

    2018-04-03

    Reliable means for on-board detection of particulate filter failures or malfunctions are needed to meet diagnostics (OBD) requirements. Detecting these failures, which result in tailpipe particulate matter (PM) emissions exceeding the OBD limit, over all operating conditions is challenging. Current approaches employ differential pressure sensors and downstream PM sensors, in combination with particulate filter and engine-out soot models. These conventional monitors typically operate over narrowly-defined time windows and do not provide a direct measure of the filter’s state of health. In contrast, radio frequency (RF) sensors, which transmit a wireless signal through the filter substrate, provide a direct means for interrogating the condition of the filter itself. Here, this study investigated the use of RF sensors for the continuous measurement of filter trapping efficiency, which was compared to downstream measurements with an AVL Microsoot Sensor and a PM sampling probe simulating the geometry and installation configuration of a conventional PM sensor. The study included several particulate filter failure modes, both above and below the OBD threshold. The results confirmed the use of RF sensors to provide a direct and continuous measure of the particulate filter’s state of health over a range of typical in-use operating conditions, thereby significantly increasing the time window over which filter failures may be detected.
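
    Trapping efficiency itself is a simple ratio; a tiny sketch of how it relates engine-out and tailpipe PM rates (the numbers are hypothetical):

        def trapping_efficiency(pm_upstream, pm_downstream):
            """Filtration efficiency from engine-out and tailpipe PM rates."""
            return 1.0 - pm_downstream / pm_upstream

        # hypothetical numbers (mg/s); an OBD decision would compare the tailpipe PM,
        # or the efficiency implied by it, against the regulated threshold
        print(f"trapping efficiency = {trapping_efficiency(2.0, 0.12):.1%}")   # 94.0%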

  2. Continuous Particulate Filter State of Health Monitoring Using Radio Frequency Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sappok, Alexander; Ragaller, Paul; Herman, Andrew

    Reliable means for on-board detection of particulate filter failures or malfunctions are needed to meet diagnostics (OBD) requirements. Detecting these failures, which result in tailpipe particulate matter (PM) emissions exceeding the OBD limit, over all operating conditions is challenging. Current approaches employ differential pressure sensors and downstream PM sensors, in combination with particulate filter and engine-out soot models. These conventional monitors typically operate over narrowly-defined time windows and do not provide a direct measure of the filter’s state of health. In contrast, radio frequency (RF) sensors, which transmit a wireless signal through the filter substrate, provide a direct means for interrogating the condition of the filter itself. Here, this study investigated the use of RF sensors for the continuous measurement of filter trapping efficiency, which was compared to downstream measurements with an AVL Microsoot Sensor and a PM sampling probe simulating the geometry and installation configuration of a conventional PM sensor. The study included several particulate filter failure modes, both above and below the OBD threshold. The results confirmed the use of RF sensors to provide a direct and continuous measure of the particulate filter’s state of health over a range of typical in-use operating conditions, thereby significantly increasing the time window over which filter failures may be detected.

  3. Through the Ring of Fire: A Study of the Origin of Orphan Gamma-ray Flares in Blazars

    NASA Astrophysics Data System (ADS)

    MacDonald, Nicholas R.; Marscher, Alan P.; Jorstad, Svetlana G.; Joshi, Manasvita

    2014-06-01

    Blazars exhibit flares across the electromagnetic spectrum. Many gamma-ray flares are highly correlated with flares detected at optical wavelengths; however, a small subset appear to occur in isolation, with no counterpart in the other wave bands. These "orphan" gamma-ray flares challenge current models of blazar variability, most of which are unable to reproduce this type of behavior. We present numerical calculations of the time variable emission of a blazar based on a proposal by Marscher et al. (2010) to explain such events. In this model, a plasmoid ("blob") consisting of a power-law distribution of electrons propagates relativistically along the spine of a blazar jet and passes through a synchrotron emitting ring of electrons representing a shocked portion of the jet sheath. This ring supplies a source of seed photons that are inverse-Compton scattered by the electrons in the moving blob. As the blob approaches the ring, the photon density in the co-moving frame of the plasma increases, resulting in an orphan gamma-ray flare that then dissipates as the blob passes through and then moves away from the ring. The model includes the effects of radiative cooling and a spatially varying magnetic field. Support for the plausibility of this model is provided by observations by Marscher et al. (2010) of an isolated gamma-ray flare that was correlated with the passage of a superluminal knot through the inner jet of quasar PKS 1510-089. Synthetic light-curves produced by this new model are compared to the observed light-curves from this event. In addition, we present polarimetric observations that point to the existence of a jet sheath in the quasar 3C 273. A rough estimate of the bolometric luminosity of the sheath results in a value of ~10^45 erg s^-1 (roughly 10% of the jet luminosity). This inferred sheath luminosity indicates that the jet sheath in 3C 273 can provide a significant source of seed photons that need to be taken into account when modeling the non-thermal emission due to inverse-Compton scattering processes. Funding for this research was provided by an NSERC PGS D2 Doctoral Fellowship and NASA under Fermi Guest Investigator grants NNX12AO79G and NNX12AO59G.

  4. Watch Out for Falling Plasma

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-12-01

    The path taken by the falling fragment in the June 2011 event. [Adapted from Petralia et al. 2016]
    Sometimes plasma emitted from the Sun doesn't escape into space, but instead comes crashing back down to the solar surface. What can observations and models of this process tell us about how the plasma falls and the local conditions on the Sun?
    Fallback from a Flare
    On 7 June 2011, an M-class flare erupted from the solar surface. As the Solar Dynamics Observatory's Atmospheric Imaging Assembly looked on, plasma fragments from the flare arced away from the Sun and then fell back to the surface. Some fragments fell back where the Sun's magnetic field was weak, returning directly to the surface. But others fell within active regions, where they crashed into the Sun's magnetic field lines, brightening the channels and funneling along them through the dense corona and back to the Sun's surface.
    The authors' model of the falling blobs at several different times in their simulation. The blobs get disrupted when they encounter the field lines, and are then funneled along the channels to the solar surface. [Adapted from Petralia et al. 2016]
    This sort of flare and fall-back event is a common occurrence on the Sun, and SDO's observations of the June 2011 event present an excellent opportunity to understand the process better. A team of scientists led by Antonino Petralia (University of Palermo, Italy and INAF-OAPA) modeled this event in an effort to learn more about how the falling plasma interacts with strong magnetic fields above the solar surface.
    Magnetic Fields as Guides
    Petralia and collaborators used three-dimensional magnetohydrodynamical modeling to attempt to reproduce the observations of this event. They simulated blobs of plasma as they fall back to the solar surface and interact with magnetic field lines over a range of different conditions. The team found that only simulations that assume a relatively strong magnetic field resulted in the blobs funneling along a channel to the Sun's surface; with weaker fields the blobs simply broke through the field lines.
    The observations were best reproduced by downfall channeled in a million-Kelvin coronal loop confined by a magnetic field of 10-20 Gauss. In this scenario, a falling fragment is deviated from its path by the field and disrupted. It's then channeled along the magnetic flux tube, driving a shock and heating in the tube ahead of it, which, the authors find, is the cause of the observed brightening that occurs ahead of the actual plasma passage. Petralia and collaborators point out that this new mechanism for brightening downflows channeled by the magnetic field is applicable not only in our Sun, but also in young, accreting stars. Events like these can therefore work as probes of the ambient atmosphere of such stars, providing information about the local plasma density and magnetic field.
    Bonus
    Check out the two awesome videos below! In the first one, you can see the SDO/AIA observations of the plasma fragment falling back down and hitting a magnetic channel, which lights up as the shock propagates. In the second one, you can see one of the authors' models of this process; this video renders the density of blobs of plasma as they fall and strike magnetic field lines.
    http://cdn.iopscience.com/images/0004-637X/832/1/2/Full/apjaa3f55f1_video.mp4
    http://cdn.iopscience.com/images/0004-637X/832/1/2/Full/apjaa3f55f5_video.mp4
    Citation: A. Petralia et al. 2016 ApJ 832 2. doi:10.3847/0004-637X/832/1/2

  5. Curious Case of a Stripped Elliptical Galaxy

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-05-01

    MUSE fields of view (1′ × 1′ for each square) are superimposed on a pseudo-color image of the elliptical galaxy in Abell 2670. The blue blobs lie in the opposite direction to the galactic center. [Sheen et al. 2017] An elliptical galaxy in the cluster Abell 2670 has been discovered with some unexpected features. What conditions led to this galaxy's unusual morphology? Unexpected Jellyfish: We often see galaxies that have been disrupted or reshaped by their motion within a cluster, but these are usually late-type galaxies like our own. Such gas-rich galaxies are distorted by ram pressure as they fall into the cluster center, growing long tails of stripped gas and young stars that earn them the name jellyfish galaxies. But early-type, elliptical galaxies have long since used up or cleared out most of their gas, and they correspondingly form very few new stars. It's therefore unsurprising that they've never before been spotted with jellyfish-like features. Panels a and b show zoomed-in observations of some of the star-forming blobs with tadpole-like morphology. Panel c shows a schematic illustration of how ram-pressure stripping causes this shape. [Adapted from Sheen et al. 2017] New deep observations of an elliptical galaxy in the cluster Abell 2670, however, have revealed some unexpected structures for an early-type galaxy. Led by Yun-Kyeong Sheen (Korea Astronomy and Space Science Institute), a team of scientists now reports on the optical and spectroscopic observations of this galaxy, made with the MUSE instrument on the Very Large Telescope in Chile. Tadpole Blobs: These observations reveal a number of features, including starbursts at the galactic center, 80-parsec-long tails of ionized gas, disturbed halo features, and several blue star-forming blobs with tadpole-like morphology in the surrounding region. The blobs have stellar tails that point in the direction of motion of the galaxy (toward the cluster center) and streams of ionized gas that point in the opposite direction. All of these features are signs that this galaxy is being ram-pressure stripped as it falls into the center of the cluster. The star-forming blobs, for example, are exhibiting classic ram-pressure-stripping behavior: as a galaxy falls into the cluster center, streams of ionized gas blow downwind, and stars (which don't respond as easily to the force of the wind) are left behind in a stream pointing upwind. Gas from a Merger? An example of a tidal tail drawn out from a disrupted late-type galaxy. The disrupted galaxy in Abell 2670 is, in contrast, an early-type, elliptical galaxy that should be gas-poor. [H. Ford, JHU/M. Clampin, STScI/G. Hartig, STScI/G. Illingworth, UCO, Lick/ACS Science Team/ESA/NASA] But if this is an elliptical galaxy, where did the gas come from for the tails and the galactic-center star formation? To rule out the obvious, the authors first check that this galaxy really is an early-type elliptical. The galaxy's color (reddened), morphology (elliptical, with no sign of a stellar disk), and stellar velocities (no sign of stellar rotation) all confirm this. The authors therefore speculate that the galaxy recently underwent a wet merger, a merger with a companion galaxy that was gas-rich. Much of this gas was driven to the center of the elliptical galaxy in the merger, and it's now responsible for the starbursts there. We'll hopefully be able to draw stronger conclusions about this unusual galaxy after additional investigation into the amount of gas it contains and the galaxy's star formation rate. In the meantime, this stripped elliptical makes for an intriguing puzzle! Citation: Yun-Kyeong Sheen et al 2017 ApJL 840 L7. doi:10.3847/2041-8213/aa6d79

  6. HUBBLE REVEALS STELLAR FIREWORKS ACCOMPANYING GALAXY COLLISION

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This Hubble Space Telescope image provides a detailed look at a brilliant 'fireworks show' at the center of a collision between two galaxies. Hubble has uncovered over 1,000 bright, young star clusters bursting to life as a result of the head-on wreck. [Left] A ground-based telescopic view of the Antennae galaxies (known formally as NGC 4038/4039) - so named because a pair of long tails of luminous matter, formed by the gravitational tidal forces of their encounter, resembles an insect's antennae. The galaxies are located 63 million light-years away in the southern constellation Corvus. [Right] The respective cores of the twin galaxies are the orange blobs, left and right of image center, crisscrossed by filaments of dark dust. A wide band of chaotic dust, called the overlap region, stretches between the cores of the two galaxies. The sweeping spiral-like patterns, traced by bright blue star clusters, show the result of a firestorm of star birth activity which was triggered by the collision. This natural-color image is a composite of four separately filtered images taken with the Wide Field Planetary Camera 2 (WFPC2), on January 20, 1996. Resolution is 15 light-years per pixel (picture element). Credit: Brad Whitmore (STScI), and NASA

  7. Efficient and Scalable Graph Similarity Joins in MapReduce

    PubMed Central

    Chen, Yifan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures to filter out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135

  8. Efficient and scalable graph similarity joins in MapReduce.

    PubMed

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures to filter out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.

  9. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

    We present a novel method to optimize the discrimination ability and noise robustness of composite filters. This method is based on iterative preprocessing of the training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio of authentic faces and making the filter immune to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve any of the complicated mathematical analysis and computation often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method for counting the true positive and false positive rates that involves the difference between the PCE and a threshold.
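    For context, the sketch below shows one common way a PCE-style metric can be computed from a correlation plane and compared against a decision threshold. It is a minimal NumPy illustration, not the authors' implementation: the FFT-based circular correlation, the exact PCE definition (some variants exclude a small region around the peak from the energy term), and the threshold value are all assumptions.

```python
import numpy as np

def correlate_fft(image, template):
    """Circular cross-correlation of an image with a filter template via the FFT."""
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)
    return np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))

def peak_to_correlation_energy(plane):
    """PCE: squared correlation peak divided by the total energy of the plane.
    (Some definitions exclude a small region around the peak from the energy.)"""
    return plane.max() ** 2 / np.sum(plane ** 2)

def is_authentic(image, template, threshold=1e-3):
    """Accept the test face when the PCE exceeds a chosen (illustrative) threshold."""
    return peak_to_correlation_energy(correlate_fft(image, template)) > threshold
```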

  10. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    PubMed Central

    Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

    This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (A_z) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with A_z = 0.9502 over a training set of 40 images and A_z = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422
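    As a rough illustration of the pipeline described above (Gabor filtering of an angiogram followed by inter-class variance, i.e. Otsu, thresholding of the response), here is a minimal OpenCV sketch. The kernel size, sigma, wavelength, gamma, and number of orientations are placeholder values, not the BUMDA-optimized parameters from the paper, and dark vessels may require inverting the input first.

```python
import cv2
import numpy as np

def gabor_vessel_response(gray, ksize=15, sigma=2.5, lambd=8.0, gamma=0.5, n_orient=12):
    """Maximum response of a bank of single-scale Gabor kernels over orientations."""
    gray = gray.astype(np.float32)
    response = np.full(gray.shape, -np.inf, dtype=np.float32)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
        response = np.maximum(response, cv2.filter2D(gray, cv2.CV_32F, kernel))
    return response

def segment_vessels(gray):
    """Classify vessel vs. non-vessel pixels with Otsu's (inter-class variance) threshold."""
    resp = gabor_vessel_response(gray)
    resp8 = cv2.normalize(resp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(resp8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```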

  11. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore usually are evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they should not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantifying image quality. The images to be compressed are transformed to a wavelet representation using Daubechies12 wavelets and then compressed by thresholding the filtered coefficients. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
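    The transform-and-threshold step described here can be sketched in a few lines with PyWavelets. This is an illustrative assumption of the workflow, not the authors' code: the 'db12' wavelet name is used as a stand-in for the Daubechies-12 family (naming conventions for Daubechies filters differ between sources), and the decomposition level and keep-fraction are arbitrary.

```python
import numpy as np
import pywt

def wavelet_compress(image, wavelet="db12", level=4, keep_fraction=0.05):
    """Transform, keep only the largest-magnitude coefficients, and reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Threshold chosen so that roughly `keep_fraction` of coefficients survive.
    thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr_thresh = pywt.threshold(arr, thresh, mode="hard")
    coeffs_thresh = pywt.array_to_coeffs(arr_thresh, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_thresh, wavelet)
```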

  12. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    PubMed

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.

  13. Digital filtering and model updating methods for improving the robustness of near-infrared multivariate calibrations.

    PubMed

    Kramer, Kirsten E; Small, Gary W

    2009-02-01

    Fourier transform near-infrared (NIR) transmission spectra are used for quantitative analysis of glucose for 17 sets of prediction data sampled as much as six months outside the timeframe of the corresponding calibration data. Aqueous samples containing physiological levels of glucose in a matrix of bovine serum albumin and triacetin are used to simulate clinical samples such as blood plasma. Background spectra of a single analyte-free matrix sample acquired during the instrumental warm-up period on the prediction day are used for calibration updating and for determining the optimal frequency response of a preprocessing infinite impulse response time-domain digital filter. By tuning the filter and the calibration model to the specific instrumental response associated with the prediction day, the calibration model is given enhanced ability to operate over time. This methodology is demonstrated in conjunction with partial least squares calibration models built with a spectral range of 4700-4300 cm⁻¹. By using a subset of the background spectra to evaluate the prediction performance of the updated model, projections can be made regarding the success of subsequent glucose predictions. If a threshold standard error of prediction (SEP) of 1.5 mM is used to establish successful model performance with the glucose samples, the corresponding threshold for the SEP of the background spectra is found to be 1.3 mM. For calibration updating in conjunction with digital filtering, SEP values of all 17 prediction sets collected over 3-178 days displaced from the calibration data are below 1.5 mM. In addition, the diagnostic based on the background spectra correctly assesses the prediction performance in 16 of the 17 cases.

  14. The Blob That Ate Physics

    ERIC Educational Resources Information Center

    Thomsen, Dietrick E.

    1975-01-01

    Summarizes some thoughts of Stephen W. Hawking who proposes that certain kinds of communications across the event horizon are possible, that they lead to the evaporation or explosion of the black hole, and, therefore, that classical or quantum mechanical causality has no meaning. (GS)

  15. Iterative deblending of simultaneous-source data using a coherency-pass shaping operator

    NASA Astrophysics Data System (ADS)

    Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Zhang, Dong; Li, Chao; Pan, Xiao; Chen, Yangkang

    2017-10-01

    Simultaneous-source acquisition offers great economic savings, but it brings the unprecedented challenge of removing the crosstalk interference from the recorded seismic data. In this paper, we propose a novel iterative method to separate simultaneous-source data based on a coherency-pass shaping operator. The coherency-pass filter is used to constrain the model, that is, the unblended data to be estimated, in the shaping regularization framework. In a simultaneous-source survey, the incoherent interference from adjacent shots greatly increases the rank of the frequency-domain Hankel matrix that is formed from the blended record. Thus, a method based on rank reduction is capable of separating the blended record to some extent. However, the shortcoming is that it may leave residual noise when there is strong blending interference. We propose to cascade the rank reduction and thresholding operators to deal with this issue. In the initial iterations, we adopt a small rank to aggressively suppress the blending interference and a large thresholding value as a strong constraint to remove the residual noise in the time domain. In the later iterations, since more and more events have been recovered, we weaken the constraint by increasing the rank and shrinking the threshold to recover weak events and to guarantee convergence. In this way, the combined rank-reduction and thresholding strategy acts as a coherency-pass filter, which passes only the coherent high-amplitude component after rank reduction instead of passing both signal and noise as in traditional rank-reduction-based approaches. Two synthetic examples are tested to demonstrate the performance of the proposed method. In addition, the application to two field data sets (common receiver gathers and stacked profiles) further validates the effectiveness of the proposed method.
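    The core "rank reduction then thresholding" idea can be illustrated with a generic NumPy sketch on a data matrix. This is only a simplified stand-in: the paper operates on frequency-domain Hankel matrices inside an iterative shaping-regularization loop with re-blending, none of which is reproduced here, and the rank/threshold schedule shown is purely illustrative.

```python
import numpy as np

def rank_reduce(matrix, rank):
    """Truncated SVD: keep only the `rank` largest singular components."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt

def soft_threshold(x, tau):
    """Shrink values toward zero; low-amplitude residual noise is removed entirely."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def coherency_pass(data_matrix, rank, tau):
    """Cascade of rank reduction and thresholding: only coherent,
    high-amplitude components survive."""
    return soft_threshold(rank_reduce(data_matrix, rank), tau)

# Illustrative iteration schedule: a small rank and a large threshold early on
# (strong constraint), relaxed in later iterations to recover weaker events.
schedule = [(2, 0.8), (4, 0.5), (8, 0.2), (16, 0.05)]
```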

  16. Measurement Techniques for Transmit Source Clock Jitter for Weak Serial RF Links

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Schlesinger, Adam M.

    2010-01-01

    Techniques for filtering clock jitter measurements are developed, in the context of controlling data modulation jitter on an RF carrier to accommodate low signal-to-noise ratio thresholds of high-performance error correction codes. Measurement artifacts from sampling are considered, and a tutorial on interpretation of direct readings is included.

  17. GOES Cloud Detection at the Global Hydrology and Climate Center

    NASA Technical Reports Server (NTRS)

    Laws, Kevin; Jedlovec, Gary J.; Arnold, James E. (Technical Monitor)

    2002-01-01

    The bi-spectral threshold (BTH) for cloud detection and height assignment is now operational at NASA's Global Hydrology and Climate Center (GHCC). This new approach is similar in principle to the bi-spectral spatial coherence (BSC) method with improvements made to produce a more robust cloud-filtering algorithm for nighttime cloud detection and subsequent 24-hour operational cloud top pressure assignment. The method capitalizes on cloud and surface emissivity differences from the GOES 3.9 and 10.7-micrometer channels to distinguish cloudy from clear pixels. Separate threshold values are determined for day and nighttime detection, and applied to a 20-day minimum composite difference image to better filter background effects and enhance differences in cloud properties. A cloud top pressure is assigned to each cloudy pixel by referencing the 10.7-micrometer channel temperature to a thermodynamic profile from a locally-run regional forecast model. This paper and supplemental poster will present an objective validation of nighttime cloud detection by the BTH approach in comparison with previous methods. The cloud top pressure will be evaluated by comparing to the NESDIS operational CO2 slicing approach.

  18. a Method of Generating dem from Dsm Based on Airborne Insar Data

    NASA Astrophysics Data System (ADS)

    Lu, W.; Zhang, J.; Xue, G.; Wang, C.

    2018-04-01

    Traditional terrestrial survey methods for acquiring a DEM cannot meet the requirement of collecting large quantities of data in real time, whereas a DSM can be obtained quickly using dual-antenna synthetic aperture radar interferometry, and generating the DEM from the DSM is faster and more accurate. It is therefore important to derive the DEM from the DSM based on airborne InSAR data. This paper presents a method for generating an accurate DEM from the DSM. Two steps are applied to acquire the accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as overlay and shadow produce gross errors that affect the data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the coherence of the interferometry. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, the experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.

  19. Spectral resampling based on user-defined inter-band correlation filter: C3 and C4 grass species classification

    NASA Astrophysics Data System (ADS)

    Adjorlolo, Clement; Mutanga, Onisimo; Cho, Moses A.; Ismail, Riyad

    2013-04-01

    In this paper, a user-defined inter-band correlation filter function was used to resample hyperspectral data and thereby mitigate the problem of multicollinearity in classification analysis. The proposed resampling technique convolves the spectral dependence information between a chosen band-centre and its shorter and longer wavelength neighbours. Weighting threshold of inter-band correlation (WTC, Pearson's r) was calculated, whereby r = 1 at the band-centre. Various WTC (r = 0.99, r = 0.95 and r = 0.90) were assessed, and bands with coefficients beyond a chosen threshold were assigned r = 0. The resultant data were used in the random forest analysis to classify in situ C3 and C4 grass canopy reflectance. The respective WTC datasets yielded improved classification accuracies (kappa = 0.82, 0.79 and 0.76) with less correlated wavebands when compared to resampled Hyperion bands (kappa = 0.76). Overall, the results obtained from this study suggested that resampling of hyperspectral data should account for the spectral dependence information to improve overall classification accuracy as well as reducing the problem of multicollinearity.
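    Read as a resampling recipe, the scheme can be sketched with NumPy as below: compute the Pearson correlation of every band with a chosen band-centre, zero the weights of bands whose correlation falls outside the WTC cut-off, and convolve. The samples-by-bands data layout, the normalization, and the interpretation of the cut-off are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def correlation_resample(spectra, centre_idx, wtc=0.95):
    """Resample spectra (rows = samples, columns = bands) into a single band
    centred on `centre_idx`, weighting each neighbour by its Pearson r with the
    band-centre and dropping bands whose correlation falls below the WTC cut-off."""
    n_bands = spectra.shape[1]
    r = np.array([np.corrcoef(spectra[:, centre_idx], spectra[:, b])[0, 1]
                  for b in range(n_bands)])
    weights = np.where(r >= wtc, r, 0.0)   # r = 1 at the band-centre itself
    weights /= weights.sum()
    return spectra @ weights               # weighted-average (convolved) band
```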

  20. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz

    This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.

  1. Position-specific automated processing of V3 env ultra-deep pyrosequencing data for predicting HIV-1 tropism

    PubMed Central

    Jeanne, Nicolas; Saliou, Adrien; Carcenac, Romain; Lefebvre, Caroline; Dubois, Martine; Cazabat, Michelle; Nicot, Florence; Loiseau, Claire; Raymond, Stéphanie; Izopet, Jacques; Delobel, Pierre

    2015-01-01

    HIV-1 coreceptor usage must be accurately determined before starting CCR5 antagonist-based treatment, as the presence of undetected minor CXCR4-using variants can cause subsequent virological failure. Ultra-deep pyrosequencing of HIV-1 V3 env makes it possible to detect low levels of CXCR4-using variants that current genotypic approaches miss. However, processing the mass of sequence data and the need to identify true minor variants while excluding artifactual sequences generated during amplification and ultra-deep pyrosequencing are rate-limiting. Arbitrary fixed cut-offs below which minor variants are discarded are currently used, but the errors generated during ultra-deep pyrosequencing are sequence-dependent rather than random. We have developed an automated processing pipeline for HIV-1 V3 env ultra-deep pyrosequencing data that uses biological filters to discard artifactual or non-functional V3 sequences, followed by statistical filters to determine position-specific sensitivity thresholds rather than arbitrary fixed cut-offs. It retains authentic sequences with point mutations at V3 positions of interest and discards artifactual ones with accurate sensitivity thresholds. PMID:26585833

  2. Position-specific automated processing of V3 env ultra-deep pyrosequencing data for predicting HIV-1 tropism.

    PubMed

    Jeanne, Nicolas; Saliou, Adrien; Carcenac, Romain; Lefebvre, Caroline; Dubois, Martine; Cazabat, Michelle; Nicot, Florence; Loiseau, Claire; Raymond, Stéphanie; Izopet, Jacques; Delobel, Pierre

    2015-11-20

    HIV-1 coreceptor usage must be accurately determined before starting CCR5 antagonist-based treatment, as the presence of undetected minor CXCR4-using variants can cause subsequent virological failure. Ultra-deep pyrosequencing of HIV-1 V3 env makes it possible to detect low levels of CXCR4-using variants that current genotypic approaches miss. However, processing the mass of sequence data and the need to identify true minor variants while excluding artifactual sequences generated during amplification and ultra-deep pyrosequencing are rate-limiting. Arbitrary fixed cut-offs below which minor variants are discarded are currently used, but the errors generated during ultra-deep pyrosequencing are sequence-dependent rather than random. We have developed an automated processing pipeline for HIV-1 V3 env ultra-deep pyrosequencing data that uses biological filters to discard artifactual or non-functional V3 sequences, followed by statistical filters to determine position-specific sensitivity thresholds rather than arbitrary fixed cut-offs. It retains authentic sequences with point mutations at V3 positions of interest and discards artifactual ones with accurate sensitivity thresholds.

  3. Image processing of vaporizing GDI sprays: a new curvature-based approach

    NASA Astrophysics Data System (ADS)

    Lazzaro, Maurizio; Ianniello, Roberto

    2018-01-01

    This article introduces an innovative method for the segmentation of Mie-scattering and schlieren images of GDI sprays. The contours of the liquid phase are obtained by segmenting the scattering images of the spray by means of optimal filtering of the image, relying on variational methods, and an original thresholding procedure based on an iterative application of Otsu's method. The segmentation of schlieren images, to get the contours of the spray vapour phase, is obtained by exploiting the surface curvature of the image to strongly enhance the intensity texture due to the vapour density gradients. This approach allows one to unambiguously discern the whole vapour phase of the spray from the background. Additional information about the spray liquid phase can be obtained by thresholding filtered schlieren images. The potential of this method has been substantiated in the segmentation of schlieren and scattering images of a GDI spray of isooctane. The fuel, heated to 363 K, was injected into nitrogen at densities of 1.12 and 3.5 kg m⁻³ with temperatures of 333 K and 573 K.
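    One simple way to realize an "iterative application of Otsu's method" is to re-estimate the threshold on the above-threshold pixel population until it stabilizes; the scikit-image sketch below illustrates that general idea. It is an assumption about the iteration scheme, not the authors' exact procedure.

```python
import numpy as np
from skimage.filters import threshold_otsu

def iterative_otsu(image, max_iter=10, tol=1e-3):
    """Re-apply Otsu's method to the above-threshold pixel population until the
    threshold stops moving; returns the final threshold value."""
    t = threshold_otsu(image)
    for _ in range(max_iter):
        subset = image[image > t]
        if subset.size < 2 or subset.min() == subset.max():
            break
        t_new = threshold_otsu(subset)
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# Example use on a Mie-scattering frame: liquid_mask = frame > iterative_otsu(frame)
```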

  4. Orientation tuning of contrast masking caused by motion streaks.

    PubMed

    Apthorp, Deborah; Cass, John; Alais, David

    2010-08-01

    We investigated whether the oriented trails of blur left by fast-moving dots (i.e., "motion streaks") effectively mask grating targets. Using a classic overlay masking paradigm, we varied mask contrast and target orientation to reveal underlying tuning. Fast-moving Gaussian blob arrays elevated thresholds for detection of static gratings, both monoptically and dichoptically. Monoptic masking at high mask (i.e., streak) contrasts is tuned for orientation and exhibits a similar bandwidth to masking functions obtained with grating stimuli (∼30 degrees). Dichoptic masking fails to show reliable orientation-tuned masking, but dichoptic masks at very low contrast produce a narrowly tuned facilitation (∼17 degrees). For iso-oriented streak masks and grating targets, we also explored masking as a function of mask contrast. Interestingly, dichoptic masking shows a classic "dipper"-like TVC function, whereas monoptic masking shows no dip and a steeper "handle". There is a very strong unoriented component to the masking, which we attribute to transiently biased temporal frequency masking. Fourier analysis of "motion streak" images shows interesting differences between dichoptic and monoptic functions and the information in the stimulus. Our data add weight to the growing body of evidence that the oriented blur of motion streaks contributes to the processing of fast motion signals.

  5. Relativistic MHD simulations of collision-induced magnetic dissipation in Poynting-flux-dominated jets/outflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Wei

    2015-07-21

    The question of the energy composition of the jets/outflows in high-energy astrophysical systems, e.g. GRBs, AGNs, is taken up first: matter-flux-dominated (MFD), σ < 1, and/or Poynting-flux-dominated (PFD), σ > 1? The standard fireball IS model and dissipative photosphere model are MFD, while the ICMART (Internal-Collision-induced MAgnetic Reconnection and Turbulence) model is PFD. Motivated by the ICMART model and other relevant problems, such as the "jets in a jet" model of AGNs, the author investigates the models from the EMF energy dissipation efficiency, relativistic outflow generation, and σ evolution points of view, and simulates collisions between high-σ blobs to mimic the interactions inside the PFD jets/outflows by using a 3D SRMHD code which solves the conservative form of the ideal MHD equations. σ_b,f is calculated from the simulation results (threshold = 1). The efficiency obtained from this hybrid method is similar to the efficiency obtained from the energy evolution of the simulations (35.2%). The efficiency is nearly σ independent, which is also confirmed by the hybrid method. The σ_b,i - σ_b,f relation shows an interesting linear relationship. Results of several parameter studies of EMF energy dissipation efficiency are shown.

  6. Sub-threshold standard cell library design for ultra-low power biomedical applications.

    PubMed

    Li, Ming-Zhong; Ieong, Chio-In; Law, Man-Kay; Mak, Pui-In; Vai, Mang-I; Martins, Rui P

    2013-01-01

    Portable/Implantable biomedical applications usually exhibit stringent power budgets for prolonging battery life time, but loose operating frequency requirements due to small bio-signal bandwidths, typically below a few kHz. The use of sub-threshold digital circuits is ideal in such scenario to achieve optimized power/speed tradeoffs. This paper discusses the design of a sub-threshold standard cell library using a standard 0.18-µm CMOS technology. A complete library of 56 standard cells is designed and the methodology is ensured through schematic design, transistor width scaling and layout design, as well as timing, power and functionality characterization. Performance comparison between our sub-threshold standard cell library and a commercial standard cell library using a 5-stage ring oscillator and an ECG designated FIR filter is performed. Simulation results show that our library achieves a total power saving of 95.62% and a leakage power reduction of 97.54% when compared with the same design implemented by the commercial standard cell library (SCL).

  7. Assessment of central auditory processing in a group of workers exposed to solvents.

    PubMed

    Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan

    2006-12-01

    Despite having normal hearing thresholds and speech recognition thresholds, results for central auditory tests were abnormal in a group of workers exposed to solvents. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds. A central auditory processing disorder may underlie these difficulties. To study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. Workers exposed to solvents had lower results in comparison with the control group and previously reported normative data, in the majority of the tests.

  8. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach will be illustrated by several examples from the aerospace domain.

  9. Smart filters: from VIS/NIR to MW/LWIR protection

    NASA Astrophysics Data System (ADS)

    Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe

    2014-06-01

    New developments in imaging systems imply the use of multiple wavelength bands, VIS and IR, for image enhancement and richer data presentation. Some of those systems, such as

  10. Blood collected on filter paper for wildlife serology: detecting antibodies to Neospora caninum, West Nile virus, and five bovine viruses in reindeer.

    PubMed

    Curry, Patricia S; Ribble, Carl; Sears, William C; Hutchins, Wendy; Orsel, Karin; Godson, Dale; Lindsay, Robbin; Dibernardo, Antonia; Kutz, Susan J

    2014-04-01

    We compared Nobuto filter paper (FP) whole-blood samples to serum for detecting antibodies to seven pathogens in reindeer (Rangifer tarandus tarandus). Serum and FP samples were collected from captive reindeer in 2008-2009. Sample pairs (serum and FP eluates) were assayed in duplicate at diagnostic laboratories with the use of competitive enzyme-linked immunosorbent assays (cELISAs) for Neospora caninum and West Nile virus (WNV); indirect ELISA (iELISAs) for bovine herpesvirus type 1 (BHV-1), parainfluenza virus type 3 (PI-3), and bovine respiratory syncytial virus (BRSV); and virus neutralization (VN) for bovine viral diarrhea virus (BVDV) types I and II. Assay thresholds were evidence-based values employed by each laboratory. Comparable performance to serum was defined as FP sensitivity and specificity ≥ 80%. Filter-paper specificity estimates ranged from 92% in the cELISAs for N. caninum and WNV to 98% in the iELISAs for PI-3 and BRSV. Sensitivity was >85% for five tests (most ≥ 95%) but was insufficient (71-82%) for the PI-3 and BRSV iELISAs. Lowering the threshold for FP samples in these two ELISAs raised sensitivity to ≥ 87% and reduced specificity slightly (≥ 90% in three of the four test runs). Sample size limited the precision of some performance estimates. Based on the criteria of sensitivity and specificity ≥ 80%, and using adjusted FP thresholds for PI-3 and BRSV, FP sensitivity and specificity were comparable to serum in all seven assays. A potential limitation of FP is reduced sensitivity in tests that require undiluted serum (i.e., N. caninum cELISA and BVDV VNs). Possible toxicity to the assay cell layer in VN requires investigation. Results suggested that cELISA is superior to iELISA for detecting antibodies in FP samples from reindeer and other Rangifer tarandus subspecies. Our findings expand the potential utility of FP sampling from wildlife.

  11. Quantitative measurement of interocular suppression in anisometropic amblyopia: a case-control study.

    PubMed

    Li, Jinrong; Hess, Robert F; Chan, Lily Y L; Deng, Daming; Yang, Xiao; Chen, Xiang; Yu, Minbin; Thompson, Benjamin

    2013-08-01

    The aims of this study were to assess (1) the relationship between interocular suppression and visual function in patients with anisometropic amblyopia, (2) whether suppression can be simulated in matched controls using monocular defocus or neutral density filters, (3) the effects of spectacle or rigid gas-permeable contact lens correction on suppression in patients with anisometropic amblyopia, and (4) the relationship between interocular suppression and outcomes of occlusion therapy. Case-control study (aims 1-3) and cohort study (aim 4). Forty-five participants with anisometropic amblyopia and 45 matched controls (mean age, 8.8 years for both groups). Interocular suppression was assessed using Bagolini striated lenses, neutral density filters, and an objective psychophysical technique that measures the amount of contrast imbalance between the 2 eyes that is required to overcome suppression (dichoptic motion coherence thresholds). Visual acuity was assessed using a logarithm minimum angle of resolution tumbling E chart and stereopsis using the Randot preschool test. Interocular suppression assessed using dichoptic motion coherence thresholds. Patients exhibited significantly stronger suppression than controls, and stronger suppression was correlated significantly with poorer visual acuity in amblyopic eyes. Reducing monocular acuity in controls to match that of cases using neutral density filters (luminance reduction) resulted in levels of interocular suppression comparable with that in patients. This was not the case for monocular defocus (optical blur). Rigid gas-permeable contact lens correction resulted in less suppression than spectacle correction, and stronger suppression was associated with poorer outcomes after occlusion therapy. Interocular suppression plays a key role in the visual deficits associated with anisometropic amblyopia and can be simulated in controls by inducing a luminance difference between the eyes. Accurate quantification of suppression using the dichoptic motion coherence threshold technique may provide useful information for the management and treatment of anisometropic amblyopia. Proprietary or commercial disclosure may be found after the references. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  12. High-Resolution Experimental Investigation of mass transfer enhancement by chemical oxidation from DNAPL entrapped in variable-aperture fractures

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Rajaram, H.; Detwiler, R. L.; Jones, T.

    2012-12-01

    Permanganate oxidation of DNAPL-contaminated fractured rock is an effective remediation technology. Permanganate ion reacts with dissolved DNAPL in a bi-molecular oxidation-reduction reaction. The consumption of dissolved DNAPL in this reaction results in increased concentration gradients away from the free-phase DNAPL, resulting in reaction-enhanced mass transfer, which accelerates contaminant removal. The specific objective of our research was to perform high-resolution, non-intrusive experimental studies of permanganate oxidation in a 15.24 × 15.24 cm, transparent, analog, variable-aperture fracture with complex initial TCE entrapped-phase geometry. Our experimental system uses light-transmission techniques to accurately measure both the fracture aperture and the evolution of individual entrapped DNAPL blobs during the remediation experiments at high resolution (pixel size: 6.2×10⁻³ cm). Three experiments were performed with different flow rates and permanganate inflow concentrations to observe DNAPL-permanganate interactions across a broader range of conditions. Prior to initiating each experiment, the aperture field within the fracture was measured. The oxidation experiment was initiated by TCE injection into the water-saturated fracture until the TCE reached the outflow end, followed by water re-injection through the fracture. The flowing water mobilized some TCE. We continued injection of water until TCE mobilization ceased, leaving behind the residual TCE entrapped within the variable-aperture fracture. Subsequently, permanganate injection through the fracture resulted in propagation of a fingered reaction front into the fracture. We developed image processing algorithms to analyze the evolution of DNAPL phase geometry over the duration of the experiment. The permanganate consumption rate varied significantly within the fracture due to the complex flow and DNAPL concentration fields. Precipitated MnO2 was clearly evident on the downstream side of DNAPL blobs near the inflow boundary, indicating high reaction rates in these regions. This behavior is explained by the diversion of permanganate around entrapped DNAPL blobs and downstream advection of dissolved DNAPL. Our results indicate that the total rate of mass transfer from the DNAPL blobs is higher at early times, when not much MnO2 has formed and precipitated. With time, MnO2 precipitation in the fracture leads to changes in the aperture field and flow field. Precipitated MnO2 around TCE blobs also decreases the DNAPL accessible surface area. By comparing the results of the three experiments, we conclude that low permanganate concentrations and high flow rates lead to more efficient DNAPL remediation, resulting from the fact that under these conditions there is slower MnO2 formation and less precipitation within the fracture. We also present results on the time-evolution of fracture-scale permanganate consumption and DNAPL removal rates. The experimental observations are being used to develop improved high-resolution numerical models of reactive transport in variable-aperture fractures. The overall goal is to relate the coupled processes of DNAPL removal, permanganate consumption, MnO2 formation and associated changes in aperture and interface area, and to derive fracture-scale effective representations of these processes.

  13. L'astronomie dans le monde

    NASA Astrophysics Data System (ADS)

    Manfroid, J.

    2009-06-01

    ESA en route to the origins of the universe; Distance record; Primordial blob; Novae; Expansion of the universe; Flat or not?; Water on Mars; Massive bombardment; M87; CoRoT; EX Lupi; A first for ALMA; Kohoutek 4-55; Arp 194

  14. Non-invasive measurement of pulse wave velocity using transputer-based analysis of Doppler flow audio signals.

    PubMed

    Stewart, W R; Ramsey, M W; Jones, C J

    1994-08-01

    A system for the measurement of arterial pulse wave velocity is described. A personal computer (PC) plug-in transputer board is used to process the audio signals from two pocket Doppler ultrasound units. The transputer is used to provide a set of bandpass digital filters on two channels. The times of excursion of power through thresholds in each filter are recorded and used to estimate the onset of systolic flow. The system does not require an additional spectrum analyser and can work in real time. The transputer architecture provides for easy integration into any wider physiological measurement system.
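    A rough SciPy sketch of the signal chain described above (a bank of bandpass filters on the Doppler audio, with the time at which each band's short-term power crosses a threshold used to estimate the onset of systolic flow) is given below. The band edges, filter order, smoothing window, and threshold rule are illustrative assumptions; the original system ran digital filters on a transputer in real time rather than offline SciPy code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_onset_times(audio, fs, bands=((200, 400), (400, 800), (800, 1600)),
                     threshold_ratio=0.2, win=64):
    """For each bandpass channel, return the first time (in seconds) at which its
    short-term power exceeds a fraction of that channel's peak power."""
    onsets = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        x = sosfiltfilt(sos, audio)
        power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
        idx = int(np.argmax(power > threshold_ratio * power.max()))  # 0 if never crossed
        onsets.append(idx / fs)
    return onsets

# The systolic onset could then be taken as, e.g., the earliest or the median
# crossing time across the channels.
```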

  15. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772

  16. Hydraulic effects in a radiative atmosphere with ionization

    NASA Astrophysics Data System (ADS)

    Bhat, P.; Brandenburg, A.

    2016-03-01

    Context. In his 1978 paper, Eugene Parker postulated the need for hydraulic downward motion to explain magnetic flux concentrations at the solar surface. A similar process has also recently been seen in simplified (e.g., isothermal) models of flux concentrations from the negative effective magnetic pressure instability (NEMPI). Aims: We study the effects of partial ionization near the radiative surface on the formation of these magnetic flux concentrations. Methods: We first obtain one-dimensional (1D) equilibrium solutions using either a Kramers-like opacity or the H- opacity. The resulting atmospheres are then used as initial conditions in two-dimensional (2D) models where flows are driven by an imposed gradient force that resembles a localized negative pressure in the form of a blob. To isolate the effects of partial ionization and radiation, we ignore turbulence and convection. Results: Because of partial ionization, an unstable stratification always forms near the surface. We show that the extrema in the specific entropy profiles correspond to the extrema in the degree of ionization. In the 2D models without partial ionization, strong flux concentrations form just above the height where the blob is placed. Interestingly, in models with partial ionization, such flux concentrations always form at the surface well above the blob. This is due to the corresponding negative gradient in specific entropy. Owing to the absence of turbulence, the downflows reach transonic speeds. Conclusions: We demonstrate that, together with density stratification, the imposed source of negative pressure drives the formation of flux concentrations. We find that the inclusion of partial ionization affects the entropy profile dramatically, causing strong flux concentrations to form closer to the surface. We speculate that turbulence effects are needed to limit the strength of flux concentrations and homogenize the specific entropy to a stratification that is close to marginal.

  17. EXTERNAL COMPTON SCATTERING IN BLAZAR JETS AND THE LOCATION OF THE GAMMA-RAY EMITTING REGION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finke, Justin D., E-mail: justin.finke@nrl.navy.mil

    2016-10-20

    I study the location of the γ-ray emission in blazar jets by creating a Compton-scattering approximation that is valid for all anisotropic radiation fields in the Thomson through Klein–Nishina regimes, is highly accurate, and can speed up numerical calculations by up to a factor of ∼10. I apply this approximation to synchrotron self-Compton, external Compton scattering of photons from the accretion disk, broad line region (BLR), and dust torus. I use a stratified BLR model and include detailed Compton-scattering calculations of a spherical and flattened BLR. I create two dust torus models, one where the torus is an annulus and one where it is an extended disk. I present detailed calculations of the photoabsorption optical depth using my detailed BLR and dust torus models, including the full angle dependence. I apply these calculations to the emission from a relativistically moving blob traveling through these radiation fields. The ratio of γ-ray to optical flux produces a predictable pattern that could help locate the γ-ray emission region. I show that the bright flare from 3C 454.3 in 2010 November detected by the Fermi Large Area Telescope is unlikely to originate from a single blob inside the BLR. This is because it moves outside the BLR in a time shorter than the flare duration, although emission by multiple blobs inside the BLR is possible. Also, γ-rays are unlikely to originate from outside of the BLR, due to the scattering of photons from an extended dust torus, since the cooling timescale would be too long to explain the observed short variability.

  18. Vortex methods and vortex statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chorin, A.J.

    Vortex methods originated from the observation that in incompressible, inviscid, isentropic flow vorticity (or, more accurately, circulation) is a conserved quantity, as can be readily deduced from the absence of tangential stresses. Thus if the vorticity is known at time t = 0, one can deduce the flow at a later time by simply following it around. In this narrow context, a vortex method is a numerical method that makes use of this observation. Even more generally, the analysis of vortex methods leads to problems that are closely related to problems in quantum physics and field theory, as well as in harmonic analysis. A broad enough definition of vortex methods ends up encompassing much of science. Even the purely computational aspects of vortex methods encompass a range of ideas for which vorticity may not be the best unifying theme. The author restricts himself in these lectures to a special class of numerical vortex methods, those that are based on a Lagrangian transport of vorticity in hydrodynamics by smoothed particles ("blobs") and those whose understanding contributes to the understanding of blob methods. Vortex methods for inviscid flow lead to systems of ordinary differential equations that can be readily clothed in Hamiltonian form, both in three and two space dimensions, and they can preserve exactly a number of invariants of the Euler equations, including topological invariants. Their viscous versions resemble Langevin equations. As a result, they provide a very useful cartoon of statistical hydrodynamics, i.e., of turbulence, one that can to some extent be analyzed analytically and, more importantly, explored numerically, with important implications also for superfluids, superconductors, and even polymers. In the author's view, vortex "blob" methods provide the most promising path to the understanding of these phenomena.

  19. An immersed boundary-lattice Boltzmann model for biofilm growth and its impact on the NAPL dissolution in porous media

    NASA Astrophysics Data System (ADS)

    Benioug, M.; Yang, X.

    2017-12-01

    The evolution of microbial phase within porous medium is a complex process that involves growth, mortality, and detachment of the biofilm or attachment of moving cells. A better understanding of the interactions among biofilm growth, flow and solute transport and a rigorous modeling of such processes are essential for a more accurate prediction of the fate of pollutants (e.g. NAPLs) in soils. However, very few works are focused on the study of such processes in multiphase conditions (oil/water/biofilm systems). Our proposed numerical model takes into account the mechanisms that control bacterial growth and its impact on the dissolution of NAPL. An Immersed Boundary - Lattice Boltzmann Model (IB-LBM) is developed for flow simulations along with non-boundary conforming finite volume methods (volume of fluid and reconstruction methods) used for reactive solute transport. A sophisticated cellular automaton model is also developed to describe the spatial distribution of bacteria. A series of numerical simulations have been performed on complex porous media. A quantitative diagram representing the transitions between the different biofilm growth patterns is proposed. The bioenhanced dissolution of NAPL in the presence of biofilms is simulated at the pore scale. A uniform dissolution approach has been adopted to describe the temporal evolution of trapped blobs. Our simulations focus on the dissolution of NAPL in abiotic and biotic conditions. In abiotic conditions, we analyze the effect of the spatial distribution of NAPL blobs on the dissolution rate under different assumptions (blobs size, Péclet number). In biotic conditions, different conditions are also considered (spatial distribution, reaction kinetics, toxicity) and analyzed. The simulated results are consistent with those obtained from the literature.

  20. “Orphan” γ-Ray Flares and Stationary Sheaths of Blazar Jets

    NASA Astrophysics Data System (ADS)

    MacDonald, Nicholas R.; Jorstad, Svetlana G.; Marscher, Alan P.

    2017-11-01

    Blazars exhibit flares across the entire electromagnetic spectrum. Many γ-ray flares are highly correlated with flares detected at longer wavelengths; however, a small subset appears to occur in isolation, with little or no correlated variability at longer wavelengths. These “orphan” γ-ray flares challenge current models of blazar variability, most of which are unable to reproduce this type of behavior. MacDonald et al. have developed the Ring of Fire model to explain the origin of orphan γ-ray flares from within blazar jets. In this model, electrons contained within a blob of plasma moving relativistically along the spine of the jet inverse-Compton scatter synchrotron photons emanating off of a ring of shocked sheath plasma that enshrouds the jet spine. As the blob propagates through the ring, the scattering of the ring photons by the blob electrons creates an orphan γ-ray flare. This model was successfully applied to modeling a prominent orphan γ-ray flare observed in the blazar PKS 1510-089. To further support the plausibility of this model, MacDonald et al. presented a stacked radio map of PKS 1510-089 containing the polarimetric signature of a sheath of plasma surrounding the spine of the jet. In this paper, we extend our modeling and stacking techniques to a larger sample of blazars: 3C 273, 4C 71.01, 3C 279, 1055+018, CTA 102, and 3C 345, the majority of which have exhibited orphan γ-ray flares. We find that the model can successfully reproduce these flares, while our stacked maps reveal the existence of jet sheaths within these blazars.

  1. Aqueous stress-corrosion cracking of high-toughness D6AC steel

    NASA Technical Reports Server (NTRS)

    Gilbreath, W. P.; Adamson, M. J.

    1976-01-01

    The crack growth behavior of D6AC steel as a function of stress intensity, stress and corrosion history, and test technique, under sustained load in filtered natural seawater, 3.3 per cent sodium chloride solution, and distilled water, was investigated. Reported investigations of D6AC were considered in terms of the present study with emphasis on thermal treatment, specimen configuration, fracture toughness, crack-growth rates, initiation period, and threshold. Both threshold and growth kinetics were found to be relatively insensitive to these test parameters. The apparent incubation period was dependent on technique, both detection sensitivity and precracking stress intensity level.

  2. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

    Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their ability to solve the reconstruction problem from a limited number of projections, which allows the radiation exposure of patients during data acquisition to be reduced. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks of CT. To address them effectively, we adapted the method for sparse linear equations and sparse least squares (LSQR) with soft threshold filtering (STF), as well as the fast iterative shrinkage-thresholding algorithm (FISTA), to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
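
    The soft-thresholding step at the heart of STF/FISTA can be illustrated with a short sketch. The code below is a minimal, hypothetical example (not the authors' implementation) of a FISTA loop for a generic sparse linear system A x = b; the random system matrix, step size, and regularization weight are placeholders standing in for a real CT projector.

      import numpy as np

      def soft_threshold(x, t):
          # Shrink each coefficient toward zero by t (proximal operator of the L1 norm).
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def fista(A, b, lam=0.1, n_iter=100):
          # Minimal FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          y, t = x.copy(), 1.0
          for _ in range(n_iter):
              grad = A.T @ (A @ y - b)
              x_new = soft_threshold(y - grad / L, lam / L)
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              y = x_new + ((t - 1.0) / t_new) * (x_new - x)
              x, t = x_new, t_new
          return x

      # Toy usage with a random under-determined system (stand-in for a CT projector).
      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 100))
      x_true = np.zeros(100)
      x_true[rng.choice(100, 5, replace=False)] = 1.0
      b = A @ x_true
      x_hat = fista(A, b)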

  3. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    PubMed

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC task is assumed to choose the interval in which the greatest number of events occurred or randomly chooses among intervals which are tied for the greatest number of events. The subject is further assumed to count events over the duration of an evaluation interval that has the same timing and duration as the expected stimulus. The increase in the rate of the events caused by stimulation is proportional to the time-varying amplitude envelope of the bandpass-filtered signal raised to an exponent. We find the exponent to be about 3, consistent with our previous studies. This challenges models that are based on the assumption of the integration of a neural response that is directly proportional to the stimulus amplitude or proportional to its square (i.e., proportional to the stimulus intensity or power). Copyright © 2017 Elsevier B.V. All rights reserved.
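
    As a rough illustration of the Poisson counting account described above, the following sketch simulates a 3I-3AFC trial: events accumulate at a baseline rate in all three intervals and at an elevated, envelope-driven rate in the signal interval, and the observer picks the interval with the most events (ties broken at random). The rates, envelope, and scaling constants are illustrative assumptions, not the authors' fitted parameters; only the exponent of 3 is taken from the abstract.

      import numpy as np

      rng = np.random.default_rng(1)

      def trial_correct(envelope, dt, r0=20.0, k=5.0, exponent=3.0):
          # Expected event counts: baseline-only in two intervals,
          # baseline plus an envelope**exponent-driven rate in the signal interval.
          T = envelope.size * dt
          mean_noise = r0 * T
          mean_signal = mean_noise + k * np.sum(envelope ** exponent) * dt
          counts = np.array([rng.poisson(mean_signal),
                             rng.poisson(mean_noise),
                             rng.poisson(mean_noise)])
          best = np.flatnonzero(counts == counts.max())
          return rng.choice(best) == 0          # index 0 holds the signal interval

      dt = 1e-3
      env = np.sin(np.linspace(0, np.pi, 300)) ** 2      # a smooth 300-ms envelope
      pc = np.mean([trial_correct(env, dt) for _ in range(2000)])
      print(f"proportion correct: {pc:.2f}")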

  4. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.

  5. A multi-directional and multi-scale roughness filter to detect lineament segments on digital elevation models - analyzing spatial objects in R

    NASA Astrophysics Data System (ADS)

    Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke

    2016-04-01

    Automated lineament analysis on remotely sensed data requires two general processing steps: the identification of neighboring pixels showing high contrast and the conversion of these domains into lines. The target output is the lineaments' position, extent, and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments, defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Relief roughness is taken to be any deviation from a flat plane, as defined by a roughness threshold. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the differently oriented straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance between the edge cells. Multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are thus generated for each cell, and their maximum and minimum values are extracted. Negative values of the roughness parameter represent concave relief structures such as valleys; positive values represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter that highlights maximum and minimum roughness peaks and marks the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness difference. We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds to the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
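
    A highly simplified sketch of the directional roughness measure described above is given below (the published tool is written in R; this Python version is only illustrative). For one neighborhood size it computes, for each cell, the vertical deviation of the center from the straight line joining two opposite edge cells, normalized by their horizontal separation, over four orientations; thresholding the extreme values marks convex (ridge) and concave (valley) candidate cells. Window size, cell size, and the threshold are placeholder values, and edge cells wrap around the array borders.

      import numpy as np

      def directional_roughness(dem, half=3, cell=10.0):
          # For each of 4 orientations (E-W, N-S, and the two diagonals), compare the
          # center cell with the midpoint of the line joining two opposite edge cells.
          offsets = [(0, half), (half, 0), (half, half), (half, -half)]
          rough = np.full(dem.shape + (len(offsets),), np.nan)
          for k, (di, dj) in enumerate(offsets):
              dist = 2.0 * cell * np.hypot(di, dj)           # horizontal edge-to-edge distance
              a = np.roll(dem, ( di,  dj), axis=(0, 1))      # one edge cell
              b = np.roll(dem, (-di, -dj), axis=(0, 1))      # opposite edge cell
              rough[..., k] = (dem - 0.5 * (a + b)) / dist   # signed deviation / distance
          return rough.max(axis=-1), rough.min(axis=-1)      # convex (+) and concave (-) extremes

      dem = np.cumsum(np.random.default_rng(2).standard_normal((200, 200)), axis=0)
      r_max, r_min = directional_roughness(dem)
      ridges  = r_max >  0.05      # placeholder roughness threshold
      valleys = r_min < -0.05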

  6. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and in consequence control. Popular approaches either do not address properly the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observation, that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived by a solution of Fokker-Planck equation. We then propose the approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results for the variant with control. Operation is illustrated with numerical experiments and compared with classical discrete-continuous Kalman filter. Results of comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high order of approximation, state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.
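
    For readers who want to experiment with this setting, the sketch below simulates an Ornstein-Uhlenbeck process observed through a uniform quantizer and runs an ordinary discrete Kalman filter that simply treats the quantized sample as the measurement, i.e. the classical baseline the paper compares against; it does not reproduce the Galerkin-projected optimal filter itself, and all parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(3)
      theta, sigma, dt, q_step, n = 1.0, 0.5, 0.01, 0.4, 2000

      # Exact discretization of dX = -theta*X dt + sigma dW.
      a = np.exp(-theta * dt)
      q = sigma**2 * (1 - a**2) / (2 * theta)

      x = np.zeros(n)
      for k in range(1, n):
          x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
      z = q_step * np.round(x / q_step)          # quantized observations

      # Naive Kalman filter: quantization error modeled as white noise, variance q_step**2/12.
      r = q_step**2 / 12
      xh, p, est = 0.0, 1.0, np.empty(n)
      for k in range(n):
          xh, p = a * xh, a * a * p + q          # predict
          kgain = p / (p + r)                    # update with the quantized sample
          xh, p = xh + kgain * (z[k] - xh), (1 - kgain) * p
          est[k] = xh

      print("MSE:", np.mean((est - x) ** 2))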

  7. Comet Hartley 2 Looms Large in the Sky

    NASA Image and Video Library

    2010-11-03

    NASA's EPOXI mission took this image of comet Hartley 2 on Nov. 2, 2010. The spacecraft will fly by the comet on Nov. 4, 2010. The white blob and the halo around it are the comet's outer cloud of gas and dust, called a coma.

  8. Mathematical modeling and simulation of aquatic and aerial animal locomotion

    NASA Astrophysics Data System (ADS)

    Hou, T. Y.; Stredie, V. G.; Wu, T. Y.

    2007-08-01

    In this paper, we investigate the locomotion of fish and birds by applying a new unsteady, flexible wing theory that takes into account the strong nonlinear dynamics semi-analytically. We also make extensive comparative study between the new approach and the modified vortex blob method inspired from Chorin's and Krasny's work. We first implement the modified vortex blob method for two examples and then discuss the numerical implementation of the nonlinear analytical mathematical model of Wu. We will demonstrate that Wu's method can capture the nonlinear effects very well by applying it to some specific cases and by comparing with the experiments available. In particular, we apply Wu's method to analyze Wagner's result for a wing abruptly undergoing an increase in incidence angle. Moreover, we study the vorticity generated by a wing in heaving, pitching and bending motion. In both cases, we show that the new method can accurately represent the vortex structure behind a flying wing and its influence on the bound vortex sheet on the wing.

  9. A Multi-Wavelength Survey of Intermediate-Mass Star-Forming Regions

    NASA Astrophysics Data System (ADS)

    Lundquist, Michael J.; Kobulnicky, Henry A.; Kerton, Charles R.

    2015-01-01

    Current research into Galactic star formation has focused on either massive star-forming regions or nearby low-mass regions. We present results from a survey of Galactic intermediate-mass star-forming regions (IM SFRs). These regions were selected from IRAS colors that specify cool dust and large PAH contribution, suggesting that they produce stars up to but not exceeding about 8 solar masses. Using WISE data we have classified 984 candidate IM SFRs as star-like objects, galaxies, filamentary structures, or blobs/shells based on their mid-infrared morphologies. Focusing on the blobs/shells, we combined follow-up observations of deep near-infrared (NIR) imaging with optical and NIR spectroscopy to study the stellar content, confirming the intermediate-mass nature of these regions. We also gathered CO data from OSO and APEX to study the molecular content and dynamics of these regions. We compare these results to those of high-mass star formation in order to better understand their role in the star-formation paradigm.

  10. Modeling of Turbulence Effects on Liquid Jet Atomization and Breakup

    NASA Technical Reports Server (NTRS)

    Trinh, Huu P.; Chen, C. P.

    2005-01-01

    Recent experimental investigations and physical modeling studies have indicated that turbulence behavior within a liquid jet has considerable effects on the atomization process. This study aims to model the turbulence effect in the atomization process of a cylindrical liquid jet. Two widely used models, the Kelvin-Helmholtz (KH) instability of Reitz (blob model) and the Taylor-Analogy-Breakup (TAB) secondary droplet breakup model of O'Rourke et al., are extended to include turbulence effects. In the primary breakup model, the level of the turbulence effect on the liquid breakup depends on the characteristic scales and the initial flow conditions. For the secondary breakup, an additional turbulence force acting on parent drops is modeled and integrated into the TAB governing equation. The drop size formed in this breakup regime is estimated from the energy balance before and after the breakup occurs. This paper describes the theoretical development of the current models, called "T-blob" and "T-TAB", for primary and secondary breakup, respectively. Several assessment studies are also presented in this paper.

  11. Numerical Modeling of Turbulence Effects within an Evaporating Droplet in Atomizing Sprays

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.; Trinh, H. P.

    2006-01-01

    A new approach to account for finite thermal conductivity and turbulence effects within atomizing liquid sprays is presented in this paper. The model is an extension of the T-blob and T-TAB atomization/spray model of Trinh and Chen (2005). This finite conductivity model is based on the two-temperature film theory, in which the turbulence characteristics of the droplet are used to estimate the effective thermal diffusivity within the droplet phase. Both one-way and two-way coupled calculations were performed to investigate the performance of this model. The current evaporation model is incorporated into the T-blob atomization model of Trinh and Chen (2005) and implemented in an existing CFD Eulerian-Lagrangian two-way coupling numerical scheme. Validation studies were carried out by comparison with available experimental data for evaporating atomizing sprays, in terms of jet penetration, temperature field, and droplet SMD distribution within the spray. The validation results indicate the superiority of the finite-conductivity model for low-speed parallel-flow evaporating sprays.

  12. Blob-Spring Model for the Dynamics of Ring Polymer in Obstacle Environment

    NASA Astrophysics Data System (ADS)

    Lele, Ashish K.; Iyer, Balaji V. S.; Juvekar, Vinay A.

    2008-07-01

    The dynamical behavior of cyclic macromolecules in a fixed obstacle (FO) environment is very different from that of linear chains in the same topological environment: while the latter relax by snake-like reptational motion from their chain ends, the former, being endless, can relax only by contour length fluctuations. Duke, Obukhov and Rubinstein proposed a scaling model (the DOR model) to interpret the dynamical scaling exponents observed in Monte Carlo simulations of rings in a FO environment. We present a blob-spring model to describe the dynamics of a flexible, non-concatenated ring polymer in a FO environment, based on a theoretical formulation developed for the dynamics of an unentangled fractal polymer. We argue that the perpetual evolution of the ring perimeter by the motion of contour segments results in an extra frictional load. Our model predicts self-similar dynamics, with scaling exponents for the molecular weight dependence of the diffusion coefficient and relaxation times that agree with the scaling model proposed by Obukhov et al.

  13. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    NASA Astrophysics Data System (ADS)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract large amounts of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, together with a significant increase in the number of cameras, has dictated the need for traffic surveillance systems. Such a system can take over the burdensome monitoring task otherwise performed by human operators in a traffic monitoring centre. The main technique proposed in this paper is a multiple vehicle detection and segmentation approach for monitoring through Closed Circuit Television (CCTV) video. The system automatically segments vehicles extracted from heavy traffic scenes using optical flow estimation together with a blob analysis technique to detect moving vehicles. Prior to segmentation, the blob analysis technique computes the region of interest corresponding to each moving vehicle, which is then used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
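
    A minimal OpenCV-style sketch of the pipeline described above (optical-flow magnitude thresholding followed by blob analysis and bounding boxes) is shown below, assuming two consecutive grayscale frames are available. The function name, thresholds, and filter parameters are illustrative placeholders and this is not the authors' code.

      import cv2
      import numpy as np

      def detect_moving_vehicles(prev_gray, curr_gray, mag_thresh=2.0, min_area=400):
          # Dense optical flow between consecutive frames.
          flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
          # Threshold the motion magnitude to obtain a binary motion mask.
          mask = (mag > mag_thresh).astype(np.uint8) * 255
          mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
          # Blob analysis: connected components and their bounding boxes.
          n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
          boxes = [tuple(stats[i, :4]) for i in range(1, n)      # skip background label 0
                   if stats[i, cv2.CC_STAT_AREA] >= min_area]
          return boxes                                           # (x, y, w, h) per vehicle blob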

  14. The simulation of magnetic resonance elastography through atherosclerosis.

    PubMed

    Thomas-Seale, L E J; Hollis, L; Klatt, D; Sack, I; Roberts, N; Pankaj, P; Hoskins, P R

    2016-06-14

    The clinical diagnosis of atherosclerosis via the measurement of stenosis size is widely acknowledged as an imperfect criterion. The vulnerability of an atherosclerotic plaque to rupture is associated with its mechanical properties. The potential to image these mechanical properties using magnetic resonance elastography (MRE) was investigated through synthetic datasets. An image of the steady-state wave propagation, equivalent to the first harmonic, can be extracted directly from finite element analysis. Inversion of this displacement data yields a map of the shear modulus, known as an elastogram. The effects of varying plaque composition, stenosis size, Gaussian noise, filter thresholds and excitation frequency were explored. A decreasing mean shear modulus with increasing lipid composition was identified across all stenosis sizes. However, the inversion algorithm showed sensitivity to parameter variation, leading to artefacts which disrupted both the elastograms and the quantitative trends. As noise was increased up to a realistic level, the contrast was maintained between the fully fibrous and lipid plaques but lost between the interim compositions. Although incorporating a Butterworth filter improved the performance of the algorithm, restrictive filter thresholds resulted in a reduction of the sensitivity of the algorithm to composition and noise variation. Increasing the excitation frequency improved the technique's ability to image the magnitude of the shear modulus and identify a contrast between compositions. In conclusion, whilst the technique has the potential to image the shear modulus of atherosclerotic plaques, future research will require the integration of a heterogeneous inversion algorithm. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A Temporal Model of Level-Invariant, Tone-in-Noise Detection

    ERIC Educational Resources Information Center

    Berg, Bruce G.

    2004-01-01

    Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with ideas that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced.…

  16. Fast microcalcification detection in ultrasound images using image enhancement and threshold adjacency statistics

    NASA Astrophysics Data System (ADS)

    Cho, Baek Hwan; Chang, Chuho; Lee, Jong-Ha; Ko, Eun Young; Seong, Yeong Kyeong; Woo, Kyoung-Gu

    2013-02-01

    The existence of microcalcifications (MCs) is an important marker of malignancy in breast cancer. In spite of its benefits for mass detection in dense breasts, ultrasonography is believed to be unreliable for detecting MCs. For computer-aided diagnosis systems, however, accurate detection of MCs has the potential to improve performance both in Breast Imaging-Reporting and Data System (BI-RADS) lexicon description for calcifications and in malignancy classification. We propose a new efficient and effective method for MC detection using image enhancement and threshold adjacency statistics (TAS). The main idea of TAS is to threshold an image and count the number of white pixels with a given number of adjacent white pixels. Our contribution is to adopt TAS features and apply image enhancement to facilitate MC detection in ultrasound images. We employed fuzzy logic, top-hat filtering, and texture filtering to enhance the images for MCs. Using a total of 591 images, the classification accuracy of the proposed method in MC detection was 82.75%, comparable to that of Haralick texture features (81.38%). When combined, the performance was as high as 85.11%. In addition, our method also showed promise for mass classification when combined with existing features. In conclusion, the proposed method exploiting image enhancement and TAS features has the potential to handle MC detection in ultrasound images efficiently and to extend to real-time localization and visualization of MCs.
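
    The threshold adjacency statistics idea summarized above (threshold an image, then count white pixels by how many of their 8 neighbors are also white) can be sketched in a few lines. The threshold value is a placeholder, and the enhancement steps (fuzzy logic, top-hat, texture filters) are omitted here.

      import numpy as np
      from scipy.ndimage import convolve

      def threshold_adjacency_statistics(image, thresh):
          # Binarize, then count, for every white pixel, how many of its 8 neighbours are white.
          binary = (image > thresh).astype(np.uint8)
          kernel = np.ones((3, 3), dtype=np.uint8)
          kernel[1, 1] = 0
          neighbours = convolve(binary, kernel, mode="constant", cval=0)
          # TAS feature vector: fraction of white pixels having 0..8 white neighbours.
          counts = np.array([np.sum((binary == 1) & (neighbours == k)) for k in range(9)])
          total = max(binary.sum(), 1)
          return counts / total                 # 9-dimensional, normalized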

  17. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
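
    A compact sketch of the two-step correction described above is given below, assuming the STA gain map has already been expressed as z-scores relative to the chance distribution: pixels surviving a gain threshold are grouped into contiguous clusters, and only clusters whose summed absolute gain exceeds a cluster-mass threshold are retained. Both threshold values are placeholders, not the authors' settings.

      import numpy as np
      from scipy.ndimage import label

      def cluster_mass_threshold(gain_z, pixel_z=2.0, mass_thresh=10.0):
          # Step 1: keep pixels whose gain exceeds the chance-based (z-score) threshold.
          survivors = np.abs(gain_z) > pixel_z
          # Step 2: group contiguous survivors and keep clusters with enough summed gain.
          labels, n_clusters = label(survivors)
          cleaned = np.zeros_like(gain_z)
          for c in range(1, n_clusters + 1):
              members = labels == c
              if np.abs(gain_z[members]).sum() >= mass_thresh:
                  cleaned[members] = gain_z[members]
          return cleaned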

  18. PubMed search filters for the study of putative outdoor air pollution determinants of disease

    PubMed Central

    Curti, Stefania; Gori, Davide; Di Gregori, Valentina; Farioli, Andrea; Baldasseroni, Alberto; Fantini, Maria Pia; Christiani, David C; Violante, Francesco S; Mattioli, Stefano

    2016-01-01

    Objectives Several PubMed search filters have been developed in contexts other than environmental. We aimed at identifying efficient PubMed search filters for the study of environmental determinants of diseases related to outdoor air pollution. Methods We compiled a list of Medical Subject Headings (MeSH) and non-MeSH terms seeming pertinent to outdoor air pollutants exposure as determinants of diseases in the general population. We estimated proportions of potentially pertinent articles to formulate two filters (one ‘more specific’, one ‘more sensitive’). Their overall performance was evaluated as compared with our gold standard derived from systematic reviews on diseases potentially related to outdoor air pollution. We tested these filters in the study of three diseases potentially associated with outdoor air pollution and calculated the number of needed to read (NNR) abstracts to identify one potentially pertinent article in the context of these diseases. Last searches were run in January 2016. Results The ‘more specific’ filter was based on the combination of terms that yielded a threshold of potentially pertinent articles ≥40%. The ‘more sensitive’ filter was based on the combination of all search terms under study. When compared with the gold standard, the ‘more specific’ filter reported the highest specificity (67.4%; with a sensitivity of 82.5%), while the ‘more sensitive’ one reported the highest sensitivity (98.5%; with a specificity of 47.9%). The NNR to find one potentially pertinent article was 1.9 for the ‘more specific’ filter and 3.3 for the ‘more sensitive’ one. Conclusions The proposed search filters could help healthcare professionals investigate environmental determinants of medical conditions that could be potentially related to outdoor air pollution. PMID:28003291

  19. a Voxel-Based Filtering Algorithm for Mobile LIDAR Data

    NASA Astrophysics Data System (ADS)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, the mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. A voxel-based upward-growing process is then performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The voxel-based filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.

  20. Neuromorphic Kalman filter implementation in IBM’s TrueNorth

    NASA Astrophysics Data System (ADS)

    Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.

    2017-10-01

    Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.

  1. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei

    2013-08-01

    We present a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate splines (TPS) until no new ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark data, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with that of 17 other published filtering methods. Results indicate that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
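
    The core residual-based loop described above can be sketched for a single hierarchy level as follows, assuming a point cloud given as an (N, 3) array of x, y, z and an initial boolean mask of seed ground points (e.g., local minima). The cell resolution handling, the three-level hierarchy, and the threshold value are omitted or replaced with placeholders; this is only an illustration, not the MHC implementation.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def classify_ground_one_level(points, seed_mask, residual_thresh=0.3, max_iter=10):
          # points: (N, 3) array of x, y, z; seed_mask: boolean array of initial ground points.
          ground = seed_mask.copy()
          for _ in range(max_iter):
              # Thin-plate-spline surface interpolated from the current ground points.
              tps = RBFInterpolator(points[ground, :2], points[ground, 2],
                                    kernel="thin_plate_spline", smoothing=1.0)
              residuals = points[:, 2] - tps(points[:, :2])
              # Points close to (or below) the surface are added to the ground set.
              new_ground = ground | (residuals <= residual_thresh)
              if not np.any(new_ground & ~ground):   # stop when no new ground points are added
                  break
              ground = new_ground
          return ground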

  2. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  3. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.

    PubMed

    Bissmeyer, Susan R S; Goldsworthy, Raymond L

    2017-09-01

    Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.

  4. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance, and reduces the detectability of targets. Hence the need to preprocess the images with suitable filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on estimation of the minimum mean square error (MMSE); the improved Sigma filter with detection of strong scatterers, based on calculation of the coherency matrix to detect the different scatterers, in order to preserve the polarization signature and maintain structures necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, which combines two complementary filters, the refined Lee filter and the SWT, so that one filter can boost the results of the other. The originality of our work lies in the application of these methods to several types of images (amplitude, intensity, and complex, from satellite or airborne radar) and in the optimization of the wavelet filtering by adding a parameter to the calculation of the threshold. This parameter controls the filtering effect and provides a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band, and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges, and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.

  5. Absolute auditory threshold: testing the absolute.

    PubMed

    Heil, Peter; Matysiak, Artur

    2017-11-02

    The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Percolation thresholds and fractal dimensions for square and cubic lattices with long-range correlated defects

    NASA Astrophysics Data System (ADS)

    Zierenberg, Johannes; Fricke, Niklas; Marenz, Martin; Spitzner, F. P.; Blavatska, Viktoria; Janke, Wolfhard

    2017-12-01

    We study long-range power-law correlated disorder on square and cubic lattices. In particular, we present high-precision results for the percolation thresholds and the fractal dimension of the largest clusters as a function of the correlation strength. The correlations are generated using a discrete version of the Fourier filtering method. We consider two different metrics to set the length scales over which the correlations decay, showing that the percolation thresholds are highly sensitive to such system details. By contrast, we verify that the fractal dimension df is a universal quantity and unaffected by the choice of metric. We also show that for weak correlations, its value coincides with that for the uncorrelated system. In two dimensions we observe a clear increase of the fractal dimension with increasing correlation strength, approaching df→2 . The onset of this change does not seem to be determined by the extended Harris criterion.
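
    For reference, a minimal sketch of the (discrete) Fourier filtering method used to generate power-law correlated disorder is given below: uncorrelated Gaussian noise is filtered in Fourier space with a power-law kernel so that the resulting field has correlations decaying roughly as r^(-a), and the defect concentration is then fixed by thresholding the field at the appropriate quantile. Details such as the metric choice discussed in the paper are not reproduced, and the parameter values are placeholders.

      import numpy as np

      def correlated_defects(L=256, a=1.5, p_defect=0.4, seed=0):
          # Fourier filtering method on an L x L square lattice.
          rng = np.random.default_rng(seed)
          noise = rng.standard_normal((L, L))
          kx = np.fft.fftfreq(L)[:, None]
          ky = np.fft.fftfreq(L)[None, :]
          k = np.sqrt(kx**2 + ky**2)
          k[0, 0] = np.inf                          # suppress the zero mode
          spectrum = k ** ((a - 2.0) / 2.0)         # S(k) ~ k^(a-2) gives C(r) ~ r^(-a) in 2-D
          field = np.fft.ifft2(np.fft.fft2(noise) * spectrum).real
          # Occupy the lattice so that a fraction p_defect of sites are defects.
          cut = np.quantile(field, p_defect)
          return field < cut                        # boolean defect map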

  7. Five dimensional microstate geometries

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Wei

    In this thesis, we discuss the possibility of exploring the statistical mechanics description of a black hole from the point view of supergravity. Specifically, we study five dimensional microstate geometries of a black hole or black ring. At first, we review the method to find the general three-charge BPS supergravity solutions proposed by Bena and Warner. By applying this method, we show the classical merger of a black ring and black hole on [Special characters omitted.] base space in general are irreversible. On the other hand, we review the solutions on ambi-polar Gibbons-Hawking (GH) base which are bubbled geometries. There are many possible microstate geometries among the bubbled geometries. Particularly, we show that a generic blob of GH points that satisfy certain conditions can be either microstate geometry of a black hole or black ring without horizon. Furthermore, using the result of the entropy analysis in classical merger as a guide, we show that one can have a merger of a black-hole blob and a black-ring blob or two black-ring blobs that corresponds to a classical irreversible merger. From the irreversible mergers, we find the scaling solutions and deep microstates which are microstate geometries of a black hole/ring with macroscopic horizon. These solutions have the same AdS throats as classical black holes/rings but instead of having infinite throats, the throat is smoothly capped off at a very large depth with some local structure at the bottom. For solutions that produced from U (1) × U (1) invariant merger, the depth of the throat is limited by flux quantization. The mass gap is related with the depth of this throat and we show the mass gap of these solutions roughly match with the mass gap of the typical conformal-field-theory (CFT) states. Therefore, based on AdS/CFT correspondence, they can be dual geometries of the typical CFT states that contribute to the entropy of a black hole/ring. On the other hand, we show that for the solutions produced from more general merger (without U (1) × U (1) invariance), the throat can be arbitrarily deep. This presents a puzzle from the point view of AdS/CFT correspondence. We propose that this puzzle may be solved by some quantization of the angle or promoting the flux vectors to quantum spins. Finally, we suggest some future directions of further study including the puzzle of arbitrary long AdS throat and a general coarse-graining picture of microstate geometries.

  8. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are present alongside strong clutter returns. In light of the frequent inadequacy of spectral-processing-oriented clutter suppression methods, we model the clutter signal as multiple sinusoids plus Gaussian noise and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low-SNR-threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the performance of the Least Mean Square (LMS)-based adaptive filter is investigated for the proposed signal model, and promising simulation results testify to its potential for clutter rejection, leading to more accurate estimation of wind speed and thus a better assessment of the windshear hazard.
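
    As background for the LMS-based approach mentioned above, here is a generic normalized-LMS sketch (not the dissertation's clutter canceller): it adapts the weights of a one-step FIR predictor so that the prediction tracks the strong, sinusoid-like clutter component of the complex radar time series, and the prediction error is kept as the clutter-suppressed residual. Filter order and step size are placeholders.

      import numpy as np

      def nlms_cancel(x, order=8, mu=0.5, eps=1e-6):
          # Normalized LMS one-step predictor on a complex time series x.
          w = np.zeros(order, dtype=complex)
          residual = np.zeros(len(x), dtype=complex)
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]              # most recent samples first
              y = np.vdot(w, u)                     # predicted clutter sample (w^H u)
              e = x[n] - y                          # prediction error = residual signal
              w += mu * np.conj(e) * u / (np.vdot(u, u).real + eps)
              residual[n] = e
          return residual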

  9. Track Detection in Railway Sidings Based on MEMS Gyroscope Sensors

    PubMed Central

    Broquetas, Antoni; Comerón, Adolf; Gelonch, Antoni; Fuertes, Josep M.; Castro, J. Antonio; Felip, Damià; López, Miguel A.; Pulido, José A.

    2012-01-01

    The paper presents a two-step technique for real-time track detection in single-track railway sidings using low-cost MEMS gyroscopes. The objective is to reliably know the path the train has taken in a switch, diverted or main road, immediately after the train head leaves the switch. The signal delivered by the gyroscope is first processed by an adaptive low-pass filter that rejects noise and converts the temporal turn rate data in degree/second units into spatial turn rate data in degree/meter. The conversion is based on the travelled distance taken from odometer data. The filter is implemented to achieve a speed-dependent cut-off frequency to maximize the signal-to-noise ratio. Although direct comparison of the filtered turn rate signal with a predetermined threshold is possible, the paper shows that better detection performance can be achieved by processing the turn rate signal with a filter matched to the rail switch curvature parameters. Implementation aspects of the track detector have been optimized for real-time operation. The detector has been tested with both simulated data and real data acquired in railway campaigns. PMID:23443376

  10. Parametric adaptive filtering and data validation in the bar GW detector AURIGA

    NASA Astrophysics Data System (ADS)

    Ortolan, A.; Baggio, L.; Cerdonio, M.; Prodi, G. A.; Vedovato, G.; Vitale, S.

    2002-04-01

    We report on our experience gained in the signal processing of the resonant GW detector AURIGA. Signal amplitude and arrival time are estimated by means of a matched adaptive Wiener filter. The detector noise, entering in the filter set-up, is modelled as a parametric ARMA process; to account for the slow non-stationarity of the noise, the ARMA parameters are estimated on an hourly basis. A requirement for setting up an unbiased Wiener filter is the separation of time spans with 'almost Gaussian' noise from non-Gaussian and/or strongly non-stationary time spans. The separation algorithm consists basically of a variance estimate with the Chauvenet convergence method and a threshold on the kurtosis index. The subsequent validation of the data is strictly connected with the separation procedure: in fact, by injecting a large number of artificial GW signals into the 'almost Gaussian' part of the AURIGA data stream, we have demonstrated that the effective probability distributions of the signal-to-noise ratio, the χ2, and the time of arrival are those that are expected.

  11. Performance of zeolite ceramic membrane synthesized by wet mixing method as methylene blue dye wastewater filter

    NASA Astrophysics Data System (ADS)

    Masturi; Widodo, R. D.; Edie, S. S.; Amri, U.; Sidiq, A. L.; Alighiri, D.; Wulandari, N. A.; Susilawati; Amanah, S. N.

    2018-03-01

    The problem of water pollution persists in Indonesia, where the manufacturing sector is the biggest contributor to economic growth. One of many technological solutions is post-treating industrial wastewater with membrane filtering technology. We present the results of our fabrication of a ceramic membrane made from zeolite by simple mixing and heating. At a poring-agent loading of 5% of the total weight, the permeability stays around 2.8 mD (10^-14 m2), with only slight variation, attributed to the mixture being far below the percolation threshold. All our membranes achieve a remarkable rejection rate of above 90% for methylene blue as the solute waste in a water solvent.

  12. SEMICONDUCTOR TECHNOLOGY A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

    A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint from the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the method can determine the endpoint of the Cu CMP process.
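
    A schematic version of the two-stage processing described above, under assumed parameter values, might look like the following: the friction-related signal is denoised by wavelet soft thresholding, a scalar random-walk Kalman filter tracks the slowly varying level, and the endpoint is flagged when the innovation departs from its expected range. The wavelet, noise variances, and decision factor are illustrative, not the authors' choices.

      import numpy as np
      import pywt

      def detect_endpoint(signal, wavelet="db4", q_var=1e-4, r_var=1e-2, k_sigma=4.0):
          # 1) Wavelet soft-threshold denoising (universal threshold on detail coefficients).
          coeffs = pywt.wavedec(signal, wavelet)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          denoised = pywt.waverec(coeffs, wavelet)[: len(signal)]

          # 2) Scalar random-walk Kalman filter; monitor the innovation sequence.
          xh, p = denoised[0], 1.0
          for k, z in enumerate(denoised[1:], start=1):
              p += q_var                            # predict
              innovation = z - xh
              s = p + r_var                         # innovation variance
              if abs(innovation) > k_sigma * np.sqrt(s):
                  return k                          # endpoint: innovation out of range
              gain = p / s
              xh += gain * innovation
              p *= (1.0 - gain)
          return None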

  13. Hierarchical faunal filters: An approach to assessing effects of habitat and nonnative species on native fishes

    USGS Publications Warehouse

    Quist, M.C.; Rahel, F.J.; Hubert, W.A.

    2005-01-01

    Understanding factors related to the occurrence of species across multiple spatial and temporal scales is critical to the conservation and management of native fishes, especially for those species at the edge of their natural distribution. We used the concept of hierarchical faunal filters to provide a framework for investigating the influence of habitat characteristics and nonnative piscivores on the occurrence of 10 native fishes in streams of the North Platte River watershed in Wyoming. Three faunal filters were developed for each species: (i) large-scale biogeographic, (ii) local abiotic, and (iii) biotic. The large-scale biogeographic filter, composed of elevation and stream-size thresholds, was used to determine the boundaries within which each species might be expected to occur. Then, a local abiotic filter (i.e., habitat associations), developed using binary logistic-regression analysis, estimated the probability of occurrence of each species from features such as maximum depth, substrate composition, submergent aquatic vegetation, woody debris, and channel morphology (e.g., amount of pool habitat). Lastly, a biotic faunal filter was developed using binary logistic regression to estimate the probability of occurrence of each species relative to the abundance of nonnative piscivores in a reach. Conceptualising fish assemblages within a framework of hierarchical faunal filters is simple and logical, helps direct conservation and management activities, and provides important information on the ecology of fishes in the western Great Plains of North America. © Blackwell Munksgaard, 2004.

  14. Bacteria survival probability in bactericidal filter paper.

    PubMed

    Mansur-Azzam, Nura; Hosseinidoust, Zeinab; Woo, Su Gyeong; Vyhnalkova, Renata; Eisenberg, Adi; van de Ven, Theo G M

    2014-05-01

    Bactericidal filter papers offer the simplicity of gravity filtration to simultaneously eradicate microbial contaminants and particulates. We previously detailed the development of biocidal block copolymer micelles that could be immobilized on a filter paper to actively eradicate bacteria. Despite the many advantages offered by this system, its widespread use is hindered by its unknown mechanism of action which can result in non-reproducible outcomes. In this work, we sought to investigate the mechanism by which a certain percentage of Escherichia coli cells survived when passing through the bactericidal filter paper. Through the process of elimination, the possibility that the bacterial survival probability was controlled by the initial bacterial load or the existence of resistant sub-populations of E. coli was dismissed. It was observed that increasing the thickness or the number of layers of the filter significantly decreased bacterial survival probability for the biocidal filter paper but did not affect the efficiency of the blank filter paper (no biocide). The survival probability of bacteria passing through the antibacterial filter paper appeared to depend strongly on the number of collision between each bacterium and the biocide-loaded micelles. It was thus hypothesized that during each collision a certain number of biocide molecules were directly transferred from the hydrophobic core of the micelle to the bacterial lipid bilayer membrane. Therefore, each bacterium must encounter a certain number of collisions to take up enough biocide to kill the cell and cells that do not undergo the threshold number of collisions are expected to survive. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Infant Memory for Primitive Perceptual Features.

    ERIC Educational Resources Information Center

    Adler, Scott A.

    Textons are elongated blobs of specific color, angular orientation, ends of lines, and crossings of line segments that are proposed to be the perceptual building blocks of the visual system. A study was conducted to explore the relative memorability of different types and arrangements of textons, exploring the time course for the discrimination…

  16. Adapting My Weather Impacts Decision Aid (MyWIDA) to Additional Web Application Server Technologies

    DTIC Science & Technology

    2015-08-01

    Oracle. Further, some datatypes that are used with HSQLDB are not compatible with Oracle. Most notably, the VARBINARY datatype is not compatible...with Oracle, and instead was replaced with the Binary Large Object (BLOB) datatype. Oracle additionally has a different implementation of primary keys
  17. Blowing Away Bennett's Blob.

    ERIC Educational Resources Information Center

    Bridgman, Anne

    1987-01-01

    Bureau of Labor Statistics data show that schools are not top-heavy with administrators, contrary to the myth and to Secretary William Bennett's assertion. Administrators comprise 6.6 percent of school employees, and public education ranks 28th out of 35 occupations in terms of the percentage of administrative personnel. Accounting and bookkeeping lead with…

  18. Tracking Streamer Blobs Into the Heliosphere

    DTIC Science & Technology

    2010-05-20

    [Report form fields omitted.] The contributing institutions listed include NASA/GSFC (US), RAL (UK), UBHAM (UK), MPS (Germany), CSL (Belgium), IOTA (France), and IAS (France); in the US, funding was provided by NASA.

  19. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    DTIC Science & Technology

    2012-09-01

    interpreting the state vector as the health indicator, and a threshold is used on this variable in order to compute EOL (end-of-life) and RUL (remaining useful life). Here, we...End-of-life (EOL) would match the true spread and would not change from one experiment to another. This is, however, in practice impossible to achieve

  20. The role of suppression in amblyopia.

    PubMed

    Li, Jingrong; Thompson, Benjamin; Lam, Carly S Y; Deng, Daming; Chan, Lily Y L; Maehara, Goro; Woo, George C; Yu, Minbin; Hess, Robert F

    2011-06-13

    This study had three main goals: to assess the degree of suppression in patients with strabismic, anisometropic, and mixed amblyopia; to establish the relationship between suppression and the degree of amblyopia; and to compare the degree of suppression across the clinical subgroups within the sample. Using both standard measures of suppression (Bagolini lenses and neutral density [ND] filters, Worth 4-Dot test) and a new approach involving the measurement of dichoptic motion thresholds under conditions of variable interocular contrast, the degree of suppression in 43 amblyopic patients with strabismus, anisometropia, or a combination of both was quantified. There was good agreement between the quantitative measures of suppression made with the new dichoptic motion threshold technique and measurements made with standard clinical techniques (Bagolini lenses and ND filters, Worth 4-Dot test). The degree of suppression was found to correlate directly with the degree of amblyopia within our clinical sample, whereby stronger suppression was associated with a greater difference in interocular acuity and poorer stereoacuity. Suppression was not related to the type or angle of strabismus when this was present or the previous treatment history. These results suggest that suppression may have a primary role in the amblyopia syndrome and therefore have implications for the treatment of amblyopia.

  1. Mineral mapping in the Maherabad area, eastern Iran, using the HyMap remote sensing data

    NASA Astrophysics Data System (ADS)

    Molan, Yusuf Eshqi; Refahi, Davood; Tarashti, Ali Hoseinmardi

    2014-04-01

    This study applies matched filtering to HyMap airborne hyperspectral data to obtain a distribution map of alteration minerals in the Maherabad area and uses virtual verification to verify the results. The paper also introduces a "moving threshold", which seeks an appropriate threshold value for converting the grayscale images produced by the mapping methods into target and background pixels. The Maherabad area, located in the eastern part of the Lut block, is a Cu-Au porphyry system in which quartz-sericite-pyrite, argillic, and propylitic alteration are most common. A minimum noise fraction transform coupled with a pixel purity index was applied to the HyMap images to extract the endmembers of the alteration minerals, including kaolinite, montmorillonite, sericite (muscovite/illite), calcite, chlorite, epidote, and goethite. Since no portable spectrometer or laboratory spectral measurements were available to verify the remote sensing results, virtual verification was carried out using the USGS spectral library and showed an agreement of 83.19%. The comparison between the results of the matched filtering and X-ray diffraction (XRD) analyses also showed an agreement of 56.13%.
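
    A standard hyperspectral matched-filter score, followed by a simple threshold sweep in the spirit of the "moving threshold" idea (the authors' exact selection criterion is not reproduced here), can be sketched as follows. `cube` is assumed to be an (rows, cols, bands) reflectance array and `target` a library spectrum; both names are illustrative.

      import numpy as np

      def matched_filter_scores(cube, target):
          # Classic matched filter: whiten by the background covariance, project onto the target.
          h, w, b = cube.shape
          X = cube.reshape(-1, b).astype(float)
          mu = X.mean(axis=0)
          cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)      # regularized covariance
          w_mf = np.linalg.solve(cov, target - mu)
          w_mf /= (target - mu) @ w_mf                          # unit response to the target
          return ((X - mu) @ w_mf).reshape(h, w)

      def moving_threshold(scores, candidates=np.linspace(0.05, 0.95, 19)):
          # Sweep candidate thresholds and return the binary maps they produce; a selection
          # rule (e.g., agreement with reference data) would then pick one of them.
          return {float(t): scores > t for t in candidates}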

  2. Sensitivity to envelope-based interaural delays at high frequencies: center frequency affects the envelope rate-limitation.

    PubMed

    Bernstein, Leslie R; Trahiotis, Constantine

    2014-02-01

    Sensitivity to ongoing interaural temporal disparities (ITDs) was measured using bandpass-filtered pulse trains centered at 4600, 6500, or 9200 Hz. Save for minor differences in the exact center frequencies, those target stimuli were those employed by Majdak and Laback [J. Acoust. Soc. Am. 125, 3903-3913 (2009)]. At each center frequency, threshold ITD was measured for pulse repetition rates ranging from 64 to 609 Hz. The results and quantitative predictions by a cross-correlation-based model indicated that (1) at most pulse repetition rates, threshold ITD increased with center frequency, (2) the cutoff frequency of the putative envelope low-pass filter that determines sensitivity to ITD at high envelope rates appears to be inversely related to center frequency, and (3) both outcomes were accounted for by assuming that, independent of the center frequency, the listeners' decision variable was a constant criterion change in interaural correlation of the stimuli as processed internally. The finding of an inverse relation between center frequency and the envelope rate limitation, while consistent with much prior literature, runs counter to the conclusion reached by Majdak and Laback.

  3. Discrimination of nonlinear frequency glides.

    PubMed

    Thyer, Nick; Mahar, Doug

    2006-05-01

    Discrimination thresholds for short duration nonlinear tone glides that differed in glide rate were measured in order to determine whether cues related to rate of frequency change alone were sufficient for discrimination. Thresholds for rising and falling nonlinear glides of 50-ms and 400-ms duration, spanning three frequency excursions (0.5, 1, and 2 ERBs) at three center frequencies (0.5, 2.0, and 6.0 kHz) were measured. Results showed that glide discrimination was possible when duration and initial and final frequencies were identical. Thresholds were of a different order to those found in previous studies using linear frequency glides where endpoint frequency or duration information is available as added cues. The pattern of results was suggestive of a mechanism sensitive to spectral changes in time. Thresholds increased as the rate of transition span increased, particularly above spans of 1 ERB. The Weber fraction associated with these changes was 0.6-0.7. Overall, the results were consistent with an excitation pattern model of nonlinear glide detection that has difficulty in tracking signals with rapid frequency changes that exceed the width of an auditory filter and are of short duration.

  4. Circular lasers for telecommunications and rf/photonics applications

    NASA Astrophysics Data System (ADS)

    Griffel, Giora

    2000-04-01

    Following a review of ring resonator research in the past decade, we shall report a novel bi-level etching technique that permits the use of standard photolithography for coupling to deeply etched ring resonator structures. The technique is employed to demonstrate InGaAsP laterally coupled racetrack ring resonator lasers with a record-low threshold current of 66 mA. The racetrack lasers have curved sections of 150 micrometer radius with negligible bending loss. The lasers operate CW single mode up to nearly twice threshold with a 26 dB side-mode-suppression ratio. We shall also present a transfer matrix formalism for the analysis of ring resonator arrays and indicate application examples for flat-band filter synthesis.

  5. Centrifugal unbalance detection system

    DOEpatents

    Cordaro, Joseph V.; Reeves, George; Mets, Michael

    2002-01-01

    A system consisting of an accelerometer sensor attached to a centrifuge enclosure that senses vibrations and outputs a sine-wave signal whose amplitude and frequency characterize the vibration. The signal is passed through a pre-amp to convert it to a voltage signal, a low-pass filter that removes extraneous noise, and an A/D converter, and is then operated on by a processor running the detection algorithm. The algorithm interprets the amplitude and frequency associated with the signal; once an amplitude threshold has been exceeded, it begins to count cycles during a predetermined time period, and if the number of complete cycles exceeds the frequency threshold during that period, the system shuts down the centrifuge.
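
    The shutdown logic described above (an amplitude gate followed by cycle counting against a frequency threshold) can be sketched as follows. The sampling rate, threshold values, and the zero-crossing cycle counter are illustrative assumptions, not the patented implementation.

    ```python
    import numpy as np

    def check_unbalance(samples, fs, amp_threshold, freq_threshold, window_s=1.0):
        """Return True if the centrifuge should be shut down.

        samples        : digitized, low-pass-filtered accelerometer voltages
        fs             : sampling rate in Hz
        amp_threshold  : amplitude that arms the cycle counter
        freq_threshold : maximum allowed number of complete cycles per window
        """
        if np.max(np.abs(samples)) < amp_threshold:
            return False                       # amplitude gate not exceeded: do nothing
        n = int(window_s * fs)
        window = samples[-n:]                  # count cycles over the most recent window
        # A complete cycle is approximated here by a pair of consecutive rising zero crossings.
        rising = np.flatnonzero((window[:-1] < 0) & (window[1:] >= 0))
        cycles = len(rising) - 1 if len(rising) > 1 else 0
        return cycles > freq_threshold

    # Example: a 120 Hz vibration sampled at 5 kHz trips a 100-cycles-per-window limit.
    fs = 5000
    t = np.arange(0, 1.0, 1 / fs)
    signal = 2.0 * np.sin(2 * np.pi * 120 * t)
    print(check_unbalance(signal, fs, amp_threshold=1.0, freq_threshold=100))
    ```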

  6. Method for extracting long-equivalent wavelength interferometric information

    NASA Technical Reports Server (NTRS)

    Hochberg, Eric B. (Inventor)

    1991-01-01

    A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
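
    The digital analogue of the optical train described above (Fourier transformation, low-pass spatial filtering, inverse transformation) can be sketched with NumPy. The circular cutoff radius is an illustrative parameter; in the patented process this filtering is performed optically on the developed film.

    ```python
    import numpy as np

    def lowpass_spatial_filter(image, cutoff_frac=0.05):
        """Keep only low spatial frequencies of an image: the digital analogue of the
        optical Fourier-transform / low-pass spatial filter / inverse-transform train.

        cutoff_frac : radius of the pass band as a fraction of the image size.
        """
        F = np.fft.fftshift(np.fft.fft2(image))
        ny, nx = image.shape
        y, x = np.ogrid[:ny, :nx]
        r2 = (y - ny / 2) ** 2 + (x - nx / 2) ** 2
        cutoff = (cutoff_frac * min(ny, nx)) ** 2
        F[r2 > cutoff] = 0.0                        # block high spatial frequencies
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    # A sum of coarse and fine fringes: only the coarse, long-wavelength fringes survive,
    # analogous to recovering the long-equivalent-wavelength interferogram.
    y, x = np.mgrid[:256, :256]
    fringes = np.cos(2 * np.pi * x / 64) + np.cos(2 * np.pi * x / 4)
    coarse = lowpass_spatial_filter(fringes, cutoff_frac=0.05)
    ```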

  7. Sensor fusion using a hybrid median filter for artifact removal in intraoperative heart rate monitoring.

    PubMed

    Yang, Ping; Dumont, Guy A; Ansermino, J Mark

    2009-04-01

    Intraoperative heart rate is routinely measured independently by the ECG monitor, the pulse oximeter, and, if available, the invasive blood pressure monitor. The presence of artifacts in one or more of these signals, especially sustained artifacts, represents a critical challenge for physiological monitoring. When temporal filters are used to suppress sustained artifacts, unwanted delays or signal distortion are often introduced. The aim of this study was to remove artifacts and derive accurate estimates of the heart rate signal by using measurement redundancy. Heart rate measurements from multiple sensors and previous estimates that fall in a short moving window were treated as samples of the same heart rate. A hybrid median filter was used to align these samples into one ordinal series and to select the median as the fused estimate. This method can successfully remove artifacts that are sustained for less than half the length of the filter window, or artifacts that are sustained for a longer duration but present in no more than half of the sensors. The method was tested on both simulated and clinical cases. The performance of the hybrid median filter in the simulated study was compared with that of a two-step estimation process comprising a threshold-controlled artifact-removal module and a Kalman filter. The estimation accuracy of the hybrid median filter is better than that of the Kalman filter in the presence of artifacts. The hybrid median filter combines the structural and temporal information from two or more sensors and generates a robust estimate of heart rate without requiring strict assumptions about the signal's characteristics. This method is intuitive, computationally simple, and its performance can be easily adjusted. These considerable benefits make this method highly suitable for clinical use.
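
    A minimal sketch of the fusion step: the most recent readings from the available sensors are pooled with recent fused estimates in a short moving window, and the median of the pooled samples is returned. The window length, the example heart-rate values, and the class interface are illustrative assumptions.

    ```python
    from collections import deque
    import statistics

    class HybridMedianFuser:
        """Fuse heart-rate readings from several monitors by pooling the samples that
        fall in a short moving window and taking their median. As described in the
        abstract, artifacts sustained for less than half the window, or present in no
        more than half of the sensors, cannot move the median."""

        def __init__(self, window=5):
            self.history = deque(maxlen=window)    # past fused estimates

        def update(self, readings):
            """readings: latest heart-rate value from each sensor (None if missing)."""
            pool = [r for r in readings if r is not None] + list(self.history)
            estimate = statistics.median(pool)
            self.history.append(estimate)
            return estimate

    # ECG reads ~72, the pulse oximeter briefly drops to an artifactual 30,
    # and the arterial line reads ~71; the fused estimate stays near 72.
    fuser = HybridMedianFuser(window=5)
    for ecg, spo2, art in [(72, 73, 71), (72, 30, 71), (73, 30, 72)]:
        print(fuser.update([ecg, spo2, art]))
    ```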

  8. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    PubMed

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

    The exploration of retinal vessel structure is critically important because numerous diseases, including stroke, diabetic retinopathy (DR), and coronary heart disease, can damage the retinal vessel structure. The retinal vascular network is difficult to extract due to its spreading and diminishing geometry and the contrast variation within an image. The proposed technique consists of parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing stage, adaptive histogram equalization enhances the contrast between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula, optic disc, etc. To remove local noise, a difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales to enhance vessels of diverse widths. Segmentation is performed by applying improved Otsu thresholding to the high-boost filtered image and to Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using a raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE, and HRF datasets.
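
    A rough scikit-image sketch of the general flow (contrast enhancement, top-hat filtering, multi-scale Frangi enhancement, Otsu thresholding, and a pixel-wise AND) is given below. The high-boost filtering, VLM extraction, and postprocessing of the paper are omitted, and the footprint size, Frangi scales, and use of a black top-hat are illustrative stand-ins rather than the authors' parameter choices.

    ```python
    import numpy as np
    from skimage import exposure, filters, morphology

    def segment_vessels(green_channel):
        """green_channel: 2D float array in [0, 1] (green plane of a fundus image).
        Returns a rough binary vessel mask."""
        # Contrast enhancement between vessels and background.
        enhanced = exposure.equalize_adapthist(green_channel)

        # Black top-hat highlights small dark structures (vessels) and suppresses
        # large bright regions such as the optic disc.
        tophat = morphology.black_tophat(enhanced, morphology.disk(8))

        # Multi-scale Frangi vesselness; dark ridges are detected by default.
        vesselness = filters.frangi(enhanced, sigmas=range(1, 5))

        # Otsu thresholding of the two maps, combined with a pixel-wise AND,
        # analogous to the final AND between the VLM and the Frangi output.
        mask_a = tophat >= filters.threshold_otsu(tophat)
        mask_b = vesselness >= filters.threshold_otsu(vesselness)
        return np.logical_and(mask_a, mask_b)
    ```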

  9. A lysimeter-based approach to quantify the impact of climate change on soil hydrological processes

    NASA Astrophysics Data System (ADS)

    Slawitsch, Veronika; Steffen, Birk; Herndl, Markus

    2016-04-01

    The predicted climate change, involving increasing CO2 concentrations and increasing temperatures, will affect both vegetation and soil properties and thus the soil water balance. The aim of this work is to quantify the effects of changes in these climatic factors on soil hydrological processes and parameters. For this purpose, data from six high-precision weighable lysimeters will be used. The lysimeters are part of a Lysi-T-FACE concept, in which the free air is enriched with CO2 (FACE technique) and infrared heaters warm the plots to investigate the effects of increasing temperatures (T-FACE technique). The Lysi-T-FACE concept was developed at the "Clim Grass Site" at the HBLFA Raumberg-Gumpenstein (Styria, Austria) in 2011 and 2012, with a total of 54 experimental plots. These include six plots with lysimeters in which the two climatic factors are varied in different combinations. On the basis of these grassland lysimeters, the soil hydraulic parameters under the different experimental conditions will be investigated. The lysimeters are equipped with TDR-Trime sensors and temperature sensors combined with tensiometers at different depths. In addition, a mechanical snow-cover separation system is implemented to obtain a correct water balance in winter. To reliably infer differences between the lysimeters, a verification of functionality, a plausibility check of the lysimeter data, and adequate data corrections are needed. Both an automatic and a user-defined control, including the recently developed AWAT filter method (Adaptive Window and Adaptive Threshold filter), are combined with a visualisation tool based on the NI DIAdem software. For each lysimeter the raw data is classified into groups of matric potentials, soil water contents, and lysimeter weights. Values exceeding technical thresholds are eliminated and marked automatically. The manual data control is employed every day to obtain high-precision seepage water weights. The subsequent application of the AWAT filter reduces the oscillations in the calculated precipitation and evapotranspiration by up to 80%. The filtered data of the reference plot in June 2014 yield a precipitation of about 100 mm, whereas the unfiltered raw data yield approximately 170 mm, an obvious overestimation of precipitation. The resulting evapotranspiration amounts to slightly more than 100 mm with the filter and 200 mm without it over the same period. The total water balance (precipitation minus evapotranspiration) of the year 2014 obtained with the automatic and manual data filtering is 470 mm on the reference plot but only 358 mm on a plot where CO2 is enriched and temperature increased. In summary, these first results demonstrate that adequate data correction is a precondition for identifying changes in soil hydrological processes and properties.
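
    The general idea of smoothing the lysimeter weight series and thresholding the remaining changes before splitting them into precipitation (weight gains) and evapotranspiration (weight losses) can be illustrated with a plain moving-average filter and a fixed noise threshold. This is only an illustration of why unfiltered oscillations inflate both flux sums; it is not the published AWAT algorithm, whose window and threshold are adaptive, and the kg-to-mm equivalence assumes a 1 m2 lysimeter surface.

    ```python
    import numpy as np

    def weight_to_fluxes(weights, window=11, noise_threshold=0.05):
        """Split a lysimeter weight series (kg, ~mm of water for a 1 m2 surface)
        into cumulative precipitation and evapotranspiration after smoothing
        and noise thresholding (simplified, non-adaptive illustration)."""
        kernel = np.ones(window) / window
        smooth = np.convolve(weights, kernel, mode="same")     # suppress oscillations
        dw = np.diff(smooth)
        dw[np.abs(dw) < noise_threshold] = 0.0                 # discard sub-noise changes
        precipitation = dw[dw > 0].sum()                       # weight gains
        evapotranspiration = -dw[dw < 0].sum()                 # weight losses
        return precipitation, evapotranspiration
    ```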

  10. K-feldspar megacryst accumulations formed by mechanical instabilities in magma chamber margins, Asha pluton, NW Argentina

    NASA Astrophysics Data System (ADS)

    Rocher, Sebastián; Alasino, Pablo H.; Grande, Marcos Macchioli; Larrovere, Mariano A.; Paterson, Scott R.

    2018-07-01

    The Asha pluton, the oldest unit of the San Blas intrusive complex (Early Carboniferous), exhibits impressive examples of magmatic structures formed by accumulation of K-feldspar megacrysts, enclaves, and schlieren. Almost all recognized structures are meter-scale, vertically elongate bodies of variable shapes defined as fingers, trails, drips, and blobs. They preferentially developed near the external margin of the Asha pluton and generally are superimposed by chamber-wide magmatic fabrics. They mostly have circular or sub-circular transverse sections with an internal fabric defined by margin-parallel, inward-dipping concentric foliation and steeply plunging lineation at upper parts and flat foliation at lower parts. The concentration of megacrysts usually grades from upper sections, where they appear in a proportion similar to the host granite, to highly packed accumulations of K-feldspar along with grouped flattened enclaves at lower ends. These features suggest an origin by downward localized multiphase magmatic flow, narrowing and 'log jamming', and gravitational sinking of grouped crystals and enclaves, with compaction and filter pressing as main mechanisms of melt removal. Crystal size distribution analysis supports field observations arguing for a mechanical origin of accumulations. The magmatic structures of the Asha pluton represent mechanical instabilities generated by thermal and compositional convection, probably owing to cooling and crystallization near the pluton margins during early stages of construction of the intrusive complex.

  11. Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT images

    NASA Astrophysics Data System (ADS)

    Oda, Hirohisa; Roth, Holger R.; Bhatia, Kanwal K.; Oda, Masahiro; Kitasaka, Takayuki; Iwano, Shingo; Homma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Schnabel, Julia A.; Mori, Kensaku

    2018-02-01

    We propose a novel mediastinal lymph node detection and segmentation method for chest CT volumes based on fully convolutional networks (FCNs). Most lymph node detection methods are based on filters for blob-like structures, which are not specific to lymph nodes. The 3D U-Net is a recent example of state-of-the-art 3D FCNs. The 3D U-Net can be trained to learn the appearance of lymph nodes and to output lymph node likelihood maps for input CT volumes. However, it is prone to oversegmentation of each lymph node due to the strong data imbalance between lymph nodes and the remaining part of the CT volumes. To moderate the size imbalance between the target classes, we train the 3D U-Net using not only lymph node annotations but also other anatomical structures (lungs, airways, aortic arches, and pulmonary arteries) that can be extracted robustly in an automated fashion. We applied the proposed method to 45 cases of contrast-enhanced chest CT volumes. Experimental results showed that 95.5% of lymph nodes were detected with 16.3 false positives per CT volume. The segmentation results showed that the proposed method can prevent oversegmentation, achieving an average Dice score of 52.3 +/- 23.1%, compared with 49.2 +/- 23.8% for the baseline method.
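
    The abstract does not state the training loss, but one way to see how adding extra anatomical classes moderates the voxel imbalance is a multi-class soft Dice loss, where each class (background, lymph node, lung, airway, aortic arch, pulmonary artery) contributes equally regardless of its voxel count. The PyTorch sketch below is illustrative only and is not the authors' training code.

    ```python
    import torch

    def multiclass_soft_dice_loss(logits, target, eps=1e-6):
        """logits: (B, C, D, H, W) raw network outputs; target: (B, D, H, W) class indices.
        Averaging the Dice score over classes gives the small lymph-node class the same
        weight as the large background and organ classes."""
        num_classes = logits.shape[1]
        probs = torch.softmax(logits, dim=1)
        one_hot = torch.nn.functional.one_hot(target, num_classes)   # (B, D, H, W, C)
        one_hot = one_hot.permute(0, 4, 1, 2, 3).float()             # (B, C, D, H, W)
        dims = (0, 2, 3, 4)
        intersection = (probs * one_hot).sum(dims)
        denom = probs.sum(dims) + one_hot.sum(dims)
        dice_per_class = (2 * intersection + eps) / (denom + eps)
        return 1.0 - dice_per_class.mean()

    # Six classes: background, lymph node, lung, airway, aortic arch, pulmonary artery.
    logits = torch.randn(1, 6, 16, 64, 64)
    target = torch.randint(0, 6, (1, 16, 64, 64))
    loss = multiclass_soft_dice_loss(logits, target)
    ```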

  12. ASSESSMENT OF LOW-FREQUENCY HEARING WITH NARROW-BAND CHIRP EVOKED 40-HZ SINUSOIDAL AUDITORY STEADY STATE RESPONSE

    PubMed Central

    Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.

    2016-01-01

    Objective To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design Tone bursts and narrow-band chirps were used to evoke auditory brainstem response (tb-ABR) and 40-Hz s-ASSR thresholds, respectively, with the Kalman-weighted filtering technique, and these were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated measures ANOVA with post-hoc t-tests, and simple regression analyses, were performed for each of the three stimulus frequencies. Study Sample Thirty young adults aged 18–25 with normal hearing participated in this study. Results When 4000 equivalent response averages were used, the mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. The mean tb-ABR thresholds at 2000 and 4000 Hz were lower by 11–15 dB when twice as many equivalent response averages were used, while the mean tb-ABR thresholds at 500 Hz were indistinguishable regardless of additional response averaging. Conclusion Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555

  13. Two-Stage Processing of Sounds Explains Behavioral Performance Variations due to Changes in Stimulus Contrast and Selective Attention: An MEG Study

    PubMed Central

    Kauramäki, Jaakko; Jääskeläinen, Iiro P.; Hänninen, Jarno L.; Auranen, Toni; Nummenmaa, Aapo; Lampinen, Jouko; Sams, Mikko

    2012-01-01

    Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds and of 1020-Hz target tones that occasionally replaced the 1000-Hz standard tones of 300-ms duration embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to the tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms time range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds. PMID:23071654
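
    A sketch of the stimulus construction: broadband noise with a spectral notch around 1 kHz, plus a 300-ms tone embedded at the notch center. The filter order, notch edges, levels, and temporal placement are illustrative values, not the study's exact parameters.

    ```python
    import numpy as np
    from scipy import signal

    fs = 44100
    dur_noise, dur_tone = 2.0, 0.3

    # Continuous noise masker with a spectral notch centered on 1000 Hz.
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(int(fs * dur_noise))
    notch_lo, notch_hi = 800.0, 1250.0     # the notch width is varied parametrically
    sos = signal.butter(6, [notch_lo, notch_hi], btype="bandstop", fs=fs, output="sos")
    masker = signal.sosfiltfilt(sos, noise)

    # 300-ms standard (1000 Hz) or target (1020 Hz) tone placed at the notch center.
    t = np.arange(int(fs * dur_tone)) / fs
    standard = np.sin(2 * np.pi * 1000.0 * t)
    target = np.sin(2 * np.pi * 1020.0 * t)

    # Embed the tone at the temporal midpoint of the masker (illustrative placement).
    start = int(fs * (dur_noise - dur_tone) / 2)
    stimulus = masker.copy()
    stimulus[start:start + len(standard)] += 0.5 * standard
    ```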

  14. Computer-aided detection of lung nodules via 3D fast radial transform, scale space representation, and Zernike MIP classification.

    PubMed

    Riccardi, Alessandro; Petkov, Todor Sergueev; Ferri, Gianluca; Masotti, Matteo; Campanini, Renato

    2011-04-01

    The authors presented a novel system for automated nodule detection in lung CT exams. The approach is based on (1) a lung tissue segmentation preprocessing step, composed of histogram thresholding, seeded region growing, and mathematical morphology; (2) a filtering step, whose aim is the preliminary detection of candidate nodules (via 3D fast radial filtering) and estimation of their geometrical features (via scale space analysis); and (3) a false positive reduction (FPR) step, comprising a heuristic FPR, which applies thresholds based on geometrical features, and a supervised FPR, which is based on support vector machines classification, which in turn, is enhanced by a feature extraction algorithm based on maximum intensity projection processing and Zernike moments. The system was validated on 154 chest axial CT exams provided by the lung image database consortium public database. The authors obtained correct detection of 71% of nodules marked by all radiologists, with a false positive rate of 6.5 false positives per patient (FP/patient). A higher specificity of 2.5 FP/patient was reached with a sensitivity of 60%. An independent test on the ANODE09 competition database obtained an overall score of 0.310. The system shows a novel approach to the problem of lung nodule detection in CT scans: It relies on filtering techniques, image transforms, and descriptors rather than region growing and nodule segmentation, and the results are comparable to those of other recent systems in literature and show little dependency on the different types of nodules, which is a good sign of robustness.

  15. Attributing Tropical Cyclogenesis to Equatorial Waves in the Western North Pacific

    NASA Technical Reports Server (NTRS)

    Schreck, Carl J., III; Molinari, John; Mohr, Karen I.

    2009-01-01

    The direct influences of equatorial waves on the genesis of tropical cyclones are evaluated. Tropical cyclogenesis is attributed to an equatorial wave when the filtered rainfall anomaly exceeds a threshold value at the genesis location. For an attribution threshold of 3 mm/day, 51% of warm season western North Pacific tropical cyclones are attributed to tropical depression (TD)-type disturbances, 29% to equatorial Rossby waves, 26% to mixed Rossby-Gravity waves, 23% to Kelvin waves, 13% to the Madden-Julian oscillation (MJO), and 19% are not attributed to any equatorial wave. The fraction of tropical cyclones attributed to TD-type disturbances is consistent with previous findings. Past studies have also demonstrated that the MJO significantly modulates tropical cyclogenesis, but fewer storms are attributed to the MJO than any other wave type. This disparity arises from the difference between attribution and modulation. The MJO produces broad regions of favorable conditions for cyclogenesis, but the MJO alone might not determine when and where a storm will develop within these regions. Tropical cyclones contribute less than 17% of the power in any portion of the equatorial wave spectrum because tropical cyclones are relatively uncommon equatorward of 15° latitude. In regions where they are active, however, tropical cyclones can contribute more than 20% of the warm season rainfall and up to 50% of the total variance. Tropical cyclone-related anomalies can significantly contaminate wave-filtered precipitation at the location of genesis. To mitigate this effect, the tropical cyclone-related rainfall anomalies were removed before filtering in this study.
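
    The attribution rule itself is simple: a genesis event is attributed to every wave type whose filtered rainfall anomaly at the genesis time and location meets or exceeds the threshold (3 mm/day above), so one storm can be attributed to several waves. A minimal sketch, with hypothetical field names for the wave types:

    ```python
    ATTRIBUTION_THRESHOLD = 3.0  # mm/day

    def attribute_genesis(filtered_anomalies, threshold=ATTRIBUTION_THRESHOLD):
        """filtered_anomalies: dict mapping a wave type (e.g. 'TD', 'ER', 'MRG',
        'Kelvin', 'MJO') to its filtered rainfall anomaly (mm/day) at the genesis
        time and location. Returns the wave types the genesis is attributed to."""
        attributed = [wave for wave, anomaly in filtered_anomalies.items()
                      if anomaly >= threshold]
        return attributed or ["none"]

    # Example: a storm forming where the TD-type and Kelvin anomalies are strong.
    print(attribute_genesis({"TD": 6.2, "ER": 1.1, "MRG": 0.4, "Kelvin": 3.5, "MJO": 2.0}))
    ```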

  16. Robust extrema features for time-series data analysis.

    PubMed

    Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N

    2013-06-01

    The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for the analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing) and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" by either domain knowledge or intuition, we explicitly optimize the filter on training time series to maximize the robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
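
    The encode step described above (filter the series, then threshold to keep only robust extrema) can be sketched with SciPy. The moving-average kernel and the prominence threshold here are generic placeholders, not the optimized filter derived via the eigenvalue problem in the paper.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def robust_extrema(series, kernel=None, prominence=0.5):
        """Smooth a time series with a (here: generic) filter, then keep only the
        maxima and minima whose prominence exceeds a threshold."""
        if kernel is None:
            kernel = np.ones(9) / 9.0      # placeholder for the learned, optimized filter
        smooth = np.convolve(series, kernel, mode="same")
        maxima, _ = find_peaks(smooth, prominence=prominence)
        minima, _ = find_peaks(-smooth, prominence=prominence)
        return np.sort(np.concatenate([maxima, minima])), smooth

    # Noisy sinusoid: the thresholded extrema survive the added noise.
    t = np.linspace(0, 6 * np.pi, 600)
    noisy = np.sin(t) + 0.2 * np.random.default_rng(2).standard_normal(t.size)
    extrema_idx, _ = robust_extrema(noisy)
    ```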

  17. Nonlinear multilayers as optical limiters

    NASA Astrophysics Data System (ADS)

    Turner-Valle, Jennifer Anne

    1998-10-01

    In this work we present a non-iterative technique for computing the steady-state optical properties of nonlinear multilayers, and we examine nonlinear multilayer designs for optical limiters. Optical limiters are filters with intensity-dependent transmission designed to curtail the transmission of incident light above a threshold irradiance value in order to protect optical sensors from damage due to intense light. Thin-film multilayers composed of nonlinear materials exhibiting an intensity-dependent refractive index are used as the basis for the optical limiter designs in order to enhance the nonlinear filter response by magnifying the electric field in the nonlinear materials through interference effects. The nonlinear multilayer designs considered in this work are based on linear optical interference filter designs, which are selected for their spectral properties and electric field distributions. Quarter-wave stacks and cavity filters are examined for their suitability as sensor protectors and for their manufacturability. The underlying non-iterative technique used to calculate the optical response of these filters derives from recognizing that the multi-valued calculation of output irradiance as a function of incident irradiance may be turned into a single-valued calculation of incident irradiance as a function of output irradiance. Finally, the benefits and drawbacks of using nonlinear multilayers for optical limiting are examined and future research directions are proposed.
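
    The computational trick, parameterizing by output rather than input irradiance, can be illustrated with a toy nonlinear Fabry-Perot-style etalon whose round-trip phase depends on the transmitted irradiance. The Airy transmission model and all constants below are illustrative assumptions; they are not the multilayer transfer formalism developed in the thesis.

    ```python
    import numpy as np

    # Airy-type transmission of an etalon whose round-trip phase shifts with intensity.
    F = 10.0       # coefficient of finesse (illustrative)
    phi0 = 0.0     # low-intensity round-trip phase (illustrative)
    gamma = 0.5    # nonlinear phase per unit output irradiance (illustrative)

    def transmission(phi):
        return 1.0 / (1.0 + F * np.sin(phi / 2.0) ** 2)

    # Instead of solving the implicit equation I_out = T(phi(I_out)) * I_in for I_out,
    # sweep I_out and compute the single-valued incident irradiance it requires.
    I_out = np.linspace(1e-6, 5.0, 2000)
    phi = phi0 + gamma * I_out
    I_in = I_out / transmission(phi)

    # The (I_in, I_out) pairs trace the limiting curve, including any multivalued
    # branches, without iteration; plotting I_out against I_in shows the clamping.
    ```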

  18. Polarization division multiplexing for optical data communications

    NASA Astrophysics Data System (ADS)

    Ivanovich, Darko; Powell, Samuel B.; Gruev, Viktor; Chamberlain, Roger D.

    2018-02-01

    Multiple parallel channels are ubiquitous in optical communications, with spatial division multiplexing (separate physical paths) and wavelength division multiplexing (separate optical wavelengths) being the most common forms. Here, we investigate the viability of polarization division multiplexing, the separation of distinct parallel optical communication channels through the polarization properties of light. Two or more linearly polarized optical signals (at different polarization angles) are transmitted through a common medium, filtered using aluminum nanowire optical filters fabricated on-chip, and received using individual silicon photodetectors (one per channel). The entire receiver (including optics) is compatible with standard CMOS fabrication processes. The filter model is based upon an input optical signal formed as the sum of the Stokes vectors for each individual channel, transformed by the Mueller matrix that models the filter proper, resulting in an output optical signal that impinges on each photodiode. The results show that two- and three-channel systems can operate with a fixed-threshold comparator in the receiver circuit, but four-channel systems (and larger) will require channel coding of some form. For example, in the four-channel system, 10 of 16 distinct bit patterns are separable by the receiver. The model supports investigation of the range of variability tolerable in the fabrication of the on-chip polarization filters.
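
    The receiver model described above (a sum of Stokes vectors transformed by the Mueller matrix of each on-chip polarizer, with the photodiode reading the resulting intensity) can be sketched with ideal linear polarizers. The ideal-polarizer Mueller matrix, the channel angles, and the on/off bit amplitudes are simplifying assumptions; real nanowire filters have finite extinction ratios.

    ```python
    import numpy as np

    def stokes_linear(intensity, angle):
        """Stokes vector of fully linearly polarized light at `angle` (radians)."""
        return intensity * np.array([1.0, np.cos(2 * angle), np.sin(2 * angle), 0.0])

    def mueller_polarizer(angle):
        """Mueller matrix of an ideal linear polarizer oriented at `angle` (radians)."""
        c, s = np.cos(2 * angle), np.sin(2 * angle)
        return 0.5 * np.array([[1, c,     s,     0],
                               [c, c * c, c * s, 0],
                               [s, c * s, s * s, 0],
                               [0, 0,     0,     0]])

    # Two channels at 0 and 90 degrees, each carrying one on/off bit.
    tx_angles = np.deg2rad([0.0, 90.0])
    bits = [1, 0]
    incident = sum(stokes_linear(b, a) for b, a in zip(bits, tx_angles))

    # Each receiver pixel has a polarizer matched to one channel; the photodiode
    # responds to the S0 (total intensity) component of the filtered output.
    for rx_angle in tx_angles:
        detected = mueller_polarizer(rx_angle) @ incident
        print(np.rad2deg(rx_angle), detected[0])   # compare to a fixed threshold
    ```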

  19. The Role of Copy Number Variation in Susceptibility to Amyotrophic Lateral Sclerosis: Genome-Wide Association Study and Comparison with Published Loci

    PubMed Central

    Wain, Louise V.; Pedroso, Inti; Landers, John E.; Breen, Gerome; Shaw, Christopher E.; Leigh, P. Nigel; Brown, Robert H.

    2009-01-01

    Background The genetic contribution to sporadic amyotrophic lateral sclerosis (ALS) has not been fully elucidated. There are increasing efforts to characterise the role of copy number variants (CNVs) in human diseases; two previous studies concluded that CNVs may influence risk of sporadic ALS, with multiple rare CNVs more important than common CNVs. A little-explored issue surrounding genome-wide CNV association studies is that of post-calling filtering and merging of raw CNV calls. We undertook simulations to define filter thresholds and considered optimal ways of merging overlapping CNV calls for association testing, taking into consideration possibly overlapping or nested, but distinct, CNVs and boundary estimation uncertainty. Methodology and Principal Findings In this study we screened Illumina 300K SNP genotyping data from 730 ALS cases and 789 controls for copy number variation. Following quality control filters using thresholds defined by simulation, a total of 11321 CNV calls were made across 575 cases and 621 controls. Using region-based and gene-based association analyses, we identified several loci showing nominally significant association. However, the choice of criteria for combining calls for association testing has an impact on the ranking of the results by their significance. Several loci which were previously reported as being associated with ALS were identified here. However, of another 15 genes previously reported as exhibiting ALS-specific copy number variation, only four exhibited copy number variation in this study. Potentially interesting novel loci, including EEF1D, a translation elongation factor involved in the delivery of aminoacyl tRNAs to the ribosome (a process which has previously been implicated in genetic studies of spinal muscular atrophy) were identified but must be treated with caution due to concerns surrounding genomic location and platform suitability. Conclusions and Significance Interpretation of CNV association findings must take into account the effects of filtering and combining CNV calls when based on early genome-wide genotyping platforms and modest study sizes. PMID:19997636
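
    One of the post-calling steps discussed above, combining overlapping CNV calls into regions for association testing, can be sketched as a simple interval merge. The tuple layout and the "any overlap" merging rule are illustrative; the study weighs several merging criteria precisely because the choice affects the ranking of results.

    ```python
    from collections import defaultdict

    def merge_cnv_calls(calls):
        """calls: iterable of (chrom, start, end, sample_id) CNV calls after QC filtering.
        Returns merged regions as (chrom, start, end, set_of_carrier_samples)."""
        by_chrom = defaultdict(list)
        for chrom, start, end, sample in calls:
            by_chrom[chrom].append((start, end, sample))

        regions = []
        for chrom, items in by_chrom.items():
            items.sort()
            cur_start, cur_end, carriers = items[0][0], items[0][1], {items[0][2]}
            for start, end, sample in items[1:]:
                if start <= cur_end:              # any overlap: extend the current region
                    cur_end = max(cur_end, end)
                    carriers.add(sample)
                else:                             # gap: close the region, start a new one
                    regions.append((chrom, cur_start, cur_end, carriers))
                    cur_start, cur_end, carriers = start, end, {sample}
            regions.append((chrom, cur_start, cur_end, carriers))
        return regions

    calls = [("8", 100, 500, "case1"), ("8", 450, 900, "ctrl3"), ("8", 2000, 2400, "case7")]
    print(merge_cnv_calls(calls))
    ```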

  20. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way of detecting and tracking the human full body and body parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed to deal with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain a high probability of detection and a low probability of false alarm for full-body detection. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body parts (arms, legs, torso, head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm that first conducts an RGB to YIQ transformation and then applies a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track human skin-color-related body parts such as the face, neck, arms, and legs. Reliable body-part (e.g., head) detection allows us to continuously track an individual person even when multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body parts also allow us to extract important local constellation features of the body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose (e.g., standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
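
    The skin-color step can be illustrated with the standard NTSC RGB-to-YIQ transform followed by an I minus Q difference image, a threshold, and simple morphological cleanup. The transform matrix is the standard NTSC approximation; the fixed threshold and the 3x3 structuring elements are illustrative assumptions, not the authors' tuned parameters.

    ```python
    import numpy as np
    from scipy import ndimage

    # Standard NTSC RGB -> YIQ transform (rows give Y, I, Q).
    RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                        [0.596, -0.274, -0.322],
                        [0.211, -0.523,  0.312]])

    def skin_subpatch_mask(rgb, threshold=0.05):
        """rgb: (H, W, 3) float image in [0, 1]. Returns a boolean skin-candidate mask.

        Skin tones tend to produce a strong positive I and a weak Q, so the I - Q
        difference image responds to skin; the fixed threshold is illustrative."""
        yiq = rgb @ RGB2YIQ.T
        fused = yiq[..., 1] - yiq[..., 2]           # subtractive I/Q fusion
        mask = fused > threshold
        # Morphological cleanup of the binary sub-patches.
        mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
        mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
        return mask
    ```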
